Document that the `X-Shaman-Original-Filename` HTTP header (used when
submitting files to the Shaman server) should either be ASCII or encoded
using RFC 2047.
No functional changes.
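As an aside (not part of this change), encoding the header value could look roughly like this; the helper below is a minimal sketch of RFC 2047 "B" encoding and ignores the 75-character limit per encoded word:

```
// Minimal sketch: send plain ASCII filenames as-is, otherwise wrap them in an
// RFC 2047 encoded word (UTF-8, base64). Ignores the per-word length limit.
function encodeOriginalFilename(filename) {
    if (/^[\x20-\x7e]*$/.test(filename)) {
        return filename; // plain ASCII can be sent unmodified
    }
    const utf8 = new TextEncoder().encode(filename);
    const base64 = btoa(String.fromCharCode(...utf8));
    return `=?UTF-8?B?${base64}?=`;
}

// encodeOriginalFilename("rëndér.blend") → "=?UTF-8?B?csOrbmTDqXIuYmxlbmQ=?="
```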
Replace the `queryJobs` operation (which takes a query object to do
filtering & pagination) with the new `fetchJobs` operation (which just
returns all jobs without filtering).
The reason for this is that the querying this operation supported was never
used by the front-end, and it is now getting in the way of other development
work, so it has to go.
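For illustration, fetching jobs from the front-end then looks roughly like this (the generated JavaScript client class and its import path are assumptions, not part of this change):

```
// Sketch only: the JobsApi class name follows the usual openapi-generator
// JavaScript output; the response shape is not spelled out here.
import { JobsApi } from "@/manager-api";

const jobsApi = new JobsApi(); // uses the default, pre-configured ApiClient

// Previously: jobsApi.queryJobs(query) with a filtering/pagination query object.
// Now: no query object, all jobs are returned in one go.
jobsApi.fetchJobs().then((jobs) => console.log(jobs));
```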
The setup assistant configuration can now contain `source: default` to
indicate that the default `blender` command should be used (which in
turn tells the Worker to find whatever Blender is available on the
system).
Choosing this option will make the setup assistant skip the path check,
and just trust that the Workers will find Blender.
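For illustration, the relevant part of the configuration could look like this; only `source: "default"` comes from this change, the surrounding property name is an assumption:

```
blenderExecutable: {
    source: "default",  // use the plain `blender` command; each Worker finds Blender itself
},
```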
Reviewed-on: https://projects.blender.org/studio/flamenco/pulls/104306
Reviewed-by: Sybren A. Stüvel <sybren@blender.org>
The OpenAPI changes in e561c8080e8f47b6fe792591033be4fab6faef42 did not
sit well with the generated Python code. The worker tag is now
communicated as just a UUID (so that it's the same for the `SubmittedJob`
and `Job` types).
Events were previously only sent via SocketIO, but now they can also be
sent via MQTT. The event types are therefore renamed from `SocketIO…` to `Event…`.
The SocketIO subscription mechanism still exists, and its types are still
prefixed with `SocketIO`. MQTT manages subscriptions on the broker rather than
on Flamenco Manager itself, so for now subscription management remains
SocketIO-only functionality.
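As an aside, consuming the new MQTT events could look roughly like this (using the mqtt.js client; the broker URL and topic filter are assumptions, not defined by this change):

```
// Sketch only: subscribe to Flamenco Manager events via an MQTT broker.
// The "flamenco/#" topic filter is an assumption; check the Manager's MQTT
// configuration for the actual topics.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.example.com:1883");

client.on("connect", () => client.subscribe("flamenco/#"));

client.on("message", (topic, payload) => {
    // The payloads carry the same data as the SocketIO events.
    console.log(topic, JSON.parse(payload.toString()));
});
```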
The data is still the same, but the names of the properties have changed
a bit so that they're more generic and declarative, instead of being specific
to one bit of functionality.
The goal is to make the `evalInfo.description` field usable for the
'evaluate now' button in the add-on as well, so that it's clearer what that
button does.
This commit just updates the OpenAPI definition.
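For reference, the renamed structure looks roughly like this; only `evalInfo.description` is named above, and `showLinkButton` is an assumed name for the renamed boolean:

```
evalInfo: {
    showLinkButton: true,              // assumed name of the renamed toggle
    description: "Scene frame range",
},
```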
Refactor the job settings. The `autoevalLockable` boolean is now
replaced with an `evalOnSubmit` nested object:
```
evalOnSubmit: {
    showButton: true,
    placeholder: "Scene frame range",
},
```
This makes it possible to add placeholder text, and later maybe some
other parameters. In practice the `showButton: true` part always has to be
there: `showButton: false` disables the entire feature, in which case it's
better to just remove the `evalOnSubmit` sub-object altogether. Still, I think
it's preferable to keep that explicit `showButton: true` in there, as it makes
it clearer what this section of the settings is for.
This commit just contains the OpenAPI definition.
Add a new job setting option `autoevalLockable`. Setting this to `true` in
the job compiler's `JOB_TYPE` settings has the following effect:
- By default, the setting will not be editable in Blender's job submission
interface. Instead, a toggle button with a 'car' icon will be shown.
- When the 'car' button is toggled off, the setting becomes editable again.
In its default, uneditable state, the setting will be auto-evaluated before
submission.
This makes it possible to 'lock in' auto-evaluation. The main use case is
for the frame range of the render job. By default this will be locked to
the scene frame range, but it can still be overridden if a different
range is wanted.
This commit just contains the necessary OpenAPI change.
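To illustrate the option described above, a job setting could look like this in a job compiler script (a sketch loosely based on the built-in simple-blender-render job type; only `autoevalLockable` is new here):

```
// Excerpt from a JOB_TYPE settings list; `key`, `type`, `required` and `eval`
// are pre-existing setting properties, `autoevalLockable` is the new one.
{
    key: "frames",
    type: "string",
    required: true,
    eval: "f'{C.scene.frame_start}-{C.scene.frame_end}'",
    autoevalLockable: true,  // locked to the auto-evaluated value until toggled off
},
```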
Make it explicit that the `version` property is for human consumption.
Also add a new `git` property so that all info from `version` is also
included in separate fields for machine consumption.
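For example, the version info could then look like this (values are purely illustrative):

```
{
    "version": "3.3-alpha0 (v3.2-76-gabc1234)",  // human-readable, for display only
    "git": "v3.2-76-gabc1234",                   // machine-readable Git description
}
```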
Remove the following statuses from `flamenco-openapi.yaml`:
- 'construction-failed'
- 'archiving'
- 'archived'
These were left over from Flamenco v2 and have never been used in
Flamenco v3.
Reviewed-on: https://projects.blender.org/studio/flamenco/pulls/104215
Clusters can now be created without a UUID; in that case, a random one will
be generated. The cluster is returned by the creation call, so that the
caller can know the generated UUID.
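A creation call might then look roughly like this (the client object and its method/field names are assumptions based on the usual generated API client):

```
// Sketch: create a cluster without an ID; the Manager generates a random UUID
// and returns the full cluster, including that UUID.
const cluster = await workerMgtApi.createWorkerCluster({
    name: "GPU nodes",
    description: "Workers with a CUDA-capable GPU",
});
console.log("new cluster UUID:", cluster.id);
```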
Worker Clusters can be managed via the API, and workers can be assigned to
any number of clusters (if not assigned to any, they'll pick up any task).
Jobs can be submitted with a cluster ID, in which case only workers that
are in that cluster or are clusterless will pick up its tasks.
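For illustration, a job submission could then reference a cluster roughly like this (the exact property name on the submitted job is an assumption):

```
// Sketch of a job submission payload that pins the job to a cluster.
// The `worker_cluster` property name is assumed; it holds the cluster's UUID.
const submittedJob = {
    name: "Render shot 010",
    type: "simple-blender-render",
    priority: 50,
    worker_cluster: "18f5b1b5-36b6-4e28-8e32-2b61e6e2a5d1",  // example UUID
    settings: { /* job-type specific settings */ },
};
```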