This also changes the order in which the task is updated; the activity is
now saved first, so that it can be included in the task status change
notification sent to SocketIO clients.
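A rough sketch of that ordering, with illustrative types and method names
rather than the actual Flamenco code:

```go
package example

// Illustrative types only; the real Flamenco persistence and SocketIO
// layers have different shapes.
type Task struct {
	UUID     string
	Status   string
	Activity string
}

type TaskStore interface {
	SaveTaskActivity(t *Task) error
	SaveTaskStatus(t *Task) error
}

type Broadcaster interface {
	BroadcastTaskUpdate(t *Task)
}

// onTaskStatusChange shows the new order of operations: the activity is
// saved first, so the SocketIO broadcast at the end can include it.
func onTaskStatusChange(db TaskStore, socketIO Broadcaster, task *Task, newStatus string) error {
	task.Activity = "Changed to status " + newStatus
	if err := db.SaveTaskActivity(task); err != nil {
		return err
	}
	task.Status = newStatus
	if err := db.SaveTaskStatus(task); err != nil {
		return err
	}
	socketIO.BroadcastTaskUpdate(task)
	return nil
}
```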
Worker and Manager implementation of the "may-I-keep-running" protocol.
While running a task, the Worker will periodically ask the Manager whether
it is still allowed to keep running it (see the sketch after the list
below). This allows the Manager to abort commands on Workers when:
- the Worker should go to another state (typically 'asleep' or
'shutdown'),
- the task changed status from 'active' to something non-runnable
(typically 'canceled' when the job as a whole is canceled),
- the task has been assigned to a different Worker. This can happen when
a Worker loses its connection to its Manager, resulting in a task
timeout (not yet implemented) after which the task can be assigned to
another Worker. If connectivity is then restored, the first Worker
should abort (last-assigned Worker wins).
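A minimal sketch of the Worker-side supervision loop, assuming a
hypothetical `MayIKeepRunning` client call and a configurable check
interval; these names are illustrative, not the actual Flamenco API:

```go
package worker

import (
	"context"
	"time"
)

// MayKeepRunning is the Manager's answer to the Worker's question.
type MayKeepRunning struct {
	MayKeepRunning bool
	Reason         string // e.g. "worker status change requested" or "task reassigned"
}

// ManagerClient is whatever client the Worker uses to talk to its Manager.
type ManagerClient interface {
	MayIKeepRunning(ctx context.Context, taskID string) (MayKeepRunning, error)
}

// superviseTask polls the Manager while a task runs. The Worker runs the
// task's commands under the returned context; when the Manager answers
// "no", the context is cancelled and the commands should be aborted.
func superviseTask(ctx context.Context, client ManagerClient, taskID string, interval time.Duration) context.Context {
	taskCtx, cancel := context.WithCancel(ctx)
	go func() {
		defer cancel()
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-taskCtx.Done():
				return
			case <-ticker.C:
				answer, err := client.MayIKeepRunning(taskCtx, taskID)
				if err != nil {
					continue // Network hiccup; keep running and retry later.
				}
				if !answer.MayKeepRunning {
					return // Manager said no: abort the task's commands.
				}
			}
		}
	}()
	return taskCtx
}
```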
Manager now sends out task updates via SocketIO, and the web interface
handles those.
Note that there is a `BroadcastTaskUpdate()` function, but no
`BroadcastNewTask()`. The 'new job' broadcast is sent after the job's
tasks have been created, and thus there is no need for a separate
broadcast per task.
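A sketch of that broadcast design, with illustrative type and method names
rather than the actual Manager code:

```go
package webupdates

type Task struct {
	UUID   string
	Name   string
	Status string
}

type Job struct {
	UUID  string
	Name  string
	Tasks []Task
}

// JobUpdate is a trimmed-down view of a job, suitable for the web frontend.
type JobUpdate struct {
	ID     string `json:"id"`
	Name   string `json:"name"`
	Status string `json:"status"`
}

// Broadcaster pushes updates to connected SocketIO clients.
type Broadcaster interface {
	BroadcastNewJob(update JobUpdate)
}

type JobStore interface {
	StoreJobWithTasks(job *Job) error
}

// submitJob persists the job and its tasks first; only then is the single
// 'new job' broadcast sent, so no per-task broadcast is needed.
func submitJob(db JobStore, socketIO Broadcaster, job *Job) error {
	if err := db.StoreJobWithTasks(job); err != nil {
		return err
	}
	socketIO.BroadcastNewJob(JobUpdate{ID: job.UUID, Name: job.Name, Status: "queued"})
	return nil
}
```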
Add `fetchJobTasks` operation to the Jobs API. This returns a summary of
each of the job's tasks, suitable for display in a task list view.
The exact fields may need tweaking once there actually is a task list
view, but at least the functionality is there.
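A hypothetical shape for the per-task summary, purely to illustrate the
kind of fields involved; none of these names come from the actual API
definition:

```go
package api

// TaskSummary is a trimmed-down view of one task, suitable for a task list.
type TaskSummary struct {
	ID       string `json:"id"`
	Name     string `json:"name"`
	Status   string `json:"status"`
	TaskType string `json:"task_type"`
	Priority int    `json:"priority"`
	Updated  string `json:"updated"` // timestamp of the last task update
}

// JobTasksSummary is the response of the fetchJobTasks operation.
type JobTasksSummary struct {
	Tasks []TaskSummary `json:"tasks"`
}
```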
The password check of Worker API calls was two orders of magnitude slower
than actually handling the API call itself. Since Worker authentication
is not that important (it's all on the same network anyway, and Worker
account registration is automatic too), lowering the BCrypt cost to the
minimum helps.
On my machine, this reduces the time for a password check from 50 ms to 2 ms.
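The change amounts to hashing and checking the Worker secret with the
minimum BCrypt cost. The snippet below uses the real
`golang.org/x/crypto/bcrypt` package; the helper function names are
illustrative:

```go
package worker

import "golang.org/x/crypto/bcrypt"

// hashWorkerSecret hashes a Worker's auto-generated secret. MinCost is fine
// here: Worker registration is automatic and happens on a trusted network.
func hashWorkerSecret(secret string) ([]byte, error) {
	return bcrypt.GenerateFromPassword([]byte(secret), bcrypt.MinCost)
}

// checkWorkerSecret verifies a secret against the stored hash.
func checkWorkerSecret(hashed []byte, secret string) bool {
	return bcrypt.CompareHashAndPassword(hashed, []byte(secret)) == nil
}
```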
To prepare for job status changes being requestable from the API, store
the reason for any status change on the job itself.
Not yet part of the API, just in the persistence layer.
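A hedged sketch of what the persistence-layer change could look like,
assuming a GORM-style model; the field, column, and function names here
are illustrative:

```go
package persistence

import "gorm.io/gorm"

// Job is a stripped-down illustration of the job model.
type Job struct {
	gorm.Model
	UUID   string `gorm:"type:char(36);uniqueIndex"`
	Status string `gorm:"type:varchar(32)"`
	// Activity records why the job is in its current status,
	// e.g. "Cancelled via the API".
	Activity string `gorm:"type:varchar(255)"`
}

// SaveJobStatus stores the new status together with the reason for the change.
func SaveJobStatus(db *gorm.DB, job *Job, newStatus, reason string) error {
	job.Status = newStatus
	job.Activity = reason
	return db.Model(job).Updates(map[string]interface{}{
		"status":   job.Status,
		"activity": job.Activity,
	}).Error
}
```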
SQLite can return `SQLITE_BUSY` errors when it's doing too many things at
the same time. This is now improved a bit by setting a 5-second timeout,
during which the SQLite driver will wait for the database to become
available. If that doesn't happen, Flamenco Manager will return a
`503 Service Unavailable` response so that the client knows to back off
a little.
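A simplified sketch of both sides of this behaviour: the `PRAGMA
busy_timeout` is standard SQLite, while the error-to-status mapping shown
here is an illustration rather than the actual Manager code:

```go
package persistence

import (
	"database/sql"
	"net/http"
	"strings"
)

// setBusyTimeout makes the SQLite driver wait up to 5 seconds for the
// database to become available instead of failing immediately with
// SQLITE_BUSY.
func setBusyTimeout(db *sql.DB) error {
	_, err := db.Exec("PRAGMA busy_timeout = 5000")
	return err
}

// httpStatusForError maps "database is locked / busy" errors to
// 503 Service Unavailable so that clients know to back off a little.
func httpStatusForError(err error) int {
	if err == nil {
		return http.StatusOK
	}
	msg := strings.ToLower(err.Error())
	if strings.Contains(msg, "database is locked") || strings.Contains(msg, "busy") {
		return http.StatusServiceUnavailable
	}
	return http.StatusInternalServerError
}
```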
Flamenco Manager now has a "storage path" config option, which will be
used by Shaman if enabled. The `{jobs}` implicit variable now always
exists; its value depends on whether Shaman is enabled or not.
This allows the Blender add-on to submit jobs at a path like
`{jobs}/path/file.blend`. Due to the nature of the system, the add-on
doesn't know (and shouldn't know) where exactly the Manager has its
Shaman storage.
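A rough sketch of how the implicit `{jobs}` variable could be resolved
from the config; the field names and the Shaman subdirectory are
illustrative, not the actual Flamenco Manager configuration:

```go
package config

import "path/filepath"

// Conf is a minimal stand-in for the Manager configuration.
type Conf struct {
	StoragePath   string // the new "storage path" config option
	ShamanEnabled bool
}

// jobsVariable returns the value of the implicit {jobs} variable. With
// Shaman enabled it points into the Shaman-managed area of the storage
// path; without it, it is simply the shared storage path itself.
func jobsVariable(c Conf) string {
	if c.ShamanEnabled {
		return filepath.Join(c.StoragePath, "jobs")
	}
	return c.StoragePath
}
```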
- The add-on switches between filesystem-packing and Shaman-packing
automatically, depending on whether the Manager has Shaman enabled.
- BAT is now actually used for Shaman packing.
It doesn't work yet, though: an error occurs when the add-on receives
the Shaman response from the Manager.
This introduces some more conceptual changes to Shaman. The most important
one is that there is no longer a "checkout ID", but a "checkout path".
The Shaman client can request any subpath of the checkout directory,
so that it can handle things like project- or scene-specific prefixes.
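A sketch of what a checkout request could look like under this model; the
field names illustrate the "checkout path instead of checkout ID" idea and
are not the exact Shaman API:

```go
package shaman

// CheckoutRequest asks Shaman to create a checkout at a client-chosen
// subpath of the checkout directory, e.g. "myproject/shots/010/lighting".
type CheckoutRequest struct {
	// CheckoutPath replaces the old opaque checkout ID; it may contain
	// project- or scene-specific prefixes chosen by the client.
	CheckoutPath string            `json:"checkoutPath"`
	Files        []FileRequirement `json:"files"`
}

// FileRequirement identifies one file by content, so that Shaman can reuse
// already-uploaded blobs.
type FileRequirement struct {
	SHA      string `json:"sha"`
	Size     int64  `json:"size"`
	FilePath string `json:"filePath"`
}
```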
The add-on code was copy-pasted from other add-ons and used the GPL v2
license, whereas by accident the LICENSE text file contained the GNU
Affero GPL v3 license (instead of the regular GPL v3).
This is now all streamlined, and all code is licensed as "GPL v3 or later".
Furthermore, the code comments just show an SPDX License Identifier
instead of an entire license block.
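For reference, such a per-file tag looks like this; `GPL-3.0-or-later` is
the SPDX identifier for "GPL v3 or later" (the package name below is just a
placeholder):

```go
// SPDX-License-Identifier: GPL-3.0-or-later

package flamenco
```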