This adds a `-wizard` CLI option to the Manager, which opens a web browser
and shows the First-Time Wizard to aid in the configuration of Flamenco.
This is work in progress. The wizard is just one page, and doesn't yet save
anything to the configuration.
Manager always creates an implicit variable `{jobs}`. This used to be
Shaman-dependent, but now it's always there (has been for a while). This
is now reflected in an add-on comment, and in an extra unit test.
The task logs storage system is refactored to use the `local_storage`
package. Configuration options have also changed:
- `task_logs_path` is renamed to `local_manager_storage_path`, to
emphasise that only the Manager deals with those files, with default
value `./flamenco-manager-storage`.
- `storage_path` is renamed to `shared_storage_path`, to emphasise this
is the storage shared between Manager and Workers, with default value
`./flamenco-shared-storage`.
Task logs are still stored in
`${local_manager_storage_path}/job-{jobUUID[0:4]}/{jobUUID}/task-{taskUUID}.txt`
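For illustration, a minimal Go sketch of how such a path could be assembled
(the helper name `taskLogPath` is hypothetical, not Flamenco's actual API):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// taskLogPath mirrors the storage layout described above.
func taskLogPath(localManagerStoragePath, jobUUID, taskUUID string) string {
	return filepath.Join(
		localManagerStoragePath,
		"job-"+jobUUID[:4], // first four characters of the job UUID
		jobUUID,
		fmt.Sprintf("task-%s.txt", taskUUID),
	)
}

func main() {
	fmt.Println(taskLogPath(
		"./flamenco-manager-storage",
		"11111111-2222-3333-4444-555555555555",
		"66666666-7777-8888-9999-000000000000",
	))
}
```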
Manifest task: T99409
Various config sections were commented out because they were brought over
from Flamenco 2 but weren't implemented yet. These have now been removed,
as the basic functionality is there, and new functionality will likely be
different from Flamenco 2 anyway.
In the `simple-blender-render` job type settings, hide the `chunk_size`
setting from the web frontend, and show the `blendfile` setting instead.
The actual blend file being rendered is important to know, whereas the
chunk size can be inferred from the task names anyway.
Add a "Last Rendered" view to the webapp.
The Manager now stores (in the database) which job was the last
recipient of a rendered image, and serves that to the appropriate
OpenAPI endpoint.
A new SocketIO subscription + accompanying room makes it possible for
the web interface to receive all rendered images (if they survive the
queue, which discards images when it gets too full).
After processing an image in the "last-rendered" processor, a SocketIO
object is sent to clients to indicate the last-rendered image needs to
be (re)loaded.
This also moves the previously existing "done callback" from a single
function to a per-image callback, so that it can be called with the
correct per-image information, and only when that particular image is
actually done processing.
The notification message sent via SocketIO also contains the necessary
info to render the image, so that the web client doesn't have to call
the `fetchJobLastRenderedInfo` operation.
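Roughly, the callback change has this shape (types and names here are
illustrative, not the actual Flamenco code):

```go
package example

// queuedImage pairs an image payload with its own completion callback,
// replacing the previous single, shared "done callback".
type queuedImage struct {
	jobUUID string
	image   []byte

	// onDone runs only when this particular image has actually been
	// processed, so it can carry the correct per-image information.
	onDone func(jobUUID string)
}

func process(img queuedImage) {
	// ... downscale the image and store the thumbnails ...
	if img.onDone != nil {
		img.onDone(img.jobUUID) // e.g. notify web clients via SocketIO
	}
}
```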
`os.IsNotExist()` is from before `errors.Is()` existed. The latter is the
recommended approach, as it also recognises wrapped errors.
No functional changes, except for recognising more cases of "does not
exist" errors as such.
On Windows it's not allowed to erase a file while it's still open, which
caused this error to surface. The test now properly closes the file before
erasing it.
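The pattern of the fix, as a self-contained sketch (not the actual test):

```go
package example

import (
	"os"
	"testing"
)

func TestEraseTestFile(t *testing.T) {
	f, err := os.CreateTemp(t.TempDir(), "flamenco-test-*.txt")
	if err != nil {
		t.Fatal(err)
	}
	// Windows refuses to delete a file that still has an open handle,
	// so close it before removing it:
	if err := f.Close(); err != nil {
		t.Fatal(err)
	}
	if err := os.Remove(f.Name()); err != nil {
		t.Fatal(err)
	}
}
```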
Add a handler for the OpenAPI `taskOutputProduced` operation, and an
image thumbnailing goroutine.
The queue of images to process and the function to handle queued images
are managed by `last_rendered.LastRenderedProcessor`. This queue currently
simply allows 3 requests; it should be improved to also keep track of job
IDs, as with the current approach a spammy job can starve the updates
from a calmer job.
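A minimal sketch of such a bounded queue using a buffered channel (the
real `LastRenderedProcessor` is more involved, and these names are
illustrative):

```go
package example

import "errors"

// errQueueFull signals that an image was discarded because the queue
// was at capacity.
var errQueueFull = errors.New("last-rendered queue is full")

type payload struct {
	jobUUID string
	image   []byte
}

// queue holds at most 3 pending images.
var queue = make(chan payload, 3)

// queueImage never blocks; when the queue is full, the image is dropped.
func queueImage(p payload) error {
	select {
	case queue <- p:
		return nil
	default:
		return errQueueFull
	}
}

// run is the thumbnailing goroutine that drains the queue.
func run() {
	for p := range queue {
		_ = p // ... generate and store thumbnails for p.jobUUID ...
	}
}
```

Tracking job UUIDs inside the queue would make it possible to cap the
number of pending images per job, instead of one global limit.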
Add a `local_storage` package that finds a suitable place to put files.
Currently it just looks at the location of the currently running
executable; it can later do other things. It can be queried for a
directory in which to put job-specific files.
It is intended to be used by the under-development "last rendered output"
processing system, to store an image file per job. Later we should also
refactor the task log handling system to use this.
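A hedged sketch of what such a lookup can look like (API names here are
illustrative and may differ from the actual package):

```go
package local_storage

import (
	"os"
	"path/filepath"
)

// StorageRoot returns a storage directory next to the currently
// running executable. Later this could consider other locations too.
func StorageRoot(dirname string) (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	return filepath.Join(filepath.Dir(exe), dirname), nil
}

// ForJob returns the directory for files specific to one job.
func ForJob(storageRoot, jobUUID string) string {
	return filepath.Join(storageRoot, "job-"+jobUUID[:4], jobUUID)
}
```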
If there is a `scripts` directory next to the current executable, load
scripts from that directory as well.
It is still required to restart the Manager in order to pick up changes
to those scripts (including new/removed files), and to do a refresh in
the add-on.
When on-disk job compiler scripts are supported, they will be reloaded
often, and it becomes more important to make access to the map of loaded
job compilers thread-safe.
The global `scriptFS` variable was too easy to access directly, which
caused an issue where the mandatory `"scripts"` subdirectory was not
passed. Accessing it via a getter function that hides this requirement
prevents this.
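The getter looks roughly like this (a sketch; the real code may differ
in its details):

```go
package job_compilers

import (
	"embed"
	"io/fs"
)

//go:embed scripts
var scriptFS embed.FS

// scriptFilesystem hides scriptFS behind a getter that always applies
// the mandatory "scripts" subdirectory, so callers cannot forget it.
func scriptFilesystem() fs.FS {
	subFS, err := fs.Sub(scriptFS, "scripts")
	if err != nil {
		// The directory is embedded at compile time, so this cannot
		// fail at runtime.
		panic(err)
	}
	return subFS
}
```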
Take some functions out of the `Service` struct, as they are more or less
standalone anyway. This will also make it easier later to make things
thread-safe, as that'll become important when files can get live-reloaded.
The `Load()` function returns a `*Service`, and it was confusing that the
local variable was named `compiler` instead. Now it's called `service`.
No functional changes.
Refactor the JS script file loading code so that it's tied to the `fs.FS`
interface for longer, and less to the specifics of our `embed.FS` instance.
This should make it possible to use other filesystems, like a real on-disk
one, to load scripts.
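Programming against `fs.FS` means the embedded scripts and an on-disk
directory can be walked by the same code (sketch with illustrative
names):

```go
package example

import (
	"io/fs"
	"os"
	"path"
	"path/filepath"
)

// loadScripts walks any fs.FS, so it works for an embed.FS and for
// os.DirFS alike.
func loadScripts(filesystem fs.FS, load func(path string) error) error {
	return fs.WalkDir(filesystem, ".", func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() || path.Ext(p) != ".js" {
			return nil
		}
		return load(p)
	})
}

// loadOnDiskScripts loads from a "scripts" directory next to the
// executable, using the same walker.
func loadOnDiskScripts(exeDir string, load func(path string) error) error {
	return loadScripts(os.DirFS(filepath.Join(exeDir, "scripts")), load)
}
```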
Move the code related to task updates from workers to
`worker_task_updates.go`. It's going to get more complex once
blocklisting is added; this prepares for that.
No functional changes.
When a job or task gets requeued from the web interface, its task
failure lists (i.e. the list of workers that previously failed this
task) will be cleared.
This clearing doesn't happen in other situations, e.g. when a worker
signs off and its task gets requeued, the task's failure list will
remain as-is.
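Conceptually, the web-interface requeue path gains a step like this (a
hypothetical sketch; the model and function names are made up, and the
actual persistence code differs):

```go
package example

import (
	"context"

	"gorm.io/gorm"
)

// Task and TaskFailure are simplified stand-ins for the real models.
type Task struct {
	ID    uint
	JobID uint
}

type TaskFailure struct {
	TaskID   uint
	WorkerID uint
}

// clearFailureListsOfJob wipes the per-task failure lists of a job. It
// is called only for requeues triggered from the web interface, not
// e.g. for requeues caused by a worker signing off.
func clearFailureListsOfJob(ctx context.Context, db *gorm.DB, jobID uint) error {
	tasksOfJob := db.Model(&Task{}).Select("id").Where("job_id = ?", jobID)
	return db.WithContext(ctx).
		Where("task_id IN (?)", tasksOfJob).
		Delete(&TaskFailure{}).Error
}
```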
Rename functions `onTaskStatusX` to `updateJobOnTaskStatusX` to clarify
their responsibility is to update the job in reaction to a task status
change.
No functional changes.