Workers now send output produced by Blender (currently limited to PNG and JPEG
images) to the Manager. This is done by converting the image to JPEG first,
then sending the bytes to the Manager via the Flamenco API.
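A minimal sketch of that conversion step (the function name and the JPEG quality setting are illustrative, not the actual Worker code):

```go
// Package and helper names here are illustrative, not the actual Worker code.
package worker

import (
	"bytes"
	"image"
	"image/jpeg"
	_ "image/png" // register the PNG decoder for image.Decode
	"os"
)

// convertToJPEG reads a rendered PNG or JPEG file and returns JPEG bytes
// suitable for uploading to the Manager. Quality 85 is an assumption,
// not the Worker's actual setting.
func convertToJPEG(path string) ([]byte, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	img, _, err := image.Decode(file) // handles PNG and JPEG alike
	if err != nil {
		return nil, err
	}

	var buf bytes.Buffer
	if err := jpeg.Encode(&buf, img, &jpeg.Options{Quality: 85}); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```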
Add an abstract queue that outputs only the last-queued item.
Items that were queued before the last one will be dropped.
This will be used for handling last-rendered images: while one image is
uploading, any images rendered in the meantime should be ignored, except the
most recent one.
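One possible shape for such a queue is a one-slot buffered channel where pushing evicts any still-waiting item; this is a sketch of the idea, not the actual implementation:

```go
package lastqueue

// LastQueue keeps at most one pending item; pushing a new item drops
// any item that has not been popped yet.
type LastQueue[T any] struct {
	ch chan T
}

func New[T any]() *LastQueue[T] {
	return &LastQueue[T]{ch: make(chan T, 1)}
}

// Push queues an item without blocking, evicting an older pending item
// if there is one.
func (q *LastQueue[T]) Push(item T) {
	for {
		select {
		case q.ch <- item:
			return
		default:
			// The slot is occupied: drop the older item and retry.
			select {
			case <-q.ch:
			default:
			}
		}
	}
}

// Pop blocks until an item is available and returns it.
func (q *LastQueue[T]) Pop() T {
	return <-q.ch
}
```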
Prepare the Worker for submitting last-rendered images to the Manager, by
parsing Blender's `stdout` to see which files were saved. This needs more
work; for now it just logs a "not implemented" error.
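As an illustration, the parsing could match Blender's `Saved: '...'` stdout lines with a regular expression; treating that as the exact log format is an assumption here:

```go
package main

import (
	"fmt"
	"regexp"
)

// savedFileRegexp assumes Blender reports saved files on stdout as
// lines like: Saved: '/render/frames/0001.png'
var savedFileRegexp = regexp.MustCompile(`^Saved: '(.+)'$`)

// parseSavedFile returns the path of a saved file if the given stdout
// line reports one.
func parseSavedFile(line string) (path string, ok bool) {
	match := savedFileRegexp.FindStringSubmatch(line)
	if match == nil {
		return "", false
	}
	return match[1], true
}

func main() {
	path, ok := parseSavedFile("Saved: '/render/frames/0001.png'")
	fmt.Println(path, ok) // /render/frames/0001.png true
}
```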
The test can hang occasionally, and needs some love & attention. For now
I've done some patching to make it slightly better, but still disabled it
and added a `FIXME` note to it.
On Windows it's not allowed to erase a file while it's still open, which
caused this error to surface. The file is now properly closed before the
test erases it.
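The pattern, sketched as a Go test with illustrative names:

```go
package worker_test

import (
	"os"
	"testing"
)

func TestLastRenderedFileCleanup(t *testing.T) {
	file, err := os.CreateTemp(t.TempDir(), "last-rendered-*.jpg")
	if err != nil {
		t.Fatal(err)
	}
	// ... exercise the code under test with the file ...

	// Close before removing: Windows refuses to erase open files.
	if err := file.Close(); err != nil {
		t.Fatal(err)
	}
	if err := os.Remove(file.Name()); err != nil {
		t.Fatal(err)
	}
}
```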
Add a handler for the OpenAPI `taskOutputProduced` operation, and an
image thumbnailing goroutine.
The queue of images to process, and the function that handles queued images,
are managed by `last_rendered.LastRenderedProcessor`. The queue currently
simply allows 3 requests; this should be improved to also keep track of
job IDs, as with the current approach a spammy job can starve the updates
of a calmer job.
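One way to sketch the "allows 3 requests" behaviour is a bounded queue built on a buffered channel; the names here are illustrative of the idea, not the actual `LastRenderedProcessor` API:

```go
package main

import "fmt"

type payload struct {
	jobUUID string
	image   []byte
}

// processor is an illustrative stand-in for LastRenderedProcessor.
type processor struct {
	queue chan payload // capacity 3: at most 3 queued requests
}

func newProcessor() *processor {
	return &processor{queue: make(chan payload, 3)}
}

// QueueImage reports whether the image was accepted; when the queue is
// full, the request is simply rejected.
func (p *processor) QueueImage(item payload) bool {
	select {
	case p.queue <- item:
		return true
	default:
		return false
	}
}

func main() {
	p := newProcessor()
	for i := 0; i < 5; i++ {
		fmt.Println(p.QueueImage(payload{jobUUID: "job-1"}))
	}
	// Prints: true, true, true, false, false (one per line).
}
```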
The OpenAPI library we use for request validation needs to know, per MIME
type, how to handle the request body. The same function used for
`application/octet-stream` is now registered for `image/png` and `image/jpeg`
as well.
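Assuming the validator is kin-openapi, whose `openapi3filter` package handles `application/octet-stream` with `FileBodyDecoder`, the registration could look like this (a different library would need different calls):

```go
package api

import "github.com/getkin/kin-openapi/openapi3filter"

// registerImageBodyDecoders reuses FileBodyDecoder, which kin-openapi
// already uses for application/octet-stream, so image uploads are
// treated as opaque byte streams.
func registerImageBodyDecoders() {
	openapi3filter.RegisterBodyDecoder("image/png", openapi3filter.FileBodyDecoder)
	openapi3filter.RegisterBodyDecoder("image/jpeg", openapi3filter.FileBodyDecoder)
}
```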
Add a `local_storage` package that finds a suitable place to put files.
Currently it just looks at the location of the currently running
executable; later it can do other things. It can be queried for a directory
in which to put job-specific files.
It is intended to be used by the under-development "last rendered output"
processing system, to store an image file per job. Later we should also
refactor the task log handling system to use this.
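As an illustration of the idea (type and function names here are hypothetical), the package could root its storage next to the executable and hand out per-job directories like this:

```go
package local_storage

import (
	"os"
	"path/filepath"
)

// StorageInfo and its methods are illustrative names for the idea.
type StorageInfo struct {
	rootPath string
}

// NewNextToExe returns storage rooted in a subdirectory next to the
// currently running executable.
func NewNextToExe(subdir string) (StorageInfo, error) {
	exe, err := os.Executable()
	if err != nil {
		return StorageInfo{}, err
	}
	return StorageInfo{rootPath: filepath.Join(filepath.Dir(exe), subdir)}, nil
}

// ForJob returns the directory in which to store files for the given
// job, creating it if necessary.
func (s StorageInfo) ForJob(jobUUID string) (string, error) {
	dir := filepath.Join(s.rootPath, "job-"+jobUUID)
	return dir, os.MkdirAll(dir, 0o755)
}
```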
Add a function that determines whether a given path is considered a
"root path". It considers `/`, `X:`, `X:\`, and `X:/` to be root paths,
where `X` is any upper- or lower-case Latin letter.
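A straightforward sketch of that check, with a regular expression covering the drive-letter forms:

```go
package main

import (
	"fmt"
	"regexp"
)

// driveLetterRoot matches "X:", "X:\" and "X:/" for any Latin letter.
var driveLetterRoot = regexp.MustCompile(`^[A-Za-z]:[\\/]?$`)

func isRootPath(path string) bool {
	return path == "/" || driveLetterRoot.MatchString(path)
}

func main() {
	for _, p := range []string{"/", "C:", `c:\`, "D:/", "/home", `C:\temp`} {
		fmt.Printf("%-9q %v\n", p, isRootPath(p))
	}
	// "/", "C:", `c:\` and "D:/" are roots; "/home" and `C:\temp` are not.
}
```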
If there is a `scripts` directory next to the current executable, load
scripts from that directory as well.
Picking up changes to those scripts (including new/removed files) still
requires a restart of the Manager, plus a refresh in the add-on.
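For illustration (the helper name is hypothetical), discovering that directory and exposing it alongside the embedded scripts as `fs.FS` values could look like this:

```go
package jobcompilers

import (
	"io/fs"
	"os"
	"path/filepath"
)

// scriptFilesystems returns the filesystems to load scripts from: the
// embedded one, plus the on-disk "scripts" directory next to the
// executable when it exists.
func scriptFilesystems(embedded fs.FS) []fs.FS {
	filesystems := []fs.FS{embedded}

	exe, err := os.Executable()
	if err != nil {
		return filesystems
	}
	onDisk := filepath.Join(filepath.Dir(exe), "scripts")
	if stat, err := os.Stat(onDisk); err == nil && stat.IsDir() {
		filesystems = append(filesystems, os.DirFS(onDisk))
	}
	return filesystems
}
```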
When on-disk job compiler scripts are supported, they will be reloaded
often, and it becomes more important for access to the map of loaded job
compilers to be thread-safe.
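A minimal sketch of what that could look like, with a mutex-guarded map on the service (names are illustrative):

```go
package jobcompilers

import "sync"

type Compiler struct{} // placeholder for a loaded job compiler

type Service struct {
	mutex     sync.RWMutex
	compilers map[string]Compiler
}

// Compiler returns the job compiler with the given name, if loaded.
func (s *Service) Compiler(name string) (Compiler, bool) {
	s.mutex.RLock()
	defer s.mutex.RUnlock()
	compiler, ok := s.compilers[name]
	return compiler, ok
}

// replaceCompilers swaps in a freshly loaded set, e.g. after the
// on-disk scripts have changed.
func (s *Service) replaceCompilers(newCompilers map[string]Compiler) {
	s.mutex.Lock()
	defer s.mutex.Unlock()
	s.compilers = newCompilers
}
```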
The code was still using the old `visible: true/false` approach, which
was replaced with a visibility string. The GUI and job submission code now
use the same function to determine visibility.
The global `scriptFS` variable was too easy to access directly, which caused
an issue where the mandatory `"scripts"` subdirectory was not passed.
Accessing it via a getter function that hides this requirement prevents
such mistakes.
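Sketched with illustrative names, the idea is that the embedded filesystem stays package-private and the getter applies the mandatory sub-rooting:

```go
package jobcompilers

import (
	"embed"
	"io/fs"
)

//go:embed scripts
var embeddedScripts embed.FS // kept package-private

// scriptFS is the only access path to the scripts; it always applies
// the mandatory "scripts" sub-rooting so callers cannot forget it.
func scriptFS() (fs.FS, error) {
	return fs.Sub(embeddedScripts, "scripts")
}
```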
Take some functions out of the `Service` struct, as they are more or less
standalone anyway. This will also make it easier later to make things
thread-safe, as that'll become important when files can get live-reloaded.
The `Load()` function returns a `*Service`, and it was confusing that the
local variable is named `compiler` instead. Now it's called `service`.
No functional changes.
Refactor the JS script file loading code so that it's tied to the `fs.FS`
interface for longer, and less to the specifics of our `embed.FS` instance.
This should make it possible to use other filesystems, like a real on-disk
one, to load scripts.
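For example, a loader written against `fs.FS` works unchanged whether it is given an `embed.FS` or `os.DirFS("/some/dir")` (a sketch with illustrative names):

```go
package jobcompilers

import (
	"io/fs"
	"path"
	"strings"
)

// loadScripts reads all .js files from the given filesystem, whether
// that is an embed.FS or an on-disk directory via os.DirFS.
func loadScripts(filesystem fs.FS) (map[string]string, error) {
	scripts := map[string]string{}
	err := fs.WalkDir(filesystem, ".", func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(p, ".js") {
			return err
		}
		contents, readErr := fs.ReadFile(filesystem, p)
		if readErr != nil {
			return readErr
		}
		scripts[path.Base(p)] = string(contents)
		return nil
	})
	return scripts, err
}
```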
Change "accepted CORS origins" to "acceptable CORS origins", as the former
is too ambiguous (it can mean "I just accepted these" or "These are the
acceptable ones").
When data is updated, resize the columns in the job/task/worker tables.
Status change requests of Workers, for example, require more space, such as
when going from `awake` to `awake → offline`.
Allow overriding the worker name by setting the `FLAMENCO_WORKER_NAME`
environment variable. This makes it easy to do from Docker configs, and,
more importantly, from the scripts I use to run multiple workers on the
same machine while developing Flamenco.
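A sketch of the override logic; the hostname fallback is an assumption about the default, not confirmed from the actual code:

```go
package main

import (
	"fmt"
	"os"
)

// workerName prefers the FLAMENCO_WORKER_NAME environment variable,
// falling back to the machine's hostname (an assumed default).
func workerName() (string, error) {
	if name := os.Getenv("FLAMENCO_WORKER_NAME"); name != "" {
		return name, nil
	}
	return os.Hostname()
}

func main() {
	name, err := workerName()
	if err != nil {
		fmt.Fprintln(os.Stderr, "unable to determine worker name:", err)
		os.Exit(1)
	}
	fmt.Println("worker name:", name)
}
```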