This makes the queue more type-safe and lets us get rid of the
`pendingChannel` and `pendingChannels` helpers, which only existed to
create not-yet-existing pending channels.
The enum is redundant information that can be deduced from the job's
times: queuedAt, startedAt, and finishedAt. Not having it reduces the
potential for inconsistent state.
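For illustration, a status could be computed on demand roughly like
this (the function and the returned strings are made up, not the actual
code):

    package worker

    import "time"

    // statusFromTimes deduces a job's state purely from its timestamps:
    // no startedAt means it is still queued, startedAt without
    // finishedAt means it is running, and finishedAt means it is done.
    // A zero time.Time stands for "not set".
    func statusFromTimes(queuedAt, startedAt, finishedAt time.Time) string {
        switch {
        case !finishedAt.IsZero():
            return "finished"
        case !startedAt.IsZero():
            return "running"
        default:
            return "queued"
        }
    }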
Now that the "old" `jobqueue` package was renamed to `worker`, add a new
package that contains an interface to an actual job queue. Also add two
implementations: fsjobqueue, a job queue backed by the file system, and
testjobqueue, which can be used as a mock implementation for testing.
These packages are not yet used.
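To sketch the idea (the method set and signatures below are
assumptions, not the real interface), both implementations would
satisfy a common interface along these lines:

    package jobqueue

    import "github.com/google/uuid"

    // JobQueue is an illustrative interface that fsjobqueue and
    // testjobqueue could both implement.
    type JobQueue interface {
        // Enqueue adds a job with a JSON-serializable argument and
        // returns its id.
        Enqueue(jobType string, args interface{}) (uuid.UUID, error)

        // Dequeue blocks until a job of one of the given types is
        // available and marks it as started.
        Dequeue(jobTypes []string) (id uuid.UUID, jobType string, args interface{}, err error)

        // FinishJob marks a job as finished and records its result.
        FinishJob(id uuid.UUID, result interface{}) error
    }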
This package does not contain an actual queue, but a server and client
implementation for osbuild's worker API. Name it accordingly.
The queue is in package `store` right now, but about to be split off.
This rename makes the `jobqueue` name free for that effort.
The API always advertises a content type of application/json, but does
not send JSON on errors.
Change it to send simple JSON objects like this:
{ "message": "something went wrong" }
This can be extended to include more structured information in the
future.
Also return a (for now) empty JSON object from `addJobHandler()`. It
returned nothing before, which is invalid JSON.
Stop testing for the actual error strings in `api_test.go`. These are
meant for humans and can change. Only check what a client could
meaningfully check for, which is only the HTTP status code right now.
Don't panic when the error message cannot be written to the connection.
Ignore it, because there's nothing we can do in this case. The standard
library has the same behavior.
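A minimal sketch of the resulting error helper (handler and type names
are illustrative):

    package jobqueue

    import (
        "encoding/json"
        "net/http"
    )

    // errorResponse is the simple JSON object sent on errors:
    // { "message": "something went wrong" }
    type errorResponse struct {
        Message string `json:"message"`
    }

    // writeError sends an error as JSON with the given status code. If
    // writing to the connection fails, the error from Encode() is
    // deliberately ignored, mirroring the standard library's behavior.
    func writeError(w http.ResponseWriter, code int, message string) {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(code)
        _ = json.NewEncoder(w).Encode(errorResponse{Message: message})
    }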
This makes the jobqueue package independent of forking osbuild, the
choices for which (exact invocation, location of the cache directory)
should be made in the worker.
The usual style is three blocks: standard library, external
dependencies, internal dependencies.
Also run through `go fmt`, which sorts each of those blocks
alphabetically.
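Schematically (the packages are placeholders), an import block then
looks like this, with gofmt sorting within each group while keeping the
blank-line-separated groups intact:

    import (
        "encoding/json"
        "net/http"

        "github.com/google/uuid"

        "github.com/osbuild/osbuild-composer/internal/osbuild"
    )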
Similar to `weldr/json.go`, this file will contain structs that are
shared between client and server and only meant to be used to
serialize from and to JSON.
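For example (the concrete struct is made up), such a file holds plain
structs with JSON tags and no behavior:

    package jobqueue

    // addJobResponse is only an illustration of the kind of struct
    // that belongs here: shared by client and server, used purely for
    // (de)serialization.
    type addJobResponse struct {
        ID string `json:"id"`
    }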
This immediate change allows us to make fields in `Job` private in a
future commit.
This moves the client code into the same package as the server code,
which makes it easier to change (and version) the two in sync. Also, it
will allow us to make some structs private to the jobqueue package and to
test `Client`.
Also rename it to jobqueue.Client.
Rather than having to assume that we only ever produce one
artifact, have each upload target contain the filename it expects
to upload from the osbuild output.
An image file is always explicitly named in the manifest, and we
leave it up to each distro to decide how this is done, but the
convention is to use the same image filename as used when
downloading the image through weldr.
Now make this policy explicit by querying the distro for the image
name and inserting it into each upload target.
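A sketch of what that could look like (the package, the field on the
target, and FilenameFromType's return values are assumptions):

    package store

    import (
        "github.com/osbuild/osbuild-composer/internal/distro"
        "github.com/osbuild/osbuild-composer/internal/target"
    )

    // setImageNames queries the distro once for the image filename and
    // records it on every upload target, making the convention explicit.
    func setImageNames(d distro.Distro, outputType string, targets []*target.Target) error {
        filename, _, err := d.FilenameFromType(outputType)
        if err != nil {
            return err
        }
        for _, t := range targets {
            t.ImageName = filename // assumed field for the expected artifact name
        }
        return nil
    }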
Signed-off-by: Tom Gundersen <teg@jklm.no>
A job's purpose is to build an osbuild manifest and upload the results
somewhere. It should not need to know which distro was used to generate
the pipeline.
Workers depended on the distro package in two ways:
1. To set an osbuild `--build-env`. This is not necessary anymore in new
versions of osbuild. More importantly, it was wrong: it passed the
runner from the distro that is being built, instead of one that
matches the host.
This patch simply removes that logic.
2. To fetch the output filename with `Distro.FilenameFromType()`. While
that is useful, I don't think it warrants the dependency.
This patch uses the fact that all current pipelines output exactly
one file and uploads that. This should probably be extended in the
future to upload all output files, or to name them explicitly in the
upload target.
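A sketch of picking that single file (helper name and error handling
are illustrative):

    package worker

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // singleOutputFile returns the one file osbuild produced, relying
    // on the current guarantee that every pipeline outputs exactly one
    // file.
    func singleOutputFile(outputDir string) (string, error) {
        entries, err := os.ReadDir(outputDir)
        if err != nil {
            return "", err
        }
        if len(entries) != 1 {
            return "", fmt.Errorf("expected exactly one output file, got %d", len(entries))
        }
        return filepath.Join(outputDir, entries[0].Name()), nil
    }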
The worker should now compile to a smaller binary and do less
unnecessary work on startup (like reading repository files).
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed Store.Distro.
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Return errors from all distro's New() functions instead of logging and
returning nil. Also, return errors instead of panicking from
NewRegistry() and NewDefaultRegistry().
WithSingleDistro() doesn't follow Go's naming convention for creating
objects (New*). Rename it to NewRegistry() and rename the old
NewRegistry() to NewDefaultRegistry().
The idea is that NewRegistry() can be used to create full Registry
objects from outside the package. NewDefaultRegistry() is a convenience
function that creates a Registry with all known distros.
The current `NewRegistry` implementation allows for nil values in the
map, but this leads to subtle bugs when using the registry. This patch
enforces non-nil values by introducing additional checks before we
insert the value into the map.
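Roughly (Registry's internals and the error text are assumptions):

    package distro

    import "errors"

    // NewRegistry rejects nil values before they ever end up in the
    // map, so lookups can rely on every registered distro being usable.
    func NewRegistry(distros ...Distro) (*Registry, error) {
        reg := &Registry{distros: make(map[string]Distro)}
        for _, d := range distros {
            if d == nil {
                return nil, errors.New("NewRegistry: distro must not be nil")
            }
            reg.distros[d.Name()] = d
        }
        return reg, nil
    }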
The change unfortunately breaks a lot of tests, which makes it
necessary to create an additional mock distro.
The new mock is used instead of the previous "real" implementation,
which used to contain nil values.
This is unused for now, but will allow us to generate pipelines with
the pre-depsolved NEVRAs, so osbuild does not need to depsolve again.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rename the package from `pipeline` to `osbuild` to reflect that it
will no longer be specific to pipelines, but rather covers all
osbuild datatypes.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Prior to this commit, the local target copied the image from a worker
to the composer using the cp(1) command. This prevented the local
target from working on remote workers.
This commit switches the local target implementation to the jobqueue
API introduced in the previous commit. I had some concerns about the
speed of this solution (imho nothing beats a pure cp(1)
implementation), but ad hoc sanity tests showed that copying the image
through the jobqueue API is still more or less instant when the worker
runs on the same machine as the composer.
Imho it makes more sense from a REST perspective. Also, in the future
there will be a route for uploading an image to an image build. As it's
not a good idea to transport a file inside JSON, all the parameters
(compose id and image build id) need to be in the URL. Therefore, for
the sake of consistency, all these routes should have the compose id
and image build id in the URL.
There is another way to embed multiple values in an HTTP body while
still allowing file transport: multipart/form-data. I think form-data
is worth it for more complex cases; for our use case, transporting all
the metadata in the URL is the more appropriate solution.
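For illustration only (these exact paths are hypothetical):

    package jobqueue

    // Both identifiers live in the URL, so a future image-upload
    // route, which cannot carry parameters in a JSON body, fits the
    // same scheme.
    const (
        routeImageBuild  = "/compose/:compose_id/image-build/:image_build_id"
        routeImageUpload = "/compose/:compose_id/image-build/:image_build_id/image"
    )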
Everything that this field contained can be computed in another way:
- path: just look up the local target and read the path from there
- mime: can be derived from the distribution and the compose output type
- size: can be derived from the path (see the sketch below)
Therefore it imho doesn't make much sense to store this information
multiple times.
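For example, the size can be recomputed on demand (the helper is
hypothetical; mime would come from the distro and output type, the path
from the local target):

    package store

    import "os"

    // imageSize derives the image size from its path instead of
    // storing it.
    func imageSize(path string) (int64, error) {
        info, err := os.Stat(path)
        if err != nil {
            return 0, err
        }
        return info.Size(), nil
    }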
The compose will soon move to a concept of multiple image builds per
compose, so we need an extra identifier to handle this scenario.