After the queue rework, TestCompose() was the only user of
UpdateImageBuildInCompose() and Compose.UpdateState(). Set all required
fields in TestCompose() directly instead of going through those.
The store is responsible for two things: user state and the compose queue. This
is problematic, because the rcm API has slightly different semantics from weldr
and only uses the queue part of the store. Also, the store is simply too
complex.
This commit splits the queue part out, using the new jobqueue package in both
the weldr and the rcm package. The queue is saved to a new directory `queue/`.
The weldr package now also has access to a worker server to enqueue and list
jobs. Its store continues to track composes, but the `QueueStatus` for each
compose (and image build) is deprecated. The field in `ImageBuild` is kept for
backwards compatibility for composes which finished before this change, but a
lot of code dealing with it in package compose is dropped.
store.PushCompose() is degraded to storing a new compose. It should probably be
renamed in the future. store.PopJob() is removed.
Job ids are now independent of compose ids. Because of that, the local
target gains ComposeId and ImageBuildId fields, because a worker cannot
infer those from a job anymore. This also necessitates a change in the
worker API: the job routes are changed to expect a job id instead of a
(compose id, image build id) pair. The route that accepts built images
keeps that pair, because it reports the image back to weldr.
worker.Server() now interacts with a job queue instead of the store. It gains
public functions that allow enqueuing an osbuild job and getting its status,
because only it knows about the specific argument and result types in the job
queue (OSBuildJob and OSBuildJobResult). One oddity remains: it needs to report
an uploaded image to weldr. Do this with a function that's passed in for now,
so that the dependency on the store can be dropped completely.
The rcm API drops its dependencies on the blueprint and store packages, because it
too interacts only with the worker server now.
Fixes #342
Code that's calling PushCompose() had to depsolve packages and fetch the
right ImageType from a distro, but not create the osbuild manifest. That
was left for PushCompose to do. Move it out of there to the callers, so
that the store is mainly concerned with storing things.
This also simplifies the argument list of PushCompose().
Yes. This goes against my desire not to change code to accommodate
tests. But there is no other good way to test compose results without
long-running, and possibly fragile, composes. And this matches lorax's
behavior.
The change adds support for the ?test=1|2 query parameter to the compose
POST, and a new store function, PushTestCompose(), that handles creating
the fake compose results.
Passing ?test=1 will create a failed compose. Passing ?test=2 will
create a successful compose, but one without any files.
The purpose of these is to be able to test the compose result API
responses like compose/failed, etc.
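A rough sketch of how a handler might branch on that parameter (the handler name and the comments are illustrative, not the actual weldr code):

    func composeHandler(w http.ResponseWriter, r *http.Request) {
        switch r.URL.Query().Get("test") {
        case "1":
            // push a fake compose that is immediately marked as failed
        case "2":
            // push a fake compose that finishes successfully but has no files
        default:
            // normal path: depsolve, build the manifest, enqueue a real job
        }
    }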
If you don't set the time, it ends up being Go's zero time, which is
1/1/1. Converting that with UnixNano() yields a nice big negative number
(-6795364578871345152), which then eventually shows up in the queue,
finished, and failed output as 'Fri Aug 30 17:47:39 1754', and since the
Time Travel feature is not yet complete this is impossible.
The fix is to set it when the job is started.
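The underlying Go behaviour, in isolation (the negative number quoted above comes from int64 overflow when converting the zero time):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var start time.Time           // zero value: January 1, year 1, 00:00:00 UTC
        fmt.Println(start.UnixNano()) // overflows int64: the large negative value above
        start = time.Now()            // set when the job actually starts
        fmt.Println(start.UnixNano()) // sane, positive timestamp
    }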
If a user creates a compose with a size of 0, the default image size for
the image type should be used. Also, certain image types have
requirements for the image size. In order to ensure that the proper image
size is stored in the compose object, the compose's ImageBuild object
uses ImageType.Size() to get the correct image size for the image type.
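In code terms, roughly (the surrounding names are assumptions; only ImageType.Size() is taken from the text above):

    // A requested size of 0 means "use the default for this image type";
    // types with a minimum size are also handled by ImageType.Size().
    imageBuild.Size = imageType.Size(request.Size)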
weldr's store is quite complex and handled serialization itself. Move
that part out into a separate package `jsondb`.
This package is more generic than the store needs: it can write an
arbitrary number of JSON documents, while the store only needs one:
state.json. This is in preparation for future work, which introduces a
queue package that builds on top of `jsondb`.
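A hedged usage sketch, assuming an API along the lines of New/Read/Write (the exact signatures and the storeStateV0 type are assumptions):

    // loadState reads <dir>/state.json into a store state struct.
    func loadState(dir string) (*storeStateV0, error) {
        db := jsondb.New(dir, 0600)

        var state storeStateV0
        found, err := db.Read("state", &state)
        if err != nil {
            return nil, err // the file exists but could not be parsed
        }
        if !found {
            return &storeStateV0{}, nil // start from an empty state
        }
        return &state, nil
    }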
TOML output of the source wasn't using the correct field names or case.
This adds toml tags to the weldr.SourceConfigV0 and store.SourceConfig
structs.
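Illustrative shape of the change (the field list is abbreviated and the exact fields are assumptions; the point is that every field now carries a matching toml tag):

    type SourceConfig struct {
        Name     string `json:"name" toml:"name"`
        Type     string `json:"type" toml:"type"`
        URL      string `json:"url" toml:"url"`
        CheckGPG bool   `json:"check_gpg" toml:"check_gpg"`
        CheckSSL bool   `json:"check_ssl" toml:"check_ssl"`
    }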
The weldr API already passes custom repositories, so they end up in the
list twice. This is a bit awkward, because weldr passes something to the
store that the store already knows about. It only works this way right
now though, because the rcm API doesn't need these repositories at all.
It expects all repositories to be passed.
If a compose was interrupted by restarting osbuild-composer it should be
set to failed at startup. This was not working because imgBuild is a
temporary variable; the value stored in ImageBuild needs to be modified
directly.
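The underlying Go gotcha, in isolation (the struct here is illustrative, not the real ImageBuild type):

    type imageBuild struct{ failed bool }

    func markAllFailed(builds []imageBuild) {
        for _, b := range builds {
            b.failed = true // modifies a copy of the element; the change is lost
        }
        for i := range builds {
            builds[i].failed = true // modifies the stored element directly
        }
    }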
According to the new guidelines in docs/errors.md.
Note that this does not include code that marshals to a writer that
might fail (when a connection drops, for example).
The automatic local target is only needed when accessing the API via
weldr.
In the store, the target was only added when `stateDir` was not `nil`.
This is only used for testing, which doesn't exercise the branch in
weldr. Thus, the same check is not needed there.
Rather than having to assume that we only ever produce one
artifact, have each upload target contain the filename it expects
to upload from the osbuild output.
An image file is always explicitly named in the manifest, and we
leave it up to each distro to decide how this is done, but the
convention is to use the same image filename as used when
downloading the image through weldr.
Now make this policy explicit, by querying the distro for the image
name and inserting it into each upload target.
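Sketched shape of the change (the field name and tag are assumptions): each target's options now carry the image filename to pick up from the osbuild output.

    type LocalTargetOptions struct {
        // ...existing options...
        Filename string `json:"filename"` // image name queried from the distro
    }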
Signed-off-by: Tom Gundersen <teg@jklm.no>
A job's purpose is to build an osbuild manifest and upload the results
somewhere. It should not know about which distro was used to generate
the pipeline.
Workers depended on the distro package in two ways:
1. To set an osbuild `--build-env`. This is not necessary anymore in new
versions of osbuild. More importantly, it was wrong: it passed the
runner from the distro that is being built, instead of one that
matches the host.
This patch simply removes that logic.
2. To fetch the output filename with `Distro.FilenameFromType()`. While
that is useful, I don't think it warrants the dependency.
This patch uses the fact that all current pipelines output exactly
one file and uploads that. This should probably be extended in the
future to upload all output files, or to name them explicitly in the
upload target.
The worker should now compile to a smaller binary and do less
unnecessary work on startup (like reading repository files).
Now that `Store.PushCompose()` takes a `Distro` as argument, the rcm API
can use that function as well. This moves them both through the same
code path, reducing duplication.
Remove `PushComposeRequest()` and the corresponding struct. It was
supposed to allow composes with multiple output types and architectures,
but that was not yet implemented. Merging the two now simplifies moving
the compose queue out of the store in a future commit, which will then
tackle multi-image-type composes as well.
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed Store.Distro.
In the post-dnf-stage world, `Distro.Manifest` expects the full list of
depsolved packages. This is similar to what weldr does, but much
simpler, because the rcm API only cares about base packages.
`ComposeRequest` included a `common.Distribution`, which had to be
resolved in PushComposeRequest. Use a real `distro.Distro` object here,
and push resolving it to the rcm package.
Change the `Distribution` on the (lower-case) `composeRequest` to a
string. This struct represents the incoming request. Since we're now
resolving the real distro object from the registry in the same function,
it seems redundant to validate the incoming distro twice.
This makes two changes simultaneously, to avoid too much churn:
- move accessors from being on the blueprint struct to the
customizations struct, and
- pass the customizations struct rather than the whole blueprint
as argument to distro.Manifest().
@larskarlitski pointed out in a previous review that it feels
redundant to pass the whole blueprint as well as the list of
packages to the Manifest function. Indeed it is, so this
simplifies things a bit.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This was never actually used anywhere, as passing it to dnf-json
was a noop.
We may want to reconsider the concept of a source/repo name and
how it differs from an ID, but for now drop the name.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A ComposeRequest is data used to submit a compose to the store, so it
should live in that package.
Remove the json marshalling test, because ComposeRequest is never
marshalled to JSON.
This will allow using types from `distro` in the ComposeRequest struct.
If the blueprint doesn't exist, or the commit for the selected blueprint
doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.
If an unknown blueprint or workspace is deleted it will now return an
error.
Also fixes the blueprints DELETE handlers to return the correct error to
the client. Includes a new test.
Previously, the order in which changes were made to blueprints was not
being saved. I worked around this by sorting by timestamp, but it only
has 1s resolution, so it is very likely to end up with changes having
the same timestamp, especially when running tests.
This adds a new variable to the Store: a list of the commit hashes
for each blueprint, in the order they were made.
Since this is a change to the Store schema the first time the new code
is run with the old store state it needs to populate the commit list, as
best it can, with the existing data. To do that it sorts the changes for
each blueprint by timestamp and version and saves this ordering into the
new BlueprintsCommits list.
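Sketch of the new store field (the exact type and tag are assumptions):

    type Store struct {
        // ...
        // Commit hashes per blueprint name, in the order they were pushed.
        BlueprintsCommits map[string][]string `json:"blueprints_commits"`
    }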
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This changes osbuild-composer's behavior to match lorax-composer when
encountering invalid versions. Instead of leaving them as-is, it will
return a BlueprintsError explaining the problem, e.g.:

    "errors": [
        {
            "id": "BlueprintsError",
            "msg": "Invalid 'version', must use Semantic Versioning: is not in dotted-tri format"
        }
    ]
This is enforced on new blueprints (including the workspace). If a
previously stored blueprint has an invalid version and a new one is
pushed it will use the new version number instead of trying to bump the
invalid one.
This also moves the version bump logic into blueprint instead of store,
and adds an Initialize function that will make sure that the blueprint
has sane default values for any missing fields.
This includes tests for the Initialize and BumpVersion functions.
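A hedged sketch of the bump rule, using a semver library for illustration (the real BumpVersion code may differ in detail):

    import (
        "fmt"

        "github.com/coreos/go-semver/semver"
    )

    // bumpVersion returns the version to store: an invalid version is an
    // error, and pushing the same version again bumps the patch level.
    func bumpVersion(stored, pushed string) (string, error) {
        v, err := semver.NewVersion(pushed)
        if err != nil {
            return "", fmt.Errorf("Invalid 'version', must use Semantic Versioning: %v", err)
        }
        if pushed == stored {
            v.BumpPatch()
        }
        return v.String(), nil
    }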
This adds returning errors from the store PushBlueprint* functions, and
adds handling of the errors to the API code in preparation for new code
to check the blueprint before saving it.
This is needed for unit tests, because it wasn't possible to mock the
rpmmd module before. This also requires that the checksum is moved to
the compose request and evaluated in the endpoint handler instead of
push compose. I think it makes sense to have the checksum in the compose
request directly.
Also a "module platform ID" is required now, but we don't have the
"global" distribution any more, so this patch introduces mapping from a
distribution to the module platform ID.
The name was misleading, because the function could do more than just
download the package list. In PushComposeRequest it is also used to fetch
checksums for the repositories, so I decided to rename it to reflect
this usage.
This is unused for now, but will allow us to generate pipelines with
the pre-depsolved NEVRAs, so osbuild does not need to depsolve again.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rename the package from `pipeline` to `osbuild` to reflect that it
will no longer be specific to pipelines, but rather covers all
osbuild datatypes.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When support for the osbuild result was added to osbuild-composer, it was
done in a somewhat hacky way: the local target's location was reused as
the path for the result. This didn't make much sense, because we want to
store the result even when an image build has no local target.
Several past commits made the store less dependent on the local target.
The responsibility for "holding the paths" to build artifacts was
gradually moved from the local target to the store while still maintaining
backwards compatibility: localtarget.Location still pointed at the
correct location.
This commit finishes the switch: the local target no longer has a Location
field. The store is now fully responsible for managing the artifacts and
the paths to them. LocalTarget is now just a simple "switch": if an image
build has it, the worker uploads the image into the store, where it's then
available for download via the weldr API.
In #221 Compose was refactored: it can now have multiple image builds. More
image builds result in more jobs, and each job has its own result (logs from
osbuild). Additionally, targets are now part of an image build. With the
local target this effectively means we can have multiple images per compose.
However, these artifacts (images & results) were stored only per compose
prior to this commit, rendering the behaviour of composes with multiple
image builds undefined and racy.
This commit fixes that by storing all the artifacts per image build instead
of per compose. To achieve this, getComposeDirectory and
getImageBuildDirectory methods were created to centralize the path assembly
(sketched below).
Paths to artifacts prior to this commit:
${COMPOSER_STATE_DIR}/outputs/${COMPOSE_ID}/*
Paths to artifacts after this commit:
${COMPOSER_STATE_DIR}/outputs/${COMPOSE_ID}/${IMAGE_BUILD_ID}/*
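A sketch of the two helpers (the signatures and the stateDir field are assumptions):

    func (s *Store) getComposeDirectory(composeID uuid.UUID) string {
        return filepath.Join(s.stateDir, "outputs", composeID.String())
    }

    func (s *Store) getImageBuildDirectory(composeID uuid.UUID, imageBuildID int) string {
        return filepath.Join(s.getComposeDirectory(composeID), strconv.Itoa(imageBuildID))
    }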
Everything that this field contained can be computed in another way:
- path: just look up the local target and read the path from there
- mime: can be derived from the distribution and compose output type
- size: can be derived from the path
Therefore it imho doesn't make much sense to store this information
multiple times.