The method is available in the Distro interface, but the distro no
longer has the information needed to implement it. The logic is now
split between the Arch and ImageType interfaces. This patch will allow
us to get rid of some old code and move forward.
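Roughly, the new shape looks like this (a sketch only; the method names
are illustrative, not the exact interfaces):

    package distro

    type Distro interface {
            Name() string
            GetArch(name string) (Arch, error)
    }

    type Arch interface {
            Name() string
            ListImageTypes() []string
            GetImageType(name string) (ImageType, error)
    }

    type ImageType interface {
            Name() string
            Filename() string
    }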
The format of the BuildTime returned by /projects/list and /modules/list
does not include the 'Z' at the end. This fixes the format and adjusts
the tests.
The automatic local target is only needed when accessing the API via
weldr.
In the store, the target was only added when `stateDir` was not `nil`.
A `nil` `stateDir` is only used for testing, which doesn't exercise
this branch via weldr, so the same check is not needed there.
The following commit will introduce support for forced architecture in
dnf-json. The APIs already have this kind of information, so we can
simply pass it to the Depsolve and FetchMetadata functions.
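A hedged sketch of the plumbing, with simplified signatures (the real
rpmmd functions take more arguments; FetchMetadata gains the same
parameter):

    package rpmmd

    type RepoConfig struct {
            Id      string `json:"id"`
            BaseURL string `json:"baseurl"`
    }

    type PackageSpec struct {
            Name    string `json:"name"`
            Version string `json:"version"`
    }

    // Depsolve forwards the caller's architecture to dnf-json instead
    // of letting dnf fall back to the host architecture.
    func Depsolve(specs []string, repos []RepoConfig, arch string) ([]PackageSpec, error) {
            return runDNFJSON(map[string]interface{}{
                    "command":       "depsolve",
                    "package-specs": specs,
                    "repos":         repos,
                    "arch":          arch, // forced architecture
            })
    }

    // runDNFJSON would exec dnf-json and decode its reply; elided here.
    func runDNFJSON(args map[string]interface{}) ([]PackageSpec, error) {
            return nil, nil
    }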
Currently all image types are supported on all arches, but in the
future we may want to restrict this. In that case, return the
image types that are valid for the arch in question, rather than
all the possible ones.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Check for errors and return early if they are found, rather than
checking for the absence of errors.
This is not a functional change.
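For illustration only (generic Go, not the actual diff):

    package example

    func doWork() (string, error) { return "ok", nil }
    func use(string)              {}

    // Before: the happy path is nested under a success check.
    func before() error {
            result, err := doWork()
            if err == nil {
                    use(result)
            } else {
                    return err
            }
            return nil
    }

    // After: check for errors and return early; the happy path reads
    // straight down, unindented.
    func after() error {
            result, err := doWork()
            if err != nil {
                    return err
            }
            use(result)
            return nil
    }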
Signed-off-by: Tom Gundersen <teg@jklm.no>
weldr needs to know the host architecture. Rather than pinning
a string, pin a real Arch object, and query its name when we
need it.
This verifies the validity of the architecture for the given
distro before it is passed to weldr, rather than lazily on
demand.
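A minimal sketch, assuming a GetArch lookup on the distro (types mirror
the earlier interface sketch; the error handling is illustrative):

    package weldr

    import "log"

    type Arch interface{ Name() string }

    type Distro interface {
            Name() string
            GetArch(name string) (Arch, error)
    }

    // hostArch resolves the host architecture on the given distro once,
    // at startup, so an unsupported combination fails immediately
    // rather than lazily on demand inside weldr.
    func hostArch(d Distro, name string) Arch {
            arch, err := d.GetArch(name)
            if err != nil {
                    log.Fatalf("distro %q does not support architecture %q: %v", d.Name(), name, err)
            }
            return arch
    }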
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than having to assume that we only ever produce one
artifact, have each upload target contain the filename it expects
to upload from the osbuild output.
An image file is always explicitly named in the manifest, and we
leave it up to each distro to decide how this is done, but the
convention is to use the same image filename as used when
downloading the image through weldr.
Now make this policy explicit by querying the distro for the image
name and inserting it into each upload target.
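Hypothetical shapes (the real structs in the target package differ):

    package target

    type ImageType interface {
            // Filename is the artifact name the manifest will produce,
            // the same name weldr uses for image downloads.
            Filename() string
    }

    type Target struct {
            Name      string
            ImageName string // file to pick up from the osbuild output
    }

    // NewTarget records the expected artifact filename on the target,
    // so consumers no longer assume a single, implicitly-named artifact.
    func NewTarget(name string, t ImageType) *Target {
            return &Target{
                    Name:      name,
                    ImageName: t.Filename(),
            }
    }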
Signed-off-by: Tom Gundersen <teg@jklm.no>
For serialization, make our own private structs. The structs in the
target package are not exactly the same as the ones used by weldr, so
in order to avoid too many compromises, let's just do an explicit
translation.
As a general principle, we aim to only use private types for
serialization and rather translate than reuse for different
purposes.
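For example, roughly (names invented for illustration):

    package weldr

    // Internal type owned by the target package (simplified here).
    type AWSTargetOptions struct {
            Region string
            Bucket string
            Key    string
    }

    // Private struct used only for serialization in the weldr API.
    type awsUploadSettings struct {
            Region string `json:"region"`
            Bucket string `json:"bucket"`
            Key    string `json:"key"`
    }

    // Translate explicitly rather than serializing the internal type.
    func newAWSUploadSettings(o *AWSTargetOptions) awsUploadSettings {
            return awsUploadSettings{
                    Region: o.Region,
                    Bucket: o.Bucket,
                    Key:    o.Key,
            }
    }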
Signed-off-by: Tom Gundersen <teg@jklm.no>
The same types are used in the weldr API as internally. We want
to avoid sharing serialized types like this, as it easily leads
to layering violations.
For now just make the translation explicit; in a follow-up
we will introduce types dedicated to serialization in the weldr
API.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Mixing the way to build a distribution with where to get the source
packages from is wrong: it breaks pre-release repos, local mirrors, and
other use cases. To accommodate those, we introduced
`/etc/osbuild-composer/repositories`.
However, that doesn't work for the RCM API, which receives repository
URLs to use from outside requests. This API has been wrongly using the
`additionalRepos` parameter to inject those repos. That's broken,
because the resulting manifests contained both the installed repos and
the repos from the request.
To fix this, stop exposing repositories from the distros, but require
passing them on every call to `Manifest()`. This makes `additionalRepos`
redundant.
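Sketched, with assumed names:

    package distro

    type RepoConfig struct {
            Id      string
            BaseURL string
    }

    type Manifest []byte

    type ImageType interface {
            // Manifest takes the repositories to use on every call; the
            // distro itself no longer exposes or injects any
            // repositories of its own.
            Manifest(packages []string, repos []RepoConfig) (Manifest, error)
    }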
Fixes #341
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed Store.Distro.
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between when we
generated the pipeline and called osbuild. We achieved this by always
updating to the most recent metadata on every call to rpmmd.Depsolve
that would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The response is different for JSON and TOML requests. If it is JSON it
will always return a 200, but any blueprints with errors will be in the
errors list.
If the TOML has an error, it returns a 400 error with the details in
a standard API error response with status set to false.
The JSON and TOML parsers differ in how they handle an empty body so
check for a ContentLength of zero first and return a "Missing
blueprint" error to the client.
Includes updated tests for the JSON path, and new tests for empty TOML
blueprints.
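A sketch of the empty-body guard (handler shape and error format are
simplified; the real weldr helpers differ):

    package weldr

    import (
            "encoding/json"
            "net/http"
    )

    func blueprintsNewHandler(w http.ResponseWriter, r *http.Request) {
            // The JSON and TOML parsers disagree on empty bodies, so
            // reject a zero-length request before choosing a parser.
            if r.ContentLength == 0 {
                    w.WriteHeader(http.StatusBadRequest)
                    _ = json.NewEncoder(w).Encode(map[string]interface{}{
                            "status": false,
                            "errors": []string{"Missing blueprint"},
                    })
                    return
            }
            // ...dispatch to the JSON or TOML parser on Content-Type...
    }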
If the blueprint doesn't exist, or the commit for the selected
blueprint doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.
If an unknown blueprint or workspace is deleted it will now return an
error.
Also fixes the blueprints DELETE handlers to return the correct error to
the client. Includes a new test.
Previously the order that changes were made to blueprints was not being
saved. I worked around this by sorting by timestamp, but timestamps
only have 1s resolution, so changes are very likely to end up with the
same timestamp, especially when running tests.
This adds a new field to the Store: a list of the commit hashes for
each blueprint, in the order they were made.
Since this is a change to the Store schema the first time the new code
is run with the old store state it needs to populate the commit list, as
best it can, with the existing data. To do that it sorts the changes for
each blueprint by timestamp and version and saves this ordering into the
new BlueprintsCommits list.
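A sketch of the migration (field and type names assumed):

    package store

    import "sort"

    type Change struct {
            Commit    string
            Timestamp string
            Version   string
    }

    type Store struct {
            // Commit hashes per blueprint, in the order they were made.
            BlueprintsCommits map[string][]string
    }

    // populateCommits reconstructs the ordering for an old store state
    // by sorting the existing changes by timestamp, then version.
    func (s *Store) populateCommits(changes map[string][]Change) {
            if s.BlueprintsCommits == nil {
                    s.BlueprintsCommits = map[string][]string{}
            }
            for name, cs := range changes {
                    sort.Slice(cs, func(i, j int) bool {
                            if cs[i].Timestamp != cs[j].Timestamp {
                                    return cs[i].Timestamp < cs[j].Timestamp
                            }
                            return cs[i].Version < cs[j].Version
                    })
                    for _, c := range cs {
                            s.BlueprintsCommits[name] = append(s.BlueprintsCommits[name], c.Commit)
                    }
            }
    }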
A POST to this route will tag the latest commit of a blueprint as a new
revision. The revision numbers start at 1 and increment on each call.
If the latest commit has already been tagged it ignores the request.
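Roughly, under assumed field names (the maps are assumed to be
initialized elsewhere):

    package store

    type Store struct {
            commits   map[string][]string // commit hashes, oldest first
            revisions map[string]int      // commit hash -> revision
            latestRev map[string]int      // highest revision so far
    }

    // TagLatest tags the newest commit of a blueprint as the next
    // revision; re-tagging an already-tagged commit is a no-op.
    func (s *Store) TagLatest(name string) {
            commits := s.commits[name]
            if len(commits) == 0 {
                    return
            }
            latest := commits[len(commits)-1]
            if s.revisions[latest] != 0 {
                    return // already tagged; ignore the request
            }
            s.latestRev[name]++ // revisions start at 1
            s.revisions[latest] = s.latestRev[name]
    }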
If the user creates a new blueprint with no version specified, the
blueprint struct uses "0.0.0" as the default version. Blueprint tests
for a blueprint with an empty version now expect no error.
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
If no packages are included in a blueprint, the slice remains `nil`,
which translates to `null` in JSON. Always initialize the slice to an
empty, non-nil slice instead.
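This is standard encoding/json behavior:

    package main

    import (
            "encoding/json"
            "fmt"
    )

    func main() {
            var pkgs []string // nil slice
            out, _ := json.Marshal(pkgs)
            fmt.Println(string(out)) // null

            pkgs = []string{} // empty, non-nil slice
            out, _ = json.Marshal(pkgs)
            fmt.Println(string(out)) // []
    }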
This makes the store's PushBlueprint* functions return errors, and
adds handling of those errors to the API code, in preparation for new
code that checks the blueprint before saving it.
The name was misleading because the function could do more than just
download the package list. In PushComposeRequest it is also used to
fetch checksums for the repositories, therefore I decided to rename it
to reflect this usage.
If the Epoch is > 0, it should be added to the front of the version,
separated by a colon.
Also include a depsolve package with a non-zero Epoch and adjust the
tests accordingly.
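For example (helper name invented for illustration):

    package rpmmd

    import "fmt"

    // versionWithEpoch prepends "epoch:" only when the epoch is
    // non-zero, e.g. versionWithEpoch(1, "2.3.4") yields "1:2.3.4",
    // while versionWithEpoch(0, "2.3.4") yields just "2.3.4".
    func versionWithEpoch(epoch uint, version string) string {
            if epoch > 0 {
                    return fmt.Sprintf("%d:%s", epoch, version)
            }
            return version
    }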