Currently, if a TOML source is added with no name, or if the source is
incorrectly nested inside a [section], an empty source is added, causing
depsolving to crash.
This adds checks on the 'name' and 'type' fields as a minimum requirement,
and returns an API error if they are empty or missing.
This also includes unit and integration tests.
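As a rough sketch of the check this adds (the type and function names here
are assumptions, not the actual weldr code):

```go
package weldr

import "fmt"

// SourceConfig is an assumed stand-in for a decoded TOML source.
type SourceConfig struct {
	Name string `json:"name" toml:"name"`
	Type string `json:"type" toml:"type"`
	URL  string `json:"url" toml:"url"`
}

// validateSource rejects a source whose required fields are empty or
// missing, so an empty source never reaches the store or depsolving.
func validateSource(s SourceConfig) error {
	if s.Name == "" {
		return fmt.Errorf("'name' field is missing or empty")
	}
	if s.Type == "" {
		return fmt.Errorf("'type' field is missing or empty")
	}
	return nil
}
```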
Closes PR#462
The method is available in the Distro interface, but the distro no longer
has the information needed to provide it. The logic is now split between
the Arch and ImageType interfaces. This patch will allow us to get rid of
some old code and move forward.
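Roughly, the split looks like this (a sketch with assumed method names,
not the exact interfaces):

```go
package distro

// Distro only knows which architectures it supports.
type Distro interface {
	Name() string
	GetArch(name string) (Arch, error)
}

// Arch knows which image types can be built for it.
type Arch interface {
	Name() string
	ListImageTypes() []string
	GetImageType(name string) (ImageType, error)
}

// ImageType carries the image-specific details that used to live on the
// distro itself.
type ImageType interface {
	Name() string
	Filename() string
}
```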
The automatic local target is only needed when accessing the API via
weldr.
In the store, the target was only added when `stateDir` was not `nil`.
A `nil` `stateDir` is only used for testing, which doesn't exercise this
branch via weldr. Thus, the same check is not needed there.
The following commit will introduce support for forced architecture in
dnf-json. The APIs already have this kind of information, so we can
simply pass it to the Depsolve and FetchMetadata functions.
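As a sketch (the names and signatures are assumptions), the architecture
simply becomes another explicit argument:

```go
package rpmmd

// RepoConfig and PackageSpec are minimal stand-ins for the real types.
type RepoConfig struct {
	ID      string
	BaseURL string
}

type PackageSpec struct {
	Name, Epoch, Version, Release, Arch string
}

// Solver sketches the API after the change: callers pass the architecture
// to depsolve for, instead of dnf-json always using the host's.
type Solver interface {
	FetchMetadata(repos []RepoConfig, arch string) ([]PackageSpec, map[string]string, error)
	Depsolve(specs []string, repos []RepoConfig, arch string) ([]PackageSpec, error)
}
```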
Currently all image types are supported on all arches, but in the
future we may want to restrict this. In that case, return the
image types that are valid for the arch in question, rather than
all the possible ones.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Check for errors and return early if they are found, rather than
checking for the absence of errors.
This is not a functional change.
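For illustration, the pattern is the usual Go early return:

```go
package example

import "errors"

var errNotReady = errors.New("not ready")

func step() error { return errNotReady }

// Before: checking for the absence of errors nests the happy path.
func before() error {
	err := step()
	if err == nil {
		// ... continue with the real work ...
		return nil
	}
	return err
}

// After: check for the error and return early; the happy path stays flat.
func after() error {
	if err := step(); err != nil {
		return err
	}
	// ... continue with the real work ...
	return nil
}
```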
Signed-off-by: Tom Gundersen <teg@jklm.no>
weldr needs to know the host architecture. Rather than pinning
a string, pin a real Arch object, and query its name when we
need it.
This verifies the validity of the architecture for the given
distro before it is passed to weldr, rather than lazily on
demand.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than having to assume that we only ever produce one
artifact, have each upload target contain the filename it expects
to upload from the osbuild output.
An image file is always explicitly named in the manifest, and we
leave it up to each distro to decide how this is done, but the
convention is to use the same image filename as used when
downloading the image through weldr.
Now make this policy explicit, by querying the distro for the image
name and inserting it into each upload target.
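A minimal sketch of the idea (simplified stand-in types, not the real
target package):

```go
package target

// ImageType stands in for the distro-side type that knows the filename
// the manifest gives to the image artifact.
type ImageType interface {
	Filename() string
}

// Target records which file from the osbuild output it expects to upload.
type Target struct {
	Name      string
	ImageName string
}

// NewTarget asks the image type for the artifact name instead of assuming
// that osbuild only ever produces a single artifact.
func NewTarget(name string, imageType ImageType) *Target {
	return &Target{
		Name:      name,
		ImageName: imageType.Filename(),
	}
}
```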
Signed-off-by: Tom Gundersen <teg@jklm.no>
For serialization, make our own private structs. The structs
in the target package are not exactly the same as the ones used by
weldr, so in order to avoid too many compromises, let's just do
an explicit translation.
As a general principle, we aim to only use private types for
serialization and rather translate than reuse for different
purposes.
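Sketch of the translation (both structs are simplified stand-ins):

```go
package weldr

// Target stands in for the shared type from the target package.
type Target struct {
	Name      string
	ImageName string
}

// uploadRequest is private to weldr and only used for (de)serializing the
// API payload.
type uploadRequest struct {
	Provider  string `json:"provider"`
	ImageName string `json:"image_name"`
}

// targetToUploadRequest translates explicitly rather than reusing the
// shared struct for serialization, so the two can evolve independently.
func targetToUploadRequest(t Target) uploadRequest {
	return uploadRequest{
		Provider:  t.Name,
		ImageName: t.ImageName,
	}
}
```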
Signed-off-by: Tom Gundersen <teg@jklm.no>
Mixing the way to build a distribution with where to get the source
packages from is wrong: it breaks pre-release repos, local mirrors, and
other use cases. To accommodate those, we introduced
`/etc/osbuild-composer/repositories`.
However, that doesn't work for the RCM API, which receives repository
URLs to use from outside requests. This API has been wrongly using the
`additionalRepos` parameter to inject those repos. That's broken,
because the resulting manifests contained both the installed repos and
the repos from the request.
To fix this, stop exposing repositories from the distros, but require
passing them on every call to `Manifest()`. This makes `additionalRepos`
redundant.
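Sketched signature (the parameter names and surrounding interface are
assumptions; the other arguments are elided):

```go
package distro

// RepoConfig stands in for the repository description the caller provides:
// weldr reads /etc/osbuild-composer/repositories, the RCM API takes the
// repositories from the request.
type RepoConfig struct {
	ID      string
	BaseURL string
}

// Manifester sketches the change: every call to Manifest() supplies the
// repositories to use, so the distro exposes none of its own and
// additionalRepos goes away.
type Manifester interface {
	Manifest(repos []RepoConfig /* , blueprint, arch, imageType, ... */) ([]byte, error)
}
```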
Fixes #341
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed Store.Distro.
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between when we generated
the pipeline and when we called osbuild. We achieved this by always
updating to the most recent metadata on every call to rpmmd.Depsolve that
would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The response is different for JSON and TOML requests. If it is JSON it
will always return a 200, but any blueprints with errors will be in the
errors list.
If the TOML has an error it will return a 400 with the error in a
standard API error response with status set to false.
The JSON and TOML parsers differ in how they handle an empty body so
check for a ContentLength of zero first and return a "Missing
blueprint" error to the client.
Includes updated tests for the JSON path, and new tests for empty TOML
blueprints.
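A sketch of the handler logic described above (the error wording and
content-type handling are simplified):

```go
package weldr

import (
	"fmt"
	"net/http"
)

func blueprintsNewHandler(w http.ResponseWriter, r *http.Request) {
	// The JSON and TOML parsers disagree on empty bodies, so catch that
	// case up front.
	if r.ContentLength == 0 {
		w.WriteHeader(http.StatusBadRequest)
		fmt.Fprint(w, `{"status": false, "errors": [{"msg": "Missing blueprint"}]}`)
		return
	}

	switch r.Header.Get("Content-Type") {
	case "text/x-toml":
		// A TOML parse error becomes a 400 with a standard API error
		// response and status set to false.
	default:
		// JSON always answers 200; blueprints that failed to parse are
		// reported in the response's errors list.
	}
	w.WriteHeader(http.StatusOK)
}
```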
If the blueprint doesn't exist, or the commit for the selected blueprint
doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.
If an unknown blueprint or workspace is deleted, it will now return an
error.
Also fixes the blueprints DELETE handlers to return the correct error to
the client. Includes a new test.
Previously the order in which changes were made to blueprints was not
being saved. I worked around this by sorting by timestamp, but timestamps
only have 1s resolution, so it is very likely that changes end up with the
same timestamp, especially when running tests.
This adds a new field to the Store: a list of the commit hashes
for each blueprint, in the order they were made.
Since this is a change to the Store schema the first time the new code
is run with the old store state it needs to populate the commit list, as
best it can, with the existing data. To do that it sorts the changes for
each blueprint by timestamp and version and saves this ordering into the
new BlueprintsCommits list.
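The migration amounts to something like this sketch (types are simplified
stand-ins):

```go
package store

import "sort"

// Change stands in for a stored blueprint change.
type Change struct {
	Commit    string
	Timestamp string // RFC 3339, so lexicographic order is chronological
	Version   string
}

// populateCommitList reconstructs the commit order for stores written by
// older code: sort each blueprint's changes by timestamp, then version,
// and record the resulting commit hashes.
func populateCommitList(changes map[string]map[string]Change) map[string][]string {
	commits := make(map[string][]string, len(changes))
	for name, bpChanges := range changes {
		sorted := make([]Change, 0, len(bpChanges))
		for _, c := range bpChanges {
			sorted = append(sorted, c)
		}
		sort.Slice(sorted, func(i, j int) bool {
			if sorted[i].Timestamp != sorted[j].Timestamp {
				return sorted[i].Timestamp < sorted[j].Timestamp
			}
			return sorted[i].Version < sorted[j].Version
		})
		for _, c := range sorted {
			commits[name] = append(commits[name], c.Commit)
		}
	}
	return commits
}
```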
A POST to this route will tag the latest commit of a blueprint as a new
revision. The revision numbers start at 1 and increment on each call.
If the latest commit has already been tagged it ignores the request.
If no packages are included in a blueprint, the slice remains `nil`,
which translates to `null` in json. Always initialize the slice by
pointing it to an empty array.
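This is standard encoding/json behaviour, e.g.:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlice []string
	emptySlice := make([]string, 0)

	a, _ := json.Marshal(map[string][]string{"packages": nilSlice})
	b, _ := json.Marshal(map[string][]string{"packages": emptySlice})

	fmt.Println(string(a)) // {"packages":null}
	fmt.Println(string(b)) // {"packages":[]}
}
```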
This makes the store's PushBlueprint* functions return errors, and adds
handling of those errors to the API code, in preparation for new code
that checks the blueprint before saving it.
The name was misleading because the function could do more than just
download the package list. In PushComposeRequest it is also used to fetch
checksums for the repositories, so I decided to rename it to reflect this
usage.
If the Epoch is > 0 then it should be added to the front of the version,
separated by a colon.
Also include a depsolve package with a non-zero Epoch and adjust the
tests accordingly.
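For example (a sketch; the real formatting lives wherever the version
string is assembled):

```go
package rpmmd

import "fmt"

// evra formats a package version the way the API reports it: a non-zero
// epoch is prefixed to the version, separated by a colon, e.g.
// "2:1.2.3-4.fc31.x86_64" vs "1.2.3-4.fc31.x86_64".
func evra(epoch int, version, release, arch string) string {
	if epoch > 0 {
		return fmt.Sprintf("%d:%s-%s.%s", epoch, version, release, arch)
	}
	return fmt.Sprintf("%s-%s.%s", version, release, arch)
}
```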
Without making a deep copy of the blueprint, the changes made to the
package and module versions persist in memory, causing the blueprint to
lose the package and module version globs.
This can be seen by executing a freeze request and then a depsolve. The
blueprint included in the depsolve had the version globs replaced by the
frozen EVRA values.
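The copy only has to stop the slices from sharing backing arrays, roughly
(simplified types, not the real blueprint package):

```go
package blueprint

// Package is a simplified blueprint entry.
type Package struct {
	Name    string
	Version string // may be a glob, e.g. "2.*"
}

type Blueprint struct {
	Name     string
	Packages []Package
	Modules  []Package
}

// DeepCopy returns a copy whose Packages and Modules do not share backing
// arrays with the original, so replacing version globs with frozen EVRAs
// in the copy cannot change the blueprint held in the store.
func (b Blueprint) DeepCopy() Blueprint {
	c := b
	c.Packages = append([]Package(nil), b.Packages...)
	c.Modules = append([]Package(nil), b.Modules...)
	return c
}
```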
This adds the modules to the list of package specs to be depsolved. It
includes a new function to build the version glob package string, as
well as tests for the new function and for depsolving with modules in
the blueprint.
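The spec string builder is roughly this (a sketch; the exact glob handling
is an assumption):

```go
package weldr

// pkgSpec builds the string handed to depsolving for a blueprint package
// or module entry: just the name when no version is given (or it is "*"),
// otherwise "name-versionglob".
func pkgSpec(name, version string) string {
	if version == "" || version == "*" {
		return name
	}
	return name + "-" + version
}
```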
This adds support for the modules field. It moves the version
replacement into a separate function, setPkgEVRA, and adds tests for the
new function as well as for blueprints with packages in both the
packages and modules lists.
The dependencies are not sorted, so depending on the order in which they
were returned, the freeze route would or would not return the correct
results (exhibited by the version remaining the original glob instead of
the EVRA).
This also adjusts the tests so that the depsolve results are slightly
unsorted, by adding a dep-package3 to the start of the list.
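One order-independent way to do the lookup (a sketch, not necessarily how
the fix is implemented) is to index the depsolve results by name:

```go
package weldr

// PackageSpec stands in for a depsolved package.
type PackageSpec struct {
	Name    string
	Version string // frozen EVRA
}

// depsByName lets the freeze code find the frozen version of a blueprint
// package without depending on the order the dependencies were returned in.
func depsByName(deps []PackageSpec) map[string]PackageSpec {
	m := make(map[string]PackageSpec, len(deps))
	for _, d := range deps {
		m[d.Name] = d
	}
	return m
}
```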
This is unused for now, but will allow us to generate pipelines with
the pre-depsolved NEVRAs, so osbuild does not need to depsolve again.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The package selection shown in the UI does not include the base
packages that will be included in the image, and it cannot, because
the base packages depend on the output type, while the packages
shown in the UI are independent of the output type.
It is possible to select packages incompatible with the base
packages. Discover this sooner rather than later, by including
the base packages in the final depsolve before creating the
pipeline.
In the future the result of the depsolve will be used to create
the pipeline, so this is another prerequisite for moving from
the dnf to the rpm stage.
Also depsolve the build packages for the same reason. Note that we
always set clean to false in this case, as the depsolving of the
main packages would have performed any cleaning necessary.
Also extend dnf-json to support excluding packages from depsolving.
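Putting it together, the final depsolve looks roughly like this sketch
(all names are assumptions):

```go
package depsolve

// Stand-in types; the real ones live in the distro and rpmmd packages.
type PackageSpec struct{ Name, EVRA string }

type ImageType interface {
	// BasePackages returns the packages the image type always includes and
	// the packages it excludes from depsolving.
	BasePackages() (include, exclude []string)
	BuildPackages() []string
}

type Solver interface {
	Depsolve(specs, excludeSpecs []string, clean bool) ([]PackageSpec, error)
}

// depsolveForImage depsolves the blueprint packages together with the
// image type's base set, so incompatibilities surface before the pipeline
// is created. The build packages are depsolved with clean=false, since the
// first call already did any cleaning needed.
func depsolveForImage(s Solver, it ImageType, bpPackages []string) (pkgs, buildPkgs []PackageSpec, err error) {
	include, exclude := it.BasePackages()
	specs := append(append([]string{}, include...), bpPackages...)
	pkgs, err = s.Depsolve(specs, exclude, true)
	if err != nil {
		return nil, nil, err
	}
	buildPkgs, err = s.Depsolve(it.BuildPackages(), nil, false)
	if err != nil {
		return nil, nil, err
	}
	return pkgs, buildPkgs, nil
}
```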
Signed-off-by: Tom Gundersen <teg@jklm.no>
We must avoid depending on the host's state in any way. This achieves
isolation in the following ways:
- rather than the default config file, /dev/null is used
- rather than sharing the host's persistent state dir, a temporary one
  is used and thrown away after each call
- the module_platform_id is set explicitly per supported distro, rather
  than taken from /etc/os-release.
Optionally, the cache directory can be configured, as we may want to keep
this separate from the host, if for no other reason than accounting.
However, the cache appears to be well-behaved, so we can keep sharing
it between calls (or even with the host). This speeds things up
considerably, so this is definitely what we want.
Signed-off-by: Tom Gundersen <teg@jklm.no>
In our base distro definitions we exclude packages in addition to
including them. Extend dnf-json to support this, so we can depsolve
the base package set as well as the packages added in blueprints.
Signed-off-by: Tom Gundersen <teg@jklm.no>