Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.
osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.
Among other problems, this made it impossible to cache and share the
resolved metadata, as well as the rpms. Moreover, as the checksum was
at the repository level, rather than at the package level, we would
refuse to build a pipeline as soon as there were any changes at all to
the repository, as we could no longer guarantee that the installed
packages would be the same.
As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved metadata
can be cached between runs, and that we can now pin individual
packages, rather than the entire repository. In other words, as long
as the rpms are still available, we are able to build a pipeline.
The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as that would pin the repository closest to
composer, rather than to the runner. Instead, we now manually select
a baseurl for each repository, which should generally be the
most useful one. Fedora helpfully provides such baseurls, so
this should work ok.
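Roughly, the resolution amounts to something like the following sketch
(the package name, types, and function below are hypothetical, not the
actual composer code):

```go
// Package depsolve is illustrative only.
package depsolve

import "net/url"

// Repo and Package stand in for composer's repository and package-spec
// types; the real code resolves these from the dnf metadata.
type Repo struct {
	ID      string
	BaseURL string
}

type Package struct {
	Name     string
	Location string // path of the rpm relative to the repository root
	Checksum string
}

// ResolveURL joins a repository baseurl with a package location, producing
// the URL that is passed to the worker and, from there, to the osbuild
// source helper. For example, a baseurl of
// "https://download.example.org/fedora/31/x86_64/os/" and a location of
// "Packages/b/bash-5.0.7-1.fc31.x86_64.rpm" resolve to the full rpm URL.
func ResolveURL(r Repo, p Package) (string, error) {
	base, err := url.Parse(r.BaseURL)
	if err != nil {
		return "", err
	}
	rel, err := url.Parse(p.Location)
	if err != nil {
		return "", err
	}
	return base.ResolveReference(rel).String(), nil
}
```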
The most important thing to verify when checking this commit is
that the image info in our test-cases remains unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This includes the packages and build-packages used by each pipeline.
For now, this information is not used anywhere, but when we move
from dnf to rpm-based pipelines, this is what will be used instead
of the repo metadata checksum.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These tests are generated by regenerating each of the fedora-30 tests
with only the distro field changed to fedora-31.
```
for case in f30-*.json; do
cat "$case" | jq '.["compose-request"]' | jq '.distro = "fedora-31"' | sudo ./tools/generate-test-cases .osbuild | jq . | sponge "f31-${case#f30-}"
done
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
For tarballs, this is currently not supported, so no point in generating
the images in the first place. This will still be done during testing to
boot-test them.
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far we only have f30 x86_64 images to be boot-tested. In follow-ups
we expect to extend boot testing to all distros, all architectures,
and all image types.
We also expect to do some sanity testing, without booting, for all the
blueprint features we support.
The AMI images can boot with an empty blueprint, the other image types
need an ssh key embedded in order to be able to connect and verify
that they booted successfully.
Signed-off-by: Tom Gundersen <teg@jklm.no>
From Travis' point of view, no longer distinguish between the tests
apart from their distro and architecture.
We must distinguish based on architecture, as we do not yet support
cross-architecture builds. We also split by distro, as there is no
benefit to running tests for different distros on the same VM: they
will not be able to share any caches, so we might as well
parallelize them.
Tests that apply to the same distro/architecture combo are now
always run on the same VM, so as to utilize any caching we are able
to do.
Now that the local_boot and empty_blueprint tests have been merged,
this will not increase the number of tests currently run on a
given VM, so it should not affect the total running time of the
tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
As in 616b6250c7, add the needed
ssh key to the formerly empty blueprint, and use this test-case
for booting as well as pipeline generation verification.
For the ext4-partition image type, we also needed to add the
openssh-server package, as @core is not included by default.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives us more readable output, both because it gives just a
diff, rather than the whole object as a string, and because it
captures differences between the objects that their string
representation does not.
In particular, if a field is an interface I, and T implements I,
then an object of type T and a pointer to the same object can both
be assigned to a variable of type I. Either way, the JSON
representation is the same, but the objects (correctly) do not
compare equal.
This is a pain to debug.
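A minimal Go illustration of that case (the types are only for
demonstration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// I is an interface and T implements it, for both values and pointers.
type I interface{ Name() string }

type T struct{ N string }

func (t T) Name() string { return t.N }

func main() {
	var a I = T{N: "x"}  // value of type T
	var b I = &T{N: "x"} // pointer to an equivalent value

	ja, _ := json.Marshal(a)
	jb, _ := json.Marshal(b)
	fmt.Println(string(ja) == string(jb)) // true: the JSON is identical
	fmt.Println(reflect.DeepEqual(a, b))  // false: dynamic types T and *T differ
}
```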
Signed-off-by: Tom Gundersen <teg@jklm.no>
This commit makes the osbuild-image-tests binary perform the same set of
tests as the old test/run script.
Changes from test/run:
- qemu/nspawn are now killed gracefully: first, SIGTERM is sent, and if
the process doesn't exit before the timeout, SIGKILL is sent (see the
sketch below). I changed this because nspawn leaves some artifacts
behind when killed by SIGKILL.
- the unsharing of the network namespace now works differently because of
systemd issue #15079
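A sketch of the graceful-kill approach, assuming the process is managed
via os/exec (the package, function name, and timeout handling are
illustrative, not the actual test code):

```go
// Package boot is illustrative; this is not the actual test helper.
package boot

import (
	"os/exec"
	"syscall"
	"time"
)

// terminateGracefully asks the process to exit with SIGTERM and only falls
// back to SIGKILL if it has not exited before the timeout.
func terminateGracefully(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited on its own after SIGTERM
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // SIGKILL as a last resort
		return <-done
	}
}
```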
Prior to this commit, it was possible to pass the CI checks even without
the added files in the vendor directory, because git diff doesn't report
untracked files. This commit fixes that.
Add the needed ssh key to the formerly empty blueprint, and use
this test-case for booting as well as pipeline generation
verification.
This merges the qemu-backed tests; the nspawn ones will be done
in a follow-up, as they require more work.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This sorts the keys in the test case, but there is no behavioral
change.
This is in preparation for the cases being generated.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These cases are pointing to internal repos that have since changed. Drop them
until we have a better long-term story.
Our CI currently does not verify these cases, so this is not a behavioural
change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The test case json files will increase in complexity with the move from
dnf- to rpm-based pipelines. The quantity of them will also continue to
grow as new distros, architectures, boot methods, image types, and
blueprint customizations become available. The generate-test-cases
script simplifies the process of creating new test cases. It accepts a
compose request and boot method as input and then uses osbuild-pipeline,
osbuild, and image-info to generate the test case.
[tomegun: some clean-ups and allow store to be reused]
Make the blueprint parameter a bool; if it is set, read a blueprint
from stdin, otherwise use an empty blueprint.
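A minimal sketch of that flag handling (only the -blueprint flag name is
taken from the description above; the rest is illustrative):

```go
package main

import (
	"flag"
	"io/ioutil"
	"log"
	"os"
)

func main() {
	// When -blueprint is set, a blueprint is read from stdin; otherwise an
	// empty blueprint is used.
	useBlueprint := flag.Bool("blueprint", false, "read a blueprint from stdin")
	flag.Parse()

	blueprint := []byte("{}") // empty blueprint
	if *useBlueprint {
		var err error
		blueprint, err = ioutil.ReadAll(os.Stdin)
		if err != nil {
			log.Fatal(err)
		}
	}

	_ = blueprint // handed on to test-case generation in the real tool
}
```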
Signed-off-by: Tom Gundersen <teg@jklm.no>
A ComposeRequest is data used to submit a compose to the store, so it
should live in that package.
Remove the json marshalling test, because ComposeRequest is never
marshalled to JSON.
This will allow using types from `distro` in the ComposeRequest struct.
The response is different for JSON and TOML requests. If it is JSON, it
will always return a 200, but any blueprints with errors will be listed
in the errors list.
If the TOML has an error, it will return a 400 with the error in a
standard API error response with status set to false.
The JSON and TOML parsers differ in how they handle an empty body, so
check for a ContentLength of zero first and return a "Missing
blueprint" error to the client.
Includes updated tests for the JSON path, and new tests for empty TOML
blueprints.
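A rough sketch of the branching described above (the handler, helper
names, and error shape are hypothetical stand-ins, not the actual
composer code):

```go
// Package weldr is illustrative; the handler and helpers below only sketch
// the behaviour described in this commit message.
package weldr

import (
	"encoding/json"
	"errors"
	"io"
	"net/http"
)

// Stand-ins for the real blueprint parsers.
func pushBlueprintsJSON(body io.Reader) []string { return nil }
func pushBlueprintTOML(body io.Reader) error     { return errors.New("invalid TOML") }

func writeAPIError(w http.ResponseWriter, code int, msg string) {
	w.WriteHeader(code)
	json.NewEncoder(w).Encode(map[string]interface{}{
		"status": false,
		"errors": []string{msg},
	})
}

func blueprintsNewHandler(w http.ResponseWriter, r *http.Request) {
	// The JSON and TOML parsers handle an empty body differently, so catch
	// it up front.
	if r.ContentLength == 0 {
		writeAPIError(w, http.StatusBadRequest, "Missing blueprint")
		return
	}

	if r.Header.Get("Content-Type") == "text/x-toml" {
		// A TOML error becomes a single 400 API error with status: false.
		if err := pushBlueprintTOML(r.Body); err != nil {
			writeAPIError(w, http.StatusBadRequest, err.Error())
			return
		}
		json.NewEncoder(w).Encode(map[string]bool{"status": true})
		return
	}

	// JSON always answers 200; per-blueprint problems go into the errors list.
	errs := pushBlueprintsJSON(r.Body)
	json.NewEncoder(w).Encode(map[string]interface{}{
		"status": len(errs) == 0,
		"errors": errs,
	})
}
```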
If the blueprint doesn't exist, or the commit for the selected blueprint
doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.
If an unknown blueprint or workspace is deleted, it will now return an
error.
Also fixes the blueprints DELETE handlers to return the correct error to
the client. Includes a new test.
These are done through GitHub Actions, which is much quicker. Leave
the image tests until they are moved over to proper integration
tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
There should be no need to run unit tests on specific architectures,
so move them over to GitHub Actions and rename "Lint" to "Checks", as
it is a bit more generic now.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This allows us to take advantage of the `testing` package. It also gives
the resulting test binary common command line arguments (same as `go
test`).
Tests need to be compiled with `go test -c`, which injects a `Main()`
that calls the Test* functions.
This is not supported by the golang rpm macros. Thus, build this binary
by calling `go test -c` directly, but taking care to pass the same
linker flags as the `%gobuild` macro.
Mark the test binary with the `integration` build constraint, so that
`go test ./...` doesn't pick them up. That's only for unit tests.
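As a minimal sketch, a test file under this scheme might look as follows
(the filename and contents are illustrative, using the `// +build`
constraint syntax of the time):

```go
// main_test.go (filename and test body are illustrative)

// +build integration

package main

import "testing"

// With the build constraint above, `go test ./...` skips this file; the
// standalone binary is produced with `go test -c -tags integration`, and
// its injected Main() runs the Test* functions.
func TestImageBoots(t *testing.T) {
	t.Log("integration test body goes here")
}
```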
The idea is to move all other test binaries to this scheme as well.
Spec file changes by Lars Karlitski <lars@karlitski.net>
A developer may want to use the output of rpmmd (build package specs,
package specs, and checksums) instead of the pipeline manifest. In this
case they may pass the -rpmmd flag to osbuild-pipeline. With this flag,
instead of returning the pipeline, it will return the output of rpmmd.
Previously the order that changes were made to blueprints was not being
saved. I worked around this by sorting by timestamp, but it only has 1s
resolution so it is very likely to end up with changes having the same
timestamp, especially when running tests.
This adds a new variable to the Store: a list of the commit hashes
for each blueprint, in the order they were made.
Since this is a change to the Store schema, the first time the new code
is run with the old store state it needs to populate the commit list, as
best it can, with the existing data. To do that it sorts the changes for
each blueprint by timestamp and version and saves this ordering into the
new BlueprintsCommits list.
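A simplified sketch of the new field and the migration (only the relevant
parts of the Store are shown, the change type is reduced, and the sorting
helper is illustrative):

```go
// Package store is sketched; this is not the actual composer code.
package store

import "sort"

type change struct {
	Commit    string
	Timestamp string // RFC 3339, only 1s resolution
	Version   string
}

type Store struct {
	// BlueprintsCommits records, per blueprint, the commit hashes in the
	// order the changes were made.
	BlueprintsCommits map[string][]string

	changes map[string][]change // existing per-blueprint change records
}

// populateCommits backfills BlueprintsCommits from an old on-disk store by
// sorting each blueprint's changes by timestamp, then version: the best
// ordering that can be reconstructed from the existing data.
func (s *Store) populateCommits() {
	if s.BlueprintsCommits == nil {
		s.BlueprintsCommits = make(map[string][]string)
	}
	for name, cs := range s.changes {
		if len(s.BlueprintsCommits[name]) > 0 {
			continue // already recorded by the new code
		}
		sort.Slice(cs, func(i, j int) bool {
			if cs[i].Timestamp != cs[j].Timestamp {
				return cs[i].Timestamp < cs[j].Timestamp
			}
			// naive string comparison of versions; fine for a sketch
			return cs[i].Version < cs[j].Version
		})
		for _, c := range cs {
			s.BlueprintsCommits[name] = append(s.BlueprintsCommits[name], c.Commit)
		}
	}
}
```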
A POST to this route will tag the latest commit of a blueprint as a new
revision. The revision numbers start at 1 and increment on each call.
If the latest commit has already been tagged it ignores the request.
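A simplified sketch of the tagging behaviour (the Store layout and the
method name are assumptions, not the actual composer code):

```go
// Package store is sketched; only the parts relevant to tagging are shown.
package store

type blueprintChange struct {
	Commit   string
	Revision *int // nil until the commit has been tagged
}

type Store struct {
	// Per blueprint, the commit hashes in the order they were made.
	BlueprintsCommits map[string][]string
	// Per blueprint, the change record for each commit hash.
	Changes map[string]map[string]blueprintChange
}

// TagBlueprint tags the latest commit of a blueprint with the next revision
// number, starting at 1. If the latest commit is already tagged, the
// request is ignored.
func (s *Store) TagBlueprint(name string) {
	commits := s.BlueprintsCommits[name]
	if len(commits) == 0 {
		return // unknown blueprint or no commits to tag
	}
	latest := commits[len(commits)-1]

	c := s.Changes[name][latest]
	if c.Revision != nil {
		return // latest commit already tagged; ignore the request
	}

	// Find the highest revision used so far and increment it.
	next := 1
	for _, other := range s.Changes[name] {
		if other.Revision != nil && *other.Revision >= next {
			next = *other.Revision + 1
		}
	}
	c.Revision = &next
	s.Changes[name][latest] = c
}
```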
The purpose of this documentation is to describe how the user is
expected to work with the RCM API. It can also serve as an example for
creating automation scripts, should the RCM teams want to create some.