In the current PRs we have an issue with the linter failing on dead code
in integration tests. The linter doesn't fail on the main_test.go file
itself, because it somehow ignores the integration tag. However, it
fails on other files that main_test.go depends on and that have the
integration tag.
This commit tells golangci-lint to always use the integration tag while doing
the inspection.
There's now nothing preventing us from using the newer and better Go
implementation of image tests.
Also, the Travis config was simplified to be more readable.
By default, the image test executable runs only test cases for the same
distro as the host. Travis runs Ubuntu, so we need to adjust this
behaviour and run the cases for a distro specified on the command line.
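A minimal sketch of how such an override might look (the `-distro` flag
name and the hardcoded host detection are illustrative assumptions, not
the actual osbuild-image-tests code):

```go
package main

import (
	"flag"
	"fmt"
)

// hostDistro would normally be detected from /etc/os-release;
// hardcoded here purely for illustration.
func hostDistro() string {
	return "fedora-30"
}

// selectDistro returns the distro whose test cases should run: the
// value of the (hypothetical) -distro flag if given, otherwise the
// host's own distro.
func selectDistro(override string) string {
	if override != "" {
		return override
	}
	return hostDistro()
}

func main() {
	distro := flag.String("distro", "", "run test cases for this distro instead of the host's")
	flag.Parse()
	fmt.Printf("running test cases for %s\n", selectDistro(*distro))
}
```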
We need to use different values for path constants when running the
tests on Travis CI. This is the first step towards achieving that.
Note that this commit may be reverted when Travis CI is dropped.
Use the $STATE_DIRECTORY environment variable, which systemd sets
because the service unit declares StateDirectory=osbuild-composer.
Also change the systemd unit to set STATE_DIRECTORY explicitly, because
RHEL ships an older systemd version that does not set it for us.
This makes two changes simultaneously, to avoid too much churn:
- move accessors from being on the blueprint struct to the
customizations struct, and
- pass the customizations struct rather than the whole blueprint
as argument to distro.Manifest().
@larskarlitski pointed out in a previous review that it feels
redundant to pass the whole blueprint as well as the list of
packages to the Manifest function. Indeed it is, so this
simplifies things a bit.
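The shape of the change can be sketched as follows (the struct fields
and function signature are simplified assumptions, not the real
osbuild-composer definitions):

```go
package main

import "fmt"

// Simplified, illustrative shapes.
type Customizations struct {
	Hostname *string
}

type Blueprint struct {
	Name           string
	Customizations *Customizations
}

// The accessor now lives on Customizations rather than Blueprint, and
// tolerates a nil receiver so callers can pass blueprint.Customizations
// straight through without checking it first.
func (c *Customizations) GetHostname() string {
	if c == nil || c.Hostname == nil {
		return ""
	}
	return *c.Hostname
}

// A Manifest-style function now takes only the customizations it
// needs, not the whole blueprint.
func manifest(packages []string, c *Customizations) string {
	return fmt.Sprintf("packages=%v hostname=%q", packages, c.GetHostname())
}

func main() {
	host := "build-host"
	bp := Blueprint{Name: "base", Customizations: &Customizations{Hostname: &host}}
	fmt.Println(manifest([]string{"openssh-server"}, bp.Customizations))
}
```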
Signed-off-by: Tom Gundersen <teg@jklm.no>
This was never actually used anywhere, as passing it to dnf-json
was a noop.
We may want to reconsider the concept of a source/repo name and
how it differs from an ID, but for now drop the name.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This will eventually replace the remote_location property. That
property pins a specific location (a specific mirror), whereas the two
new properties can together be used to re-resolve to a more suitable
mirror at the time and place the package will actually be downloaded.
Rather than pinning mirrors in the osbuild manifests, we want to be
able to include the metalink and relative locations so each worker
can use mirrors closer to them.
This is particularly important when pipelines are rebuilt in the
future, by which time the best mirrors may have changed.
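A sketch of the idea (the field names and JSON tags are assumptions for
illustration, not the real osbuild manifest schema): a package carries
the repository metalink plus its path relative to the repo root, and
the worker joins them with whichever mirror it resolves at download
time.

```go
package main

import "fmt"

// packageSource pins a package by metalink + relative path instead of
// one absolute remote_location, so the mirror choice can be deferred.
type packageSource struct {
	Metalink string `json:"metalink"` // repo-level metalink URL
	Path     string `json:"path"`     // location relative to the repo root
}

// resolveURL joins a mirror base URL (chosen from the metalink at
// download time) with the package's relative path.
func resolveURL(mirrorBase string, p packageSource) string {
	return mirrorBase + "/" + p.Path
}

func main() {
	p := packageSource{
		Metalink: "https://mirrors.fedoraproject.org/metalink?repo=fedora-31&arch=x86_64",
		Path:     "Packages/b/bash-5.0.7-1.fc31.x86_64.rpm",
	}
	fmt.Println(resolveURL("https://example-mirror.org/fedora/31/x86_64", p))
}
```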
Signed-off-by: Tom Gundersen <teg@jklm.no>
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between generating
the pipeline and calling osbuild. We achieved this by always updating
to the most recent metadata on every call to rpmmd.Depsolve that
would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
s390x uses LXC; we need to run in a real VM for the osbuild tests
to work.
We did not yet have s390x support anyway, so these were noops.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It is no longer necessary to install yum, nor to use a build
environment, even though we are running this in Ubuntu VMs. The rpm
stage needed by the build pipeline works just fine on stock Ubuntu.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.
osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.
One problem with this was that it made it impossible to cache
and share the resolved metadata as well as the rpms. Moreover,
as the checksum was at the repository-level, rather than at the
package level, it meant that we would refuse to build a pipeline
as soon as there were any changes at all to the repository, as we
could no longer guarantee the installed packages would be the same.
As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved metadata
can be cached between runs, and it means that we can now pin
individual packages, rather than the entire repository. This means
that as long as the rpms are still available, we are able to build
a pipeline.
The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as it would pin the repository closest to
composer, rather than the runner. Instead, we now manually select
a baseurl for each repository, which should generally be the
most useful one. Fedora helpfully provides such baseurls, so
this should work ok.
The most important thing to verify when reviewing this commit is
that the image info in our test cases remains unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This includes the packages and build-packages used by each pipeline.
For now, this information is not used anywhere, but when we move
from dnf to rpm-based pipelines, this is what will be used instead
of the repo metadata checksum.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These tests are generated by regenerating each of the fedora-30 tests
with only the distro field changed to fedora-31.
```
for case in f30-*.json; do
cat $case | jq '.["compose-request"]' | jq '.distro = "fedora-31"' | sudo ./tools/generate-test-cases .osbuild | jq . | sponge f31-$case
done
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
For tarballs, this is currently not supported, so no point in generating
the images in the first place. This will still be done during testing to
boot-test them.
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far we only have f30, x86_64 images to be boot tested. In follow-ups
we expect to test all distros, all architectures and all image types
as boot tests.
And we also expect to do some sanity testing for all the blueprint
features we support without booting.
The AMI images can boot with an empty blueprint, the other image types
need an ssh key embedded in order to be able to connect and verify
that they booted successfully.
Signed-off-by: Tom Gundersen <teg@jklm.no>
From Travis' point of view, no longer distinguish between the tests
apart from their distro and architecture.
We must distinguish based on architecture, as we do not yet support
cross-architecture builds. We also split by distro, as there is no
benefit to running tests for different distros on the same VM: they
will not be able to share any caches, so we might as well
parallelize them.
Tests that apply to the same distro/architecture combo are now
always run on the same VM, so as to utilize any caching we are able
to do.
Now that local_boot and empty_blueprint tests have been merged,
this will not increase the number of tests currently run on a
given VM, so should not affect the total running time of the
tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
As in 616b6250c7, add the needed
ssh key to the formerly empty blueprint, and use this test-case
for booting as well as pipeline generation verification.
For the ext4-partition image type, we also needed to add the
openssh-server package, as @core is not included by default.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives us more readable output, both because it gives just a
diff rather than the whole object as a string, and because it
captures differences between the objects that their string
representation does not.
In particular, if a field is an interface I, and T implements I,
then an object of type T and a pointer to the same object can both
be assigned to a variable of type I. Either way, the JSON
representation is the same, but the objects (correctly) do not
compare equal.
This is a pain to debug.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This commit makes the osbuild-image-tests binary run the same set of
tests as the old test/run script.
Changes from test/run:
- qemu/nspawn are now killed gracefully. First, SIGTERM is sent.
If the process doesn't exit before the timeout, SIGKILL is sent.
I changed this because nspawn leaves some artifacts behind when killed
by SIGKILL.
- the unsharing of the network namespace now works differently because
of systemd issue #15079
Prior to this commit, it was possible to pass the CI checks even
without adding files to the vendor directory, because git diff doesn't
check for unstaged files. This commit fixes that.
Add the needed ssh key to the formerly empty blueprint, and use
this test-case for booting as well as pipeline generation
verification.
This merges the qemu-backed tests, the nspawn ones will be done
in a follow-up as they require more work.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This sorts the keys in the test case, but there is no behavioral
change.
This is in preparation for the cases being generated.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These cases are pointing to internal repos that have since changed. Drop them
until we have a better long-term story.
Our CI currently does not verify these cases, so this is not a behavioural
change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The test case JSON files will increase in complexity with the move from
dnf- to rpm-based pipelines. Their quantity will also continue to grow
as new distros, architectures, boot methods, image types, and blueprint
customizations become available. The generate-test-cases script
simplifies the process of creating new test cases. It accepts a compose
request and boot method as input and then uses osbuild-pipeline,
osbuild, and image-info to generate the test case.
[tomegun: some clean-ups and allow store to be reused]
Make the blueprint parameter a bool; if set, a blueprint is read from
stdin, otherwise an empty blueprint is used.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A ComposeRequest is data used to submit a compose to the store, so it
should live in that package.
Remove the json marshalling test, because ComposeRequest is never
marshalled to JSON.
This will allow using types from `distro` in the ComposeRequest struct.
The response differs for JSON and TOML requests. If the request is
JSON, it will always return a 200, but any blueprints with errors will
be in the errors list.
If the TOML has an error, it will return a 400 with the error in a
standard API error response with status set to false.
The JSON and TOML parsers differ in how they handle an empty body so
check for a ContentLength of zero first and return a "Missing
blueprint" error to the client.
Includes updated tests for the JSON path, and new tests for empty TOML
blueprints.
If the blueprint doesn't exist, or the commit for the selected
blueprint doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.