It is no longer necessary to install yum, nor to use a build environment,
even though we are running this in Ubuntu VMs. The rpm stage needed
by the build pipeline works just fine on stock Ubuntu.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.
osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.
Among the problems with this approach: it made it impossible to cache
and share the resolved metadata, as well as the rpms. Moreover,
as the checksum was at the repository level, rather than at the
package level, we would refuse to build a pipeline as soon as there
was any change at all to the repository, since we could no longer
guarantee that the installed packages would be the same.
As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved metadata
can be cached between runs, and it means that we can now pin
individual packages, rather than the entire repository. As long as
the rpms are still available, we are able to build a pipeline.
The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as it would pin the repository closest to
composer, rather than the runner. Instead, we now manually select
a baseurl for each repository, which should generally be the
most useful one. Fedora helpfully provides such baseurls, so
this should work OK.
The most important thing to verify when checking this commit is
that the image info in our test cases remains unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This includes the packages and build-packages used by each pipeline.
For now, this information is not used anywhere, but when we move
from dnf to rpm-based pipelines, this is what will be used instead
of the repo metadata checksum.
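As a rough illustration, assuming field names that mirror the wording
above (the actual JSON keys in the test cases may differ), the extra
per-pipeline data could look like this:

```go
package example

// pipelineInfo is an illustrative sketch of the extra data now
// recorded per test case; the real field names may differ.
type pipelineInfo struct {
	Packages      []string `json:"packages"`       // packages resolved for the main pipeline
	BuildPackages []string `json:"build-packages"` // packages resolved for the build pipeline
}
```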
Signed-off-by: Tom Gundersen <teg@jklm.no>
These tests are generated by regenerating each of the fedora-30 tests
with only the distro field changed to fedora-31.
```
for case in f30-*.json; do
    cat "$case" | jq '.["compose-request"]' | jq '.distro = "fedora-31"' | sudo ./tools/generate-test-cases .osbuild | jq . | sponge "f31-${case#f30-}"
done
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far, we only boot-test f30 x86_64 images. In follow-ups we expect
to boot-test all distros, all architectures and all image types.
We also expect to do some sanity testing, without booting, for all
the blueprint features we support.
The AMI images can boot with an empty blueprint; the other image types
need an ssh key embedded in order for us to be able to connect and
verify that they booted successfully.
Signed-off-by: Tom Gundersen <teg@jklm.no>
As in 616b6250c7, add the needed
ssh key to the formerly empty blueprint, and use this test-case
for booting as well as pipeline generation verification.
For the ext4-partition image type, we also needed to add the
openssh-server package, as @core is not included by default.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add the needed ssh key to the formerly empty blueprint, and use
this test-case for booting as well as pipeline generation
verification.
This merges the qemu-backed tests; the nspawn ones will be done
in a follow-up, as they require more work.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This sorts the keys in the test case, but there is no behavioral
change.
This is in preparation for the cases being generated.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These cases are pointing to internal repos that have since changed. Drop them
until we have a better long-term story.
Our CI currently does not verify these cases, so this is not a behavioural
change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A manifest is a struct made up of a pipeline and a sources object. So
far, all our sources objects are empty, but we have moved from
using pipelines to manifests everywhere, in preparation for
generating pipelines that require sources.
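A minimal sketch of that shape, keeping both members opaque (the
concrete types in osbuild-composer differ):

```go
package example

import "encoding/json"

// manifest is an illustrative sketch: a pipeline plus a sources object.
type manifest struct {
	Pipeline json.RawMessage `json:"pipeline"`
	Sources  json.RawMessage `json:"sources"` // still empty ({}) in all current test cases
}
```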
Make the same change in the test cases.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is to avoid any confusion with the Compose struct in the store,
which contains the pinned rpmmd data and the pipeline, among other
things.
The struct in the test cases represents the user input to the compose
route, so rename it to 'compose-request' to make that clearer.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The intention is that the compose-request struct fully specifies the
test case, while pipeline and image-info specify the expected outputs.
Lastly, the boot struct specifies how to boot-test the image.
The checksum does not fit into this scheme, as it is computed from
the compose by querying rpmmd, and it is then passed as an input to
distro.Pipeline in order to compute the pipeline.
Introduce a new struct, rpmmd, which will eventually contain all
the data returned from rpmmd.Depsolve and later passed to
distro.Pipeline. For now it only contains the checksum.
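Putting the pieces together, and using Go types that are purely
illustrative (the JSON keys follow the names used in this series), a
test case now has roughly this shape:

```go
package example

import "encoding/json"

// testCase is an illustrative sketch of the test-case layout.
type testCase struct {
	ComposeRequest json.RawMessage `json:"compose-request"` // the full user input to the compose route
	RPMMD          struct {
		Checksum string `json:"checksum"` // for now, only the repository metadata checksum
	} `json:"rpmmd"`
	Pipeline  json.RawMessage `json:"pipeline"`   // expected output of distro.Pipeline
	ImageInfo json.RawMessage `json:"image-info"` // expected content of the built image
	Boot      json.RawMessage `json:"boot"`       // how to boot-test the image, if at all
}
```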
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When creating a pipeline, the assembler includes an image size. This
image size can be set when creating the pipeline, but if it is 0, a
default image size is used. The default is 2 GB, except for ami
images, which get 6 GB.
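A minimal sketch of that rule, with made-up constant and function
names rather than the assembler's actual code:

```go
package example

const (
	defaultImageSize    = uint64(2) * 1024 * 1024 * 1024 // 2 GB
	amiDefaultImageSize = uint64(6) * 1024 * 1024 * 1024 // 6 GB for ami images
)

// imageSize is a hypothetical helper returning the size to use.
func imageSize(requested uint64, outputFormat string) uint64 {
	if requested != 0 {
		return requested // an explicit size always wins
	}
	if outputFormat == "ami" {
		return amiDefaultImageSize
	}
	return defaultImageSize
}
```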
During development of a new distro, we need to test composer against
nightly or beta repositories, but we cannot ship composer itself
with the nightly repository information hardcoded in. At the same
time, we want to distinguish between the system repositories of the
host and the repositories we use to generate images (the host may not
use the same distro/version/architecture as the target, and it may
include custom repositories that the target should not).
We therefore ship per-distro repository information that can be
overridden (typically in testing) by dropping files in /etc.
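Something like the following lookup order, where the directory names
are assumptions for illustration rather than the exact paths composer
ships:

```go
package example

import "path/filepath"

// repoConfigPaths is a hypothetical helper: the first existing file
// wins, so a file dropped in /etc overrides the shipped defaults.
func repoConfigPaths(distro string) []string {
	return []string{
		filepath.Join("/etc/osbuild-composer/repositories", distro+".json"),       // override, e.g. nightly repos in testing
		filepath.Join("/usr/share/osbuild-composer/repositories", distro+".json"), // shipped default
	}
}
```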
For now, use the latest nightlies for RHEL-8.2; we may want to
replace these with the official mirrors for GA eventually.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When group names are passed on to dnf, they must be prefixed with an
'@', or they are treated as regular packages, potentially causing the
build to fail.
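For illustration, with a hypothetical helper name, the prefixing
amounts to something like this:

```go
package example

// dnfSpecs is a hypothetical helper combining packages and groups
// into the specs handed to dnf.
func dnfSpecs(packages, groups []string) []string {
	specs := append([]string{}, packages...)
	for _, group := range groups {
		specs = append(specs, "@"+group) // without the prefix, dnf treats the group as a package name
	}
	return specs
}
```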
Add a testcase to verify this behavior.
This resolves rhbz#1784035.
Signed-off-by: Tom Gundersen <teg@jklm.no>
On architectures that require EFI, we must create the ESP partition
and use a GPT partition table. We must also install either the UEFI
or the legacy version of GRUB2 in the image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move to the new options format, allowing more flexible partition
tables. The pipeline changes, but the result should be the same.
This requires a yet-to-be-released version of osbuild.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Allow bootloader-specific packages to be defined per architecture,
and allow repositories to depend on the architecture.
This does not alter the pipelines we produce, apart from the ami
image now containing the grub2-pc package, rather than the grub2
package. This should make no difference.
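As a hedged sketch of what "per architecture" means here, where the
package sets are examples rather than the exact ones composer uses:

```go
package example

// bootloaderPackages is illustrative: legacy GRUB2 on x86_64, UEFI
// GRUB2 on architectures that require EFI.
var bootloaderPackages = map[string][]string{
	"x86_64":  {"grub2-pc"},
	"aarch64": {"efibootmgr", "grub2-efi-aa64", "shim-aa64"},
}
```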
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make sure we catch expected errors and fail gracefully. Also, make
sure the output id is printed on a successful osbuild run so it can
be introspected from outside the test suite.
Handle the case where osbuild succeeds but does not return an
output_id.
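A rough sketch of that handling, where the result type and the run
callback are placeholders for the test suite's actual helpers:

```go
package example

import (
	"errors"
	"fmt"
)

// osbuildResult is a placeholder for whatever the test suite parses
// out of an osbuild run.
type osbuildResult struct {
	OutputID string `json:"output_id"`
}

// buildImage runs osbuild via the given callback and checks its result.
func buildImage(run func() (osbuildResult, error)) (string, error) {
	result, err := run()
	if err != nil {
		return "", fmt.Errorf("osbuild failed: %w", err) // expected errors fail gracefully
	}
	if result.OutputID == "" {
		return "", errors.New("osbuild succeeded but returned no output_id")
	}
	fmt.Println("output id:", result.OutputID) // printed so it can be inspected outside the test suite
	return result.OutputID, nil
}
```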
Signed-off-by: Tom Gundersen <teg@jklm.no>
These are used to verify that our pipeline generation is stable, and
that the pipelines can generate images that boot.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When we get an ssh connection before the image is fully booted,
systemctl is-system-running returns "starting"; treat this as a
failed connection, and keep retrying.
Some distros may not support the wait switch correctly.
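A hedged sketch of that retry loop, where the checkState callback
stands in for running systemctl over the ssh connection and the
attempt count and delay are made up:

```go
package example

import (
	"errors"
	"time"
)

// waitForBoot keeps retrying until the system reports a state other
// than "starting", or gives up after a fixed number of attempts.
func waitForBoot(checkState func() (string, error)) error {
	for attempt := 0; attempt < 60; attempt++ {
		state, err := checkState()
		if err == nil && state != "starting" {
			return nil // fully booted, stop retrying
		}
		// treat "starting" like a failed connection and try again
		time.Sleep(10 * time.Second)
	}
	return errors.New("image did not finish booting in time")
}
```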
Signed-off-by: Tom Gundersen <teg@jklm.no>
Instead of having a static repository checksum, set it dynamically from
the metadata that osbuild-composer last saw. This is implemented in
dnf-json, which returns the checksums for each repository on every call.
This enables the use of repositories that change over time, such as
fedora-updates. Note that the osbuild pipeline will break when such a
repository changes. This is intentional: pipelines have to be
reproducible.
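Assuming illustrative key names, the relevant part of the dnf-json
reply looks roughly like this:

```go
package example

import "encoding/json"

// depsolveReply is a sketch of what dnf-json returns on every call;
// only the per-repository checksums matter for pinning.
type depsolveReply struct {
	Checksums    map[string]string `json:"checksums"`    // repo id -> current metadata checksum
	Dependencies json.RawMessage   `json:"dependencies"` // resolved package set (opaque here)
}
```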
Currently we still only build for x86_64, but now the test suite is
prepared for hooking up other architectures.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This way we can test the distros on their respective CI, as not
all distros can be built in all environments. In particular, RHEL
needs to be built on a subscribed host.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We can now select specific cases, but whether or not to check image-info
or boot the image is determined purely by the contents of the json test
case.
We still run the tests as two Travis workers just to avoid the timeout;
this should clearly be reworked.
Signed-off-by: Tom Gundersen <teg@jklm.no>