We do not properly test, and do not have properly defined use-cases for,
the ext4-filesystem, partitioned-disk, or tar image types. Drop them to
focus on delivering the things we can properly test.
Signed-off-by: Tom Gundersen <teg@jklm.no>
As it turns out, the default expectation is not to distinguish between
these. We will now produce whatever is the most recent minor release by
default, and image tests will still be pinned at a given snapshot to be
reproducible.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The image tests are now able to upload images to Azure and boot them there.
However, the documentation on how to set up the required resources in Azure
was missing. This commit adds it.
osbuild removed GRUB2_ROOT_FS_UUID from grubenv in 22d131a5. This broke
the Jenkins CI because it runs against osbuild master.
This commit fixes all the test cases and bumps the osbuild submodule
to version 13, which changed the GRUB2_ROOT_FS_UUID behaviour.
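For reference, the variables stored in grubenv on a built image can be
inspected with grub2-editenv (a rough check only; the path below is the
usual Fedora location and may differ depending on how the image is mounted):
```
# List the variables currently stored in grubenv; after this change,
# GRUB2_ROOT_FS_UUID should no longer appear here.
grub2-editenv /boot/grub2/grubenv list
```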
Previously, vhd images were tested using QEMU. This commit changes that to
boot them in the actual Azure infrastructure.
Azure VMs have quite a lot of dependencies: a network interface, a virtual
network, a network security group, a public IP address and a disk. The Azure
CLI and Azure Portal handle the creation of all these resources internally.
However, when using the API, the caller is responsible for creating all these
resources before creating the actual VM.
To handle the creation of all the resources in the right order, a deployment
is used. A deployment is a set of resources defined in a JSON document.
It can optionally take parameters to customize each deployment. After the
deployment is finished, the VM is up and ready to be tested using SSH.
Sadly, the deployments are a bit hard to clean up. One would expect that
deleting a deployment removes all the deployed resources. However, it doesn't
work this way, so all the resources have to be cleaned up "manually".
For this reason, our deployment sets a unique tag on all the resources it
creates. After the test is finished, the API is queried for all the resources
with that tag and they are deleted in the right order.
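Roughly the same clean-up can be sketched with the Azure CLI (the test talks
to the API directly; the tag name and value below are made up, and this naive
loop does not order the deletions by dependency the way the test does):
```
# Find everything the deployment created, using the unique tag.
az resource list --tag osbuild-test-tag=example-run --query "[].id" -o tsv

# Delete each tagged resource by id; without dependency ordering,
# some deletes may fail on the first pass and need a retry.
for id in $(az resource list --tag osbuild-test-tag=example-run --query "[].id" -o tsv); do
    az resource delete --ids "$id"
done
```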
Requests to download.fedoraproject.org are redirected to a regional
mirror that is fairly close to the system that is making the request.
This could speed up downloads slightly since it avoids using the
Fedora proxies.
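The redirect is easy to see from any machine (the repository path below is
just an example):
```
# download.fedoraproject.org answers with a 302 pointing at a nearby mirror.
curl -sI https://download.fedoraproject.org/pub/fedora/linux/releases/30/Everything/x86_64/os/ \
    | grep -i '^location:'
```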
Signed-off-by: Major Hayden <major@redhat.com>
Prior to this commit, the ami image type produced raw.xz images. This was bad
for two reasons:
- The upload was broken because AWS doesn't support the raw.xz format
- XZ compression is terribly slow
This commit changes the format to vhdx, which is supported by AWS and also
quite quick. See https://github.com/osbuild/osbuild-composer/issues/257
for why vhdx was chosen.
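The equivalent manual conversion looks roughly like this (file names are
examples; the build produces the vhdx directly):
```
# Convert a raw disk image to vhdx; no xz pass is needed afterwards,
# which is where most of the time used to go.
qemu-img convert -f raw -O vhdx fedora-30-ami.raw fedora-30-ami.vhdx
```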
Fixes #257
The test generation script we use outputs the test cases prefixed with
"fedora_$RELEASE", so I renamed all the current test cases to follow this
convention and also changed the Travis job names so that both the x86_64
and aarch64 test cases can run.
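The rename itself is mechanical, something along these lines (hypothetical:
the exact old and new file names may differ):
```
# Prefix the existing f30 test cases with the new fedora_30 naming.
for case in f30-*.json; do
    git mv "$case" "fedora_30-${case#f30-}"
done
```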
There is no point in having the grub2 stage in pipelines for image types
that are not bootable. The current version is probably a result of
previous refactoring where the member variable was named `IncludeFSTab`.
Moving the grub2 stage into the conditional branch should also fix test
generation on aarch64.
Finally, it is necessary to regenerate the test cases for non-bootable
image types.
Prior to this commit, the ami images were tested locally using qemu. This
does not reflect at all how they're used in practice. This commit introduces
support for running them in actual AWS. Yay!
The structure of the code reflects that we want to switch to osbuild-composer
to build the images soon.
Dracut is unfortunately very host-dependent by default. The package
dracut-config-generic forces it to use a generic configuration instead of a
configuration generated from the host environment.
This change should make image generation more reproducible. For example,
prior to this commit it was not possible to boot ami images built on Travis
in AWS.
Also, the tests were re-generated in this commit.
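For reference, the package only ships a drop-in that turns host-only mode
off; nothing else changes (the path and contents below are from memory and
may vary between releases):
```
# Install the generic config and show the drop-in it provides.
dnf install -y dracut-config-generic
cat /usr/lib/dracut/dracut.conf.d/02-generic-image.conf
# hostonly="no"
```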
coveralls doesn't work from GitHub Actions. Its "github" service type
uses the GITHUB_TOKEN from the action, which only has read access when
invoked from a forked repository.
codecov gets this right: it validates that an uploaded coverage file
originated from a GitHub action run by asking GitHub, and then uses its
OAuth credentials (through the Marketplace App) to comment and set
status.
Also, coveralls requires a third-party package to convert Go's coverage
report to a format it understands. codecov detects the format
server-side. It also handles Go's coverage format better: it highlights
lines with "some coverage" in yellow (coveralls has no concept of this).
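The upload step boils down to roughly the following (a sketch only; the
surrounding workflow configuration is not shown):
```
# Produce Go's native coverage profile and hand it to codecov's bash uploader.
go test -covermode=count -coverprofile=coverage.txt ./...
bash <(curl -s https://codecov.io/bash) -f coverage.txt
```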
Rather than use the round-robin mirror, use a fixed, stable one.
We should not over-use this, but it seems the round-robin one is
simply too unstable for our tests.
Eventually, we want to use metalink here instead, but we need to
rework the file source first, so we can do that in a way that
does not make the tests dependent on the time/place the test
case was generated (currently the URL to a specific mirror ends
up pinned in the test-case).
Signed-off-by: Tom Gundersen <teg@jklm.no>
This means that the unit tests no longer need to load the
repositories from the git repo, and in a follow-up, osbuild-composer
won't need to either.
By splitting the repositories used for testing from the system
repositories available through the weldr API we are able to extend
the system repositories without affecting the reproducibility of
the tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It is no longer necessary to install yum, nor to use a build environment,
even though we are running this in Ubuntu VMs. The rpm stage needed
by the build pipeline works just fine on stock Ubuntu.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.
osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.
One problem with this approach is that it made it impossible to cache
and share the resolved metadata, as well as the rpms. Moreover,
as the checksum was at the repository level, rather than at the
package level, we would refuse to build a pipeline as soon as there
were any changes at all to the repository, as we could no longer
guarantee that the installed packages would be the same.
As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved metadata
can be cached between runs, and that we can now pin individual
packages, rather than the entire repository. As long as the rpms are
still available, we are able to build a pipeline.
The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as it would pin the repository closest to
composer, rather than the runner. Instead, we now manually select
a baseurl for each repository, which should generally be the
most useful one. Fedora helpfully provides such baseurls, so
this should work OK.
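The dnf CLI equivalent of what composer now does per package looks roughly
like this (composer does this internally; the package name is just an example):
```
# Print the concrete URL the package would be downloaded from,
# resolved against the repository's baseurl.
dnf repoquery --location openssh-server
```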
The most important thing to verify when checking this commit is
that the image info in our test-cases remains unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This includes the packages and build-packages used by each pipeline.
For now, this information is not used anywhere, but when we move
from dnf to rpm-based pipelines, this is what will be used instead
of the repo metadata checksum.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These tests are generated by regenerating each of the fedora-30 tests
with only the distro field changed to fedora-31.
```
for case in f30-*.json; do
    cat "$case" | jq '.["compose-request"]' | jq '.distro = "fedora-31"' | sudo ./tools/generate-test-cases .osbuild | jq . | sponge "f31-${case#f30-}"
done
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far we only have f30, x86_64 images to be boot tested. In follow-ups
we expect to test all distros, all architectures and all image types
as boot tests.
And we also expect to do some sanity testing for all the blueprint
features we support without booting.
The AMI images can boot with an empty blueprint, the other image types
need an ssh key embedded in order to be able to connect and verify
that they booted successfully.
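A minimal sketch of the kind of blueprint these image types get (field names
taken from the weldr blueprint format; the key is a placeholder):
```
# Write a blueprint that only embeds an ssh key for the boot test.
cat > boot-test-blueprint.toml <<'EOF'
name = "boot-test"
version = "0.0.1"

[[customizations.user]]
name = "redhat"
key = "ssh-rsa AAAA... boot-test-key"
EOF
```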
Signed-off-by: Tom Gundersen <teg@jklm.no>
As in 616b6250c7, add the needed
ssh key to the formerly empty blueprint, and use this test-case
for booting as well as pipeline generation verification.
For the ext4-partition image type, we also needed to add the
openssh-server package, as @core is not included by default.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add the needed ssh key to the formerly empty blueprint, and use
this test-case for booting as well as pipeline generation
verification.
This merges the qemu-backed tests, the nspawn ones will be done
in a follow-up as they require more work.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This sorts the keys in the test case, but there is no behavioral
change.
This is in preparation for the cases being generated.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These cases are pointing to internal repos that have since changed. Drop them
until we have a better long-term story.
Our CI currently does not verify these cases, so this is not a behavioural
change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A manifest is a struct made up of a pipeline and a sources object. So
far all our sources objects are empty, but we have moved from
using pipelines to manifests everywhere, in preparation for
generating pipelines that require sources.
Make the same change in the test cases.
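Since the sources objects are all empty so far, updating a stored test case
amounts to wrapping the pipeline (an illustration only; file names are examples):
```
# Wrap an existing pipeline in a manifest with an empty sources object.
jq '{pipeline: ., sources: {}}' pipeline.json > manifest.json
```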
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is to avoid any confusion with the Compose struct in the store,
which contains the pinned rpmmd data and the pipeline, among other
things.
The struct in the test cases represents the user input to the compose
route, so rename it to 'compose-request' to make that clearer.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The intention is that the compose struct fully specifies the test
case, and that pipeline and image-info specify the expected outputs.
Lastly, the boot struct specifies how to boot-test the image.
The checksum does not fit into this scheme, as it is computed from
the compose by querying rpmmd, and it is then passed as an input to
distro.Pipeline in order to compute the pipeline.
Introduce a new struct, rpmmd, which will eventually contain all
the data returned from rpmmd.Depsolve and later passed to
distro.Pipeline. For now it only contains the checksum.
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When creating a pipeline, the assembler includes an image size. This
image size can be set when creating the pipeline, but if it is 0, a
default image size will be used. The default is 2 GB, except for ami
images, which use 6 GB.
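A quick way to see which default ended up in a built image (the file name is
an example; an ami image would report 6 GiB instead):
```
# The virtual size of the produced image reflects the default image size.
qemu-img info fedora-30-qcow2.qcow2 | grep 'virtual size'
```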