commit 4c40faebe6
Author: Tom Gundersen <teg@jklm.no>
Date:   2020-03-15 19:38:59 +01:00

distro: move from dnf-based to rpm-based pipelines for all distros

Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.

osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.

Among the problems with this approach is that it made it
impossible to cache and share the resolved metadata, as well as
the rpms. Moreover, as the checksum was at the repository level,
rather than at the package level, we would refuse to build a
pipeline as soon as there was any change at all to the repository,
as we could no longer guarantee that the installed packages would
be the same.

As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved
metadata can be cached between runs, and that we can now pin
individual packages, rather than the entire repository. As a
result, as long as the rpms are still available, we are able to
build a pipeline.
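
As a rough sketch of the idea (the type and field names here are
illustrative, not composer's actual ones), package-level pinning
means a build carries one entry per resolved rpm instead of a
repository-level checksum:

    package main

    import "fmt"

    // PinnedPackage is an illustrative stand-in for a resolved,
    // pinned rpm: identified by its own checksum, not by the
    // checksum of the repository metadata it came from.
    type PinnedPackage struct {
        Name     string // package name, e.g. "bash"
        Checksum string // checksum of the rpm itself
        URL      string // resolved location the worker fetches from
    }

    func main() {
        pinned := []PinnedPackage{
            {Name: "bash", Checksum: "sha256:aabb...", URL: "https://example.org/repo/b/bash-5.0.rpm"},
        }
        // The build can proceed as long as each pinned rpm is
        // still retrievable, regardless of any other changes in
        // the repository.
        for _, p := range pinned {
            fmt.Printf("%s %s <- %s\n", p.Name, p.Checksum, p.URL)
        }
    }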

The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
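
Content-addressing is what makes the sharing safe: if the cache
key is the rpm's own checksum, two runs that pin the same rpm hit
the same cache entry. A sketch, with a made-up helper name:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // cachePath derives a cache location from the rpm's checksum,
    // so identical rpms are stored once and shared between runs.
    func cachePath(root, checksum string) string {
        return filepath.Join(root, "sources", checksum)
    }

    func main() {
        fmt.Println(cachePath("/var/cache/osbuild", "sha256:aabb..."))
    }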

One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as that would pin the repository mirror
closest to composer, rather than to the worker. Instead, we now
manually select a baseurl for each repository, which should
generally be the most useful one. Fedora helpfully provides such
baseurls, so this should work well.
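
A minimal sketch of that selection logic, assuming illustrative
names rather than composer's real ones:

    package main

    import (
        "errors"
        "fmt"
    )

    // repoConfig is an illustrative repository definition.
    type repoConfig struct {
        ID       string
        Metalink string // resolves to a mirror near whoever asks
        BaseURL  string // a fixed, generally useful mirror
    }

    // selectBaseURL prefers an explicit baseurl over a metalink:
    // composer resolves rpm locations on behalf of a possibly
    // remote worker, so a metalink would pick mirrors close to
    // composer rather than close to the worker.
    func selectBaseURL(r repoConfig) (string, error) {
        if r.BaseURL != "" {
            return r.BaseURL, nil
        }
        return "", errors.New("no baseurl configured for " + r.ID)
    }

    func main() {
        fedora := repoConfig{
            ID:      "fedora",
            BaseURL: "https://dl.fedoraproject.org/pub/fedora/linux/releases/31/Everything/x86_64/os/",
        }
        url, err := selectBaseURL(fedora)
        if err != nil {
            panic(err)
        }
        fmt.Println(url)
    }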

The most important thing to verify when reviewing this commit is
that the image info in our test cases remains unchanged.

Signed-off-by: Tom Gundersen <teg@jklm.no>

osbuild-composer testing information

Test binaries, regardless of their scope/type (e.g. unit, API, integration), must follow the conventions of the Go testing package, that is, implement only TestXxx functions, with their setup/teardown when necessary, in a yyy_test.go file.

Test scenario discovery, execution and reporting will be handled by go test.

Some test files are executed directly by go test during rpm build time and/or in CI. These are usually unit tests. Scenarios which require more complex setup, e.g. a running osbuild-composer, are not intended to be executed directly by go test at build time. Instead, they are intended to be executed as stand-alone test binaries on a clean system which has been configured in advance (because this is easier/more feasible). These stand-alone test binaries are also compiled via go test -c -o during rpm build or via make build. See Integration testing below for more information.
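
For example, a minimal test file following these conventions might look like this (the package, file, and function names are made up for illustration):

    // version_test.go: only TestXxx functions are run as tests;
    // helpers may live alongside them in the same _test.go file.
    package version

    import (
        "fmt"
        "testing"
    )

    // format is a trivial helper under test.
    func format(major, minor int) string {
        return fmt.Sprintf("%d.%d", major, minor)
    }

    // TestFormat is discovered and executed automatically by go test,
    // and can be compiled into a stand-alone binary via go test -c -o.
    func TestFormat(t *testing.T) {
        if got := format(1, 2); got != "1.2" {
            t.Fatalf("format(1, 2) = %q, want %q", got, "1.2")
        }
    }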

Notes on asserts and comparing expected values

When comparing against expected values in test functions, you should use the testify/assert or testify/require packages. Both of them provide an impressive array of assertions, with the possibility to use formatted strings as error messages. For example:

assert.Nilf(t, err, "Failed to set up temporary repository: %v", err)

If you want to fail immediately, without executing any further asserts, use the require package instead of the assert package; otherwise you may end up with panics and nil-pointer problems.
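
A short sketch of the difference (the file paths here are made up):

    package example

    import (
        "os"
        "testing"

        "github.com/stretchr/testify/assert"
        "github.com/stretchr/testify/require"
    )

    func TestReadConfig(t *testing.T) {
        f, err := os.Open("testdata/config.json")
        // require aborts the test on failure, so the code below
        // never runs against a nil *os.File.
        require.NoErrorf(t, err, "Failed to open config: %v", err)
        defer f.Close()

        info, err := f.Stat()
        // assert records the failure but continues; that is only
        // safe when later statements do not depend on the value.
        assert.NoErrorf(t, err, "Failed to stat config: %v", err)
        if err == nil {
            assert.NotZero(t, info.Size())
        }
    }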

Stand-alone test binaries also have the -test.failfast option.

Integration testing

These tests consume the osbuild-composer API surface via the composer-cli command line interface. The implementation is under cmd/osbuild-tests/.

The easiest way to get started with integration testing from a git checkout is:

  • dnf -y install rpm-build
  • dnf -y builddep golang-github-osbuild-composer.spec
  • make rpm to build the software under test
  • dnf install output/x86_64/golang-github-osbuild-composer-*.rpm - this will install osbuild-composer along with its -debuginfo, -debugsource and -tests packages
  • systemctl start osbuild-composer
  • /usr/libexec/tests/osbuild-composer/osbuild-tests to execute the test suite. It is best to use a fresh system for installing and running the tests!

NOTE:

The easiest way to start osbuild-composer is via systemd because it takes care of setting up the UNIX socket for the API server.
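
As a rough illustration of what systemd socket activation looks like on the Go side (this is a minimal sketch using the go-systemd activation package, not osbuild-composer's actual server code):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/coreos/go-systemd/v22/activation"
    )

    func main() {
        // systemd passes pre-opened sockets as file descriptors;
        // Listeners() wraps them as net.Listener values, so the
        // service never has to create the UNIX socket itself.
        listeners, err := activation.Listeners()
        if err != nil || len(listeners) == 0 {
            panic(fmt.Sprintf("no socket passed by systemd: %v", err))
        }
        http.Serve(listeners[0], http.NotFoundHandler())
    }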

If you are working on a pull request that adds more integration tests (without modifying osbuild-composer itself) then you can execute the test suite from the local directory without installing it:

  • make build - will build everything under cmd/
  • ./osbuild-tests - will execute the freshly built integration test suite