The distro argument and the restriction to run only tests for the same distro
as the host's were confusing. This commit removes them. The behaviour is now
as follows: by default, all test cases in the test case directory are run.
If test cases are given as arguments, only those are run and the test case
directory is ignored.
We're currently rewriting all the integration tests to use the Go
testing framework. This commit does the switch for the image tests.
I decided not to use the testing framework in functions which are not
directly tied to testing (booting images, running osbuild). I think it's
reasonable to use classic error handling there and propagate the errors
to the places directly tied to testing, where the testing library is used.
This lets us reuse the code in other parts of the project if needed.
There's now nothing preventing us from using the newer and better Go
implementation of image tests.
Also, the travis config was simplified to be more readable.
s390x builds use LXC, but we need to run in a real VM for the osbuild
tests to work.
We did not yet have s390x support anyway, so these were noops.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It is no longer necessary to install yum, nor to use a build environment,
even though we are running this in Ubuntu VMs. The rpm stage needed by the
build pipeline works just fine on stock Ubuntu.
Signed-off-by: Tom Gundersen <teg@jklm.no>
From Travis' point of view, no longer distinguish between the tests except
by their distro and architecture.
We must distinguish based on architecture, as we do not yet support
cross-architecture builds. We also split by distro, as there is no
benefit to running tests for different distros on the same VM: they
will not be able to share any caches, so we might as well
parallelize them.
Tests that apply to the same distro/architecture combo are now
always run on the same VM, so as to utilize any caching we are
able to do.
Now that local_boot and empty_blueprint tests have been merged,
this will not increase the number of tests currently run on a
given VM, so it should not affect the total running time of
the tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These are done through GitHub Actions, which are much quicker. Leave
the image tests in place until they are moved over to proper integration
tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A script that runs various go tools (mod tidy, mod vendor, and fmt for
now).
The idea is that it prepares the source to be ready for master. As such,
running it on master shouldn't modify any files. Make sure of that by
adding a test.
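A minimal sketch of such a script and its CI check could look like this (the script name, and the exact set of tools beyond the three mentioned above, are assumptions for illustration):

```shell
#!/bin/bash
# Sketch of a source-preparation script (hypothetical name and layout;
# only the three tools named in the commit message are included).
set -euo pipefail

prepare_source() {
    go mod tidy      # add missing and prune unused module requirements
    go mod vendor    # refresh the vendor/ directory to match go.mod
    go fmt ./...     # apply canonical Go formatting
}

# CI check: after preparing the source on master, the working tree must
# be unchanged; a dirty tree means a contributor skipped the script.
verify_unchanged() {
    git diff --exit-code
}
```

The `git diff --exit-code` trick is what makes this testable: the command fails exactly when running the preparation step modified any tracked file.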
These seemed not to get triggered (no available workers?), so disable
them for now. We do want PPC support, but it is currently our lowest
priority.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Currently we still only build for x86_64, but now the test suite is
prepared for hooking up other architectures.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We can now select specific cases, but whether to check image-info or to
boot the image is determined purely by the contents of the JSON test
case.
We still run the tests as two travis workers just to avoid the timeout;
this should clearly be reworked.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This passes the redhat user and its SSH key as an ISO image to our
qemu instances, making sure images relying on cloud-init rather than
hardcoded user credentials can be used in our tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
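A seed ISO like the one described above can be sketched as follows (the user name comes from the commit message; the file names, key path handling, and use of genisoimage are assumptions):

```shell
#!/bin/bash
# Sketch of building a cloud-init NoCloud seed ISO for qemu.
# The "cidata" volume label is what cloud-init's NoCloud datasource
# looks for; user-data and meta-data are its expected file names.
set -euo pipefail

make_cloudinit_iso() {
    local pubkey_file=$1 output_iso=$2
    local workdir
    workdir=$(mktemp -d)

    # user-data: create the "redhat" user with an authorized SSH key
    cat > "$workdir/user-data" <<EOF
#cloud-config
users:
  - name: redhat
    ssh_authorized_keys:
      - $(cat "$pubkey_file")
EOF
    # meta-data may be empty, but the file must exist
    : > "$workdir/meta-data"

    genisoimage -output "$output_iso" -volid cidata -joliet -rock \
        "$workdir/user-data" "$workdir/meta-data"
}

# qemu would then receive the ISO as an extra drive, e.g.:
#   qemu-system-x86_64 ... -drive file=seed.iso,media=cdrom
```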
osbuild has a concept of runners now: scripts that set up a build
environment. Update the osbuild submodule to latest master, change
`Pipeline` to the new buildroot description format, and use the
`org.osbuild.fedora30` runner from the fedora30 distro.
1) additional qemu tests for ami, vmdk, vhd, and openstack image types
2) a new type of systemd-nspawn test for the tar, ext4, and partitioned
disk types
The systemd-nspawn tests use the loopback network interface directly from
the host, so it is necessary to tweak the settings of its SSH server.
This is done in a "script" stage using a simple "sed" command.
The tests work by executing osbuild with a predefined pipeline. Then the
image boots and the testing script opens an SSH connection to the running
VM. If everything goes fine, `systemctl is-system-running` is executed
with the result `running` and the test case passes.
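The boot-and-check flow described above can be sketched like this (the host address, key path, user name, and retry limits are assumptions; the real test drives qemu or systemd-nspawn around this loop):

```shell
#!/bin/bash
# Sketch of polling a booting VM until systemd reports it healthy.
set -euo pipefail

wait_until_running() {
    local host=$1
    local state
    for _ in $(seq 1 60); do
        # `systemctl is-system-running` prints "running" once boot has
        # finished with no failed units; any other state means we wait.
        state=$(ssh -o StrictHostKeyChecking=no -i ./id_rsa \
                    "redhat@$host" systemctl is-system-running || true)
        if [ "$state" = "running" ]; then
            echo "boot OK"
            return 0
        fi
        sleep 5
    done
    echo "system never reached the 'running' state" >&2
    return 1
}
```

Note that `systemctl is-system-running` exits non-zero for any state other than `running`, so the `|| true` keeps the polling loop alive under `set -e` while the VM is still starting up.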
The JSON definition of the test case also contains a blueprint that
should generate the desired pipeline, but it didn't work for me, so I'm
including it for future use from the Go unit tests.
These tests (will) test more than just image-info: they'll take a
blueprint, verify that `osbuild-pipeline` generates the correct
pipeline, run osbuild with that pipeline and verify that the resulting
image has the expected image-info output.
This change only includes the latter half (i.e., only moves the already
existing tests).
Also drop Python's unittest. It was hard to control its output (important
for quickly spotting failures and for keeping travis happy). This
introduces test/run, which runs all test cases in test/cases, or only the
ones given on the command line.
When a failure occurs, it prints a diff of the actual and the expected
image info.
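The per-case comparison could be sketched as follows (the `tools/image-info` path, file names, and the use of jq for normalization are assumptions, not necessarily what test/run does):

```shell
#!/bin/bash
# Sketch of comparing actual vs. expected image-info output for one case.
set -euo pipefail

check_case() {
    local image=$1 expected_json=$2

    # Normalize both sides with jq -S (sorted keys) so key order cannot
    # cause a spurious mismatch, and print a readable diff on failure.
    if ! diff -u \
        <(jq -S . "$expected_json") \
        <(tools/image-info "$image" | jq -S .); then
        echo "FAIL: $image does not match $expected_json" >&2
        return 1
    fi
    echo "PASS: $image"
}
```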
Add two kinds of tests, with one case each:
1. Run image-info against an osbuild pipeline. Uses osbuild
from a submodule to make an image from the pipeline.
2. Run image-info against an existing image, fetched from the internet.