The test works by executing osbuild with a predefined pipeline. Then the
image boots and the testing script creates an SSH connection to the running
VM. If everything goes well, `systemctl is-system-running` is executed;
if it reports `running`, the test case passes.
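A minimal sketch of that check, assuming the VM's SSH port is forwarded to
localhost:2222 and root login is allowed (both details are assumptions; the
real testing script may differ):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkSystemRunning asks systemd inside the booted VM whether it reached
// the "running" state. Host, port and user are placeholder values.
func checkSystemRunning() error {
	out, err := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-p", "2222", "root@localhost",
		"systemctl", "is-system-running").Output()
	// systemctl exits non-zero for any state other than "running",
	// so check both the error and the printed state.
	state := strings.TrimSpace(string(out))
	if err != nil || state != "running" {
		return fmt.Errorf("system state %q: %v", state, err)
	}
	return nil
}

func main() {
	if err := checkSystemRunning(); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("PASS")
}
```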
The JSON definition of the test case also contains a blueprint that
should generate the desired pipeline. That didn't work for me, so it is
included only for future use from the golang unit tests.
The delete source handler now removes the leading "/" from the
parameters passed to it. Previously the name passed to the store started
with a "/", so the source could not be found and was never deleted.
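For illustration, the fix boils down to trimming the prefix before the store
lookup; the store shape and names below are simplified stand-ins, not the
real code:

```go
package main

import (
	"fmt"
	"strings"
)

// sources stands in for the store's source map.
var sources = map[string]string{"fish": "https://example.com/repo"}

// deleteSource mirrors the handler fix: the route parameter still carries
// a leading "/" (e.g. "/fish" for /projects/source/delete/fish), so strip
// it before looking the source up.
func deleteSource(param string) error {
	name := strings.TrimPrefix(param, "/")
	if _, ok := sources[name]; !ok {
		return fmt.Errorf("unknown source %q", name)
	}
	delete(sources, name)
	return nil
}

func main() {
	fmt.Println(deleteSource("/fish")) // <nil>: found and deleted after trimming
}
```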
lorax-composer uses the host's repositories for building images. This is
prone to accidental configuration errors and duplicates functionality
(adding custom repositories via the source API is much more explicit).
However, blueprints don't specify the distribution they're based on.
This is something they should do in the future to enable cross-distro
image builds. Until then, read `/etc/os-release` to detect the host OS
and use that as the base distro for a blueprint.
Detection works by concatenating the ID field with a `-` and the
VERSION_ID field. This mandates how additional distributions should
register themselves with the `distro` package.
If composer cannot build the detected distro, fall back to fedora-30.
Use this detection everywhere that fedora-30 was hard-coded before.
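A sketch of that detection logic (a standalone approximation, not the actual
helper in the `distro` package):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectHostDistro reads /etc/os-release and returns "<ID>-<VERSION_ID>",
// e.g. "fedora-30", stripping optional quotes around the values.
func detectHostDistro() (string, error) {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		return "", err
	}
	defer f.Close()

	vars := map[string]string{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		parts := strings.SplitN(scanner.Text(), "=", 2)
		if len(parts) == 2 {
			vars[parts[0]] = strings.Trim(parts[1], `"`)
		}
	}
	if err := scanner.Err(); err != nil {
		return "", err
	}
	return vars["ID"] + "-" + vars["VERSION_ID"], nil
}

func main() {
	name, err := detectHostDistro()
	if err != nil || name == "-" {
		// The real code falls back when composer cannot build the
		// detected distro; this only covers failed detection.
		name = "fedora-30"
	}
	fmt.Println(name)
}
```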
Introduce the `distro` package, which contains an interface for OS
implementations. Its main purpose is to convert a blueprint to a
distro-specific pipeline.
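The exact method set isn't spelled out in this description, so the following
is only a rough shape of the interface, with illustrative names and stand-in
types:

```go
// Package distro abstracts over the operating systems composer can target.
package distro

// Blueprint and Pipeline stand in for the real types from the blueprint
// and pipeline packages.
type Blueprint struct{ Name string }
type Pipeline struct{}

// Distro is a sketch of the interface: each implementation (such as the
// distro/fedora30 package) turns a blueprint and an output format into a
// distro-specific pipeline.
type Distro interface {
	// Name returns the identifier used for detection, e.g. "fedora-30".
	Name() string
	// Pipeline converts a blueprint into a pipeline for this distro.
	Pipeline(b *Blueprint, outputFormat string) (*Pipeline, error)
}
```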
Also introduce the `distro/fedora30` package. It is the first
implementation of the distro interface. Most of its code has been copied
with minimal modifications from the blueprint package.
The `blueprint` package is now back to serving a single purpose:
representing a weldr blueprint. It does not depend on the `pipeline`
package anymore.
Change osbuild-composer and osbuild-pipeline to use the new API,
hard-coding "fedora-30". This looks a bit weird now, but is the same
behavior as before.
All test cases now also take a "distro" key in the "compose" object.
Add tests for adding and deleting user defined sources. Update existing
tests for source info. The source being added is the fish repository.
This is the same source that the cockpit-composer tests attempt to add.
The api now supports user-defined sources alongside the existing base
repository. The routes /projects/source/new and
/projects/source/delete/<name> have been added.
/projects/source/info/<names> and /projects/source/list have been
updated.
The SourceConfig type now comes from the store instead of being declared
in the api.
The api is initialized with a base repo but users may want to add their
own custom repos. Based on lorax's implementation, the store now
supports sources. The source object is similar to but not the same as
the repos used by dnf. The store can now add, delete, and get info about
sources.
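A rough idea of what such a source object might look like, modeled on
lorax-composer's source format; the field names and store methods here are
assumptions, not the exact code:

```go
// Package store: illustrative excerpt only.
package store

// SourceConfig sketches a user-defined source. It resembles, but is not
// identical to, a dnf repo definition.
type SourceConfig struct {
	Name     string `json:"name"`
	Type     string `json:"type"` // e.g. "yum-baseurl", "yum-metalink", "yum-mirrorlist"
	URL      string `json:"url"`
	CheckGPG bool   `json:"check_gpg"`
	CheckSSL bool   `json:"check_ssl"`
	System   bool   `json:"system"` // base repos are system sources and cannot be deleted
}

// The store operations described above would then look roughly like:
//
//	PushSource(source SourceConfig)       // add or replace a source
//	DeleteSource(name string)             // remove a user-defined source
//	GetSource(name string) *SourceConfig  // look a source up for /info
```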
Make each command accept a `repos` key containing repository
descriptions.
Make the weldr API pass the repository this way. Nothing should change,
because the repos are the same as before (Fedora 30).
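Purely as an illustration of what a repository description under the `repos`
key could carry (the exact fields are not spelled out here, so treat all of
them as assumptions):

```go
// Package weldr: illustrative excerpt only.
package weldr

// RepoConfig is a guess at one entry under the "repos" key: enough
// information for the depsolver to resolve packages during a compose.
type RepoConfig struct {
	Id         string `json:"id"`
	Name       string `json:"name"`
	BaseURL    string `json:"baseurl,omitempty"`
	Metalink   string `json:"metalink,omitempty"`
	Mirrorlist string `json:"mirrorlist,omitempty"`
	GPGKey     string `json:"gpgkey,omitempty"`
}
```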
A condition was missing for the case when a compose push fails.
As the previous commit introduces customizations, which can return even
more errors, it is important to report such errors.
For now the API returns only a 500 HTTP code. In the future we should
distinguish between failures and blueprint validation errors.
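A self-contained sketch of the added condition; the handler and the push
function are placeholders, not the real weldr code:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// pushCompose stands in for the store call that can now fail in more ways
// because of customizations.
func pushCompose() error {
	return errors.New("compose push failed")
}

// composeHandler shows the added condition: a failed push is reported as
// a 500 instead of being silently ignored.
func composeHandler(w http.ResponseWriter, r *http.Request) {
	if err := pushCompose(); err != nil {
		// For now everything maps to 500; blueprint validation errors
		// should eventually get a status code of their own.
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintln(w, "compose accepted")
}

func main() {
	http.HandleFunc("/compose", composeHandler)
	http.ListenAndServe("localhost:8080", nil)
}
```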
Newer gcc doesn't like strncpy and produces the following warning:
warning: ‘strncpy’ specified bound depends on the length of the source argument
I switched the function to plain old strcpy, as strncpy is not necessary
anyway: we allocate the destination buffer using the size returned by strlen.
Compose tests ignore the id and timestamps and verify that the rest
of the response is as expected. The testRoute function now accepts
fields to ignore and uses dropFields to remove them from the test and
response objects.
Certain fields (timestamps, uuids, etc.) are difficult to test for. The
dropFields function allows specific fields to be removed from an
unmarshalled json response body.
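A sketch of how such a helper can work on a decoded response body (the real
dropFields may differ, e.g. in whether it recurses):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dropFields removes the named keys from a decoded JSON value, recursing
// into nested objects and arrays, so ids and timestamps can be ignored
// when comparing a response against the expected value.
func dropFields(v interface{}, fields ...string) {
	switch value := v.(type) {
	case map[string]interface{}:
		for _, field := range fields {
			delete(value, field)
		}
		for _, nested := range value {
			dropFields(nested, fields...)
		}
	case []interface{}:
		for _, element := range value {
			dropFields(element, fields...)
		}
	}
}

func main() {
	var body interface{}
	json.Unmarshal([]byte(`{"id":"e4b2","queue_status":"WAITING","job_created":1.0}`), &body)
	dropFields(body, "id", "job_created")
	fmt.Println(body) // map[queue_status:WAITING]
}
```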
These tests (will) test more than just image-info: they'll take a
blueprint, verify that `osbuild-pipeline` generates the correct
pipeline, run osbuild with that pipeline and verify that the resulting
image has the expected image-info output.
This change only includes the latter half (i.e., only moves the already
existing tests).
Also drop python's unittest: it was hard to control its output (important
for quickly spotting failures and for keeping travis happy). This introduces
test/run, which runs all test cases in test/cases or the ones given on
the command line.
When a failure occurs, it prints a diff of the actual and the expected
image info.
The main purpose of this is to share the structs between the server
and the client, and let the compiler ensure that our marshaling and
unmarshaling match.
In the future we also want to make it easier to write unittests for
this code.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The main difference (according to image-info) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
Also, lorax generates an image with a separate /boot partition. This is
not yet addressed here, because osbuild doesn't support it yet.
The main difference (according to `rpm -qa`) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
The main difference (according to image-info) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
Also, lorax generates an image with a separate /boot partition. This is
not yet addressed here, because osbuild doesn't support it yet.
The main difference (according to image-info) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
This is the output of disk info run against the images produced
by the specified pipelines.
Skip the actual test for now, because it takes too long to run.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A test case is now expressed purely by dropping in a json document in
`tools/test_image_info/pipelines`. It should contain the input compose
(output format and blueprint) as well as the expected pipeline. All the
existing tests are moved over to this format.
This shares the same infrastructure as the image tests; ideally we want
to run the blueprint tests and the image tests against the same pipelines.
For now, test cases are skipped from the blueprint tests if they do not
contain a 'compose' section, and from the image tests if they do not
contain an 'expected' section. In the future we may want to make both
mandatory.
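Expressed as Go types, such a test case file might look roughly like this
(field names are guesses based on the description above, not the actual
schema):

```go
// Package testcase: illustrative excerpt only.
package testcase

// composeRequest mirrors the "compose" section: the requested output
// format plus the blueprint the pipeline should be generated from.
type composeRequest struct {
	OutputFormat string                 `json:"output-format"`
	Blueprint    map[string]interface{} `json:"blueprint"`
}

// testCase mirrors one json document in tools/test_image_info/pipelines.
// Blueprint tests skip cases without "compose"; image tests skip cases
// without "expected".
type testCase struct {
	Compose  *composeRequest        `json:"compose,omitempty"`
	Pipeline map[string]interface{} `json:"pipeline"`
	Expected map[string]interface{} `json:"expected,omitempty"` // expected image-info output
}
```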
Signed-off-by: Tom Gundersen <teg@jklm.no>
All composes now include the image size in their status response. The
image size will be 0 except for finished composes where it will be the
file length in bytes.
Alongside File, Name, and Mime type, each Image in the store will
contain the image size. This will default to 0 unless the compose's
status is FINISHED. Then, it will be the length in bytes of the file.
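A minimal sketch of that rule; the type and names are simplified stand-ins
for the store's image entry:

```go
package main

import (
	"fmt"
	"os"
)

// Image is a stand-in for the store's image entry described above.
type Image struct {
	File string
	Name string
	Mime string
	Size int64 // stays 0 unless the compose is FINISHED
}

// imageSize returns the file length in bytes for a finished compose and
// 0 for every other status.
func imageSize(path, status string) int64 {
	if status != "FINISHED" {
		return 0
	}
	info, err := os.Stat(path)
	if err != nil {
		return 0
	}
	return info.Size()
}

func main() {
	img := Image{File: "image.qcow2", Name: "image.qcow2", Mime: "application/octet-stream"}
	img.Size = imageSize(img.File, "FINISHED")
	fmt.Println(img.Size)
}
```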
This can serve as a starting point, but it shows there are a few
problems to solve: we need to verify json that depends on the setup; in
particular, the json in the queue will contain UUIDs that are generated
outside of our control.
Moreover, the setup for this test only makes sense for internal tests,
so I think we may want to change the logic that decides whether or not a
test can be run externally to be per test function, rather than per call
to sendHTTP().
Signed-off-by: Tom Gundersen <teg@jklm.no>
This does not change the behavior, but refactors according to these principles:
1) No two routes are tested in the same function (but it would be ok to split
tests for one route over several functions)
2) At most one testRoute() call is made per API object, and the state is
completely set up and torn down between tests.
On top of this we should add more test cases to each of the tables, but
I'm leaving this to future PRs.
Signed-off-by: Tom Gundersen <teg@jklm.no>