Newer gcc doesn't like strncpy, producing the following warning:
warning: ‘strncpy’ specified bound depends on the length of the source argument
I switched the function to plain old strcpy, as strncpy is not necessary
anyway: we allocate the destination buffer using the size returned by strlen.
Compose tests ignore the id and timestamps and verify that the rest
of the response is as expected. The testRoute function now accepts
fields to ignore and uses dropFields to remove them from the test and
response objects.
Certain fields (timestamps, uuids, etc.) are difficult to test for. The
dropFields function allows specific fields to be removed from an
unmarshalled json response body.
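For illustration, a minimal sketch of what such a helper can look like (the
signature and usage here are assumptions, not the actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// dropFields removes the named top-level keys from an unmarshalled JSON
// object, so volatile values (ids, timestamps, ...) don't affect the
// comparison.
func dropFields(obj map[string]interface{}, fields ...string) {
	for _, f := range fields {
		delete(obj, f)
	}
}

func main() {
	var got, want map[string]interface{}
	_ = json.Unmarshal([]byte(`{"id":"a1b2","status":"WAITING"}`), &got)
	_ = json.Unmarshal([]byte(`{"id":"ignored","status":"WAITING"}`), &want)

	dropFields(got, "id")
	dropFields(want, "id")
	fmt.Println(reflect.DeepEqual(got, want)) // prints: true
}
```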
These tests (will) test more than just image-info: they'll take a
blueprint, verify that `osbuild-pipeline` generates the correct
pipeline, run osbuild with that pipeline and verify that the resulting
image has the expected image-info output.
This change only includes the latter half (i.e., only moves the already
existing tests).
Also drop Python's unittest. It was hard to control its output (important
for quickly spotting failures and for keeping Travis happy). This introduces
test/run, which runs all test cases in test/cases or the ones given on
the command line.
When a failure occurs, it prints a diff of the actual and the expected
image info.
The main purpose of this is to share the structs between the server
and the client, and let the compiler ensure that our marshaling and
unmarshaling matches.
In the future we also want to make it easier to write unittests for
this code.
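As a rough sketch of the idea (the type, field, and function names here are
made up for illustration, not the actual API types):

```go
package api // hypothetical package name

import (
	"encoding/json"
	"io"
	"net/http"
)

// ComposeStatus is a hypothetical shared type; the real structs and JSON
// field names may differ. Server and client both use this one definition.
type ComposeStatus struct {
	ID     string `json:"id"`
	Status string `json:"queue_status"`
}

// Server side: marshal the shared struct.
func writeStatus(w http.ResponseWriter, s ComposeStatus) error {
	return json.NewEncoder(w).Encode(s)
}

// Client side: unmarshal into the same struct, so the two sides cannot
// silently drift apart.
func readStatus(r io.Reader) (ComposeStatus, error) {
	var s ComposeStatus
	err := json.NewDecoder(r).Decode(&s)
	return s, err
}
```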
Signed-off-by: Tom Gundersen <teg@jklm.no>
The main difference (according to image-info) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
Also, lorax generates an image with a separate /boot partition. This is
not yet addressed here, because osbuild doesn't support it yet.
The main difference (according to `rpm -qa`) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
The main difference (according to image-info) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
Also, lorax generates an image with a separate /boot partition. This is
not yet addressed here, because osbuild doesn't support it yet.
The main difference (according to image-info) is an additional package
containing a gpg key which was used to verify packages. The one
generated by lorax-composer doesn't have this, because it doesn't verify
signatures.
This is the output of disk info run against the images produced
by the specified pipelines.
Skip the actual test for now, because it is taking too long to run.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A test case is now expressed purely by dropping in a json document in
`tools/test_image_info/pipelines`. It should contain the input compose
(output format and blueprint) as well as the expected pipeline. All the
existing tests are moved over to this format.
This shares the same infrastructure as the image tests, ideally we want
to run the blueprint tests and the image tests against the same pipelines.
For now, test cases are skipped from the blueprint tests if they do not
contain a 'compose' section, and from the image tests if they do not
contain an 'expected' section. In the future we may want to make both
mandatory.
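Roughly, a test case document can be modelled like this (a sketch; the field
names and skip logic below are assumptions, not the actual schema):

```go
package tests // hypothetical package name

import "encoding/json"

// testCase is a hypothetical model of one JSON document in
// tools/test_image_info/pipelines.
type testCase struct {
	Compose  json.RawMessage `json:"compose"`  // output format and blueprint
	Pipeline json.RawMessage `json:"pipeline"` // expected pipeline
	Expected json.RawMessage `json:"expected"` // expected image-info output
}

// Blueprint tests skip cases without a 'compose' section; image tests skip
// cases without an 'expected' section.
func skip(c testCase, imageTest bool) bool {
	if imageTest {
		return len(c.Expected) == 0
	}
	return len(c.Compose) == 0
}
```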
Signed-off-by: Tom Gundersen <teg@jklm.no>
All composes now include the image size in their status response. The
image size will be 0 except for finished composes where it will be the
file length in bytes.
Alongside File, Name, and Mime type, each Image in the store will
contain the image size. This will default to 0 unless the compose's
status is FINISHED. Then, it will be the length in bytes of the file.
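A hedged sketch of how this can look in the store (the names and types here
are assumptions):

```go
package store // hypothetical package name

import "os"

// Image carries the image size alongside File, Name, and Mime type.
// Size stays 0 unless the compose is FINISHED.
type Image struct {
	File *os.File
	Name string
	Mime string
	Size int64 // file length in bytes
}

// imageSize returns the file length in bytes for a finished compose,
// and 0 otherwise.
func imageSize(f *os.File, finished bool) int64 {
	if !finished {
		return 0
	}
	info, err := f.Stat()
	if err != nil {
		return 0
	}
	return info.Size()
}
```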
This can serve as a starting point, but it shows there are a few
problems to solve: we need to verify JSON that depends on the setup;
in particular, the JSON in the queue will contain UUIDs that are
generated out of our control.
Moreover, the setup for this test only makes sense for internal tests,
so I think we may want to change the logic for whether or not a test
is supported to run externally to be per test function,
rather than per call to sendHTTP().
Signed-off-by: Tom Gundersen <teg@jklm.no>
This does not change the behavior, but refactors according to these principles:
1) No two routes are tested in the same function (but it would be ok to split
tests for one route over several functions)
2) At most one testRoute() call is made per API object, and the state is
completely set up and torn down between tests.
On top of this we should add more test cases to each of the tables, but
I'm leaving this to future PRs.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This simplifies the code a bit, and will be used in follow-up patches
to distinguish between setup calls and explicit tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A job is now in "WAITING" state exactly when it is in the channel,
once it is popped it enters "RUNNING" state. It is only possible
to update the state of a job that is in the running state.
This means that updating to "RUNNING" is entirely optional, but in
the future we may want to use this as a watchdog logic, and require
the worker to update at regular intervals to avoid being restarted.
The job queue API is updated to require a POST followed by one
or several PATCH messages to the returned ID. If a patch is sent
to an ID before the POST it is as if the object does not exist
(regardless of whether it is in the queue in WAITING state or not).
Once a job has been POSTed, it can be PATCHed zero or more times with
(still) RUNNING, before being PATCHed exactly once with either
FINISHED or FAILED.
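A simplified sketch of the state rule this implies (the names and types are
made up; the real job queue code differs):

```go
package jobqueue // hypothetical package name

import "errors"

type jobStatus string

const (
	statusWaiting  jobStatus = "WAITING"
	statusRunning  jobStatus = "RUNNING"
	statusFinished jobStatus = "FINISHED"
	statusFailed   jobStatus = "FAILED"
)

// update enforces the rule above: only a job that has been popped from the
// channel (RUNNING) can be updated; it may be set to RUNNING again any
// number of times, and to FINISHED or FAILED at most once (after which no
// further updates are accepted, since the job is no longer RUNNING).
func update(current, next jobStatus) (jobStatus, error) {
	if current != statusRunning {
		return current, errors.New("only a RUNNING job can be updated")
	}
	switch next {
	case statusRunning, statusFinished, statusFailed:
		return next, nil
	}
	return current, errors.New("invalid target status")
}
```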
Signed-off-by: Tom Gundersen <teg@jklm.no>
List of currently unsupported blueprint sections:
- [[repos.git]]
- [customizations.kernel]
- [[groups]]
- [[packages]] and [[modules]]
Some customizations have unimplemented behaviour; see the TODOs.
Not all tests will be compatible with external APIs such as Lorax. When
calling testRoute each test now declares if it can run against an
external API or not. This change allows us to test against Lorax but
skip the cases that will be invalid when not run against
osbuild-composer's API.
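One possible shape for that declaration in a table-driven test (the field
names are assumptions, and the real testRoute signature may differ):

```go
package weldr_test // hypothetical package name

// Each case declares whether it is also valid against an external API such
// as Lorax, or only against osbuild-composer's own implementation.
type routeCase struct {
	external bool // safe to run against an external API
	method   string
	path     string
	expected string
}

// When the suite targets an external API, cases with external == false are
// skipped instead of being reported as failures, roughly:
//
//	for _, c := range cases {
//		if externalAPI && !c.external {
//			continue
//		}
//		testRoute(t, c.method, c.path, c.expected)
//	}
```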
In order to maintain parity with lorax, the API needs to reply with an
error message equivalent to that used by lorax. Error messages are now
returned inside an error object that contains an id, message, and
optional status code.
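A minimal sketch of such an error object (the field names are assumptions
based on the description above; the exact lorax-compatible format may
differ):

```go
package weldr // hypothetical package name

import (
	"encoding/json"
	"net/http"
)

// responseError wraps every error returned by the API.
type responseError struct {
	ID   string `json:"id"`
	Msg  string `json:"msg"`
	Code int    `json:"code,omitempty"` // optional status code
}

// statusResponseError is a hypothetical helper that writes the error object
// with the given HTTP status code.
func statusResponseError(w http.ResponseWriter, code int, id, msg string) {
	w.WriteHeader(code)
	_ = json.NewEncoder(w).Encode(responseError{ID: id, Msg: msg, Code: code})
}
```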
For some routes, there are errors when no url parameters are passed. The
httprouter was using named parameters of the form /:param which does not
match for empty parameters. Now, it has been updated to use the
catch-all parameters of the form /*param. This change allows the case of
no parameters. However, parameters will now include a "/" as
their first character. This needs to be removed from the string in the
route handler.
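Roughly, the change looks like this (a sketch; the route and handler names
are assumptions):

```go
package weldr // hypothetical package name

import (
	"net/http"
	"strings"

	"github.com/julienschmidt/httprouter"
)

func registerRoutes(router *httprouter.Router) {
	// A named parameter (/:modules) would not match an empty parameter, so
	// the route uses a catch-all (/*modules) instead.
	router.GET("/api/v0/modules/list/*modules", modulesListHandler)
}

func modulesListHandler(w http.ResponseWriter, r *http.Request, params httprouter.Params) {
	// Catch-all parameters include the leading "/", which has to be
	// stripped before the value can be used.
	modules := strings.TrimPrefix(params.ByName("modules"), "/")
	_ = modules // ... handle the (possibly empty) module list ...
}
```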
In order to provide the proper error message for
/modules/list/<modules>, searching for the modules needed to be updated.
The requested modules and known packages are iterated over and if there
is a match the module is added to the response. Also, the found module
is dropped from the list of requested modules. If this list is not empty
after searching all of the modules then an error is returned containing
the name of the non-existent module.
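In rough terms, the lookup now behaves like this (a simplified sketch with
assumed names, not the actual function):

```go
package weldr // hypothetical package name

import "fmt"

// findModules returns the requested modules that exist among the known
// packages; if any requested module is left over after the search, an error
// naming the non-existent module is returned.
func findModules(requested []string, packages map[string]bool) ([]string, error) {
	var found, missing []string
	for _, m := range requested {
		if packages[m] {
			found = append(found, m) // match: add the module to the response
		} else {
			missing = append(missing, m) // left over: no such module
		}
	}
	if len(missing) > 0 {
		return nil, fmt.Errorf("%s is not a valid module", missing[0])
	}
	return found, nil
}
```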
Use the autogenerated test framework from VSCode (with minor fixes to
make the generated tests compile).
As we do not yet support customizations in blueprint, all the tests are
trivial.
I was not entirely sure about the best way to encode the wanted pipeline
output. I currently represent it as a JSON string and unmarshal it into
the object to compare with.
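Something along these lines (a sketch; the real test uses the actual
blueprint and pipeline types):

```go
package blueprint_test // hypothetical package name

import (
	"encoding/json"
	"reflect"
	"testing"
)

// pipeline stands in for the real pipeline type here.
type pipeline struct {
	Stages []string `json:"stages"`
}

func TestTrivialPipeline(t *testing.T) {
	// The wanted pipeline output is kept as a JSON string and unmarshalled
	// into the pipeline type before comparing.
	const want = `{"stages":["org.osbuild.dnf"]}`

	var expected pipeline
	if err := json.Unmarshal([]byte(want), &expected); err != nil {
		t.Fatal(err)
	}

	// Stands in for the pipeline generated from the blueprint.
	got := pipeline{Stages: []string{"org.osbuild.dnf"}}
	if !reflect.DeepEqual(got, expected) {
		t.Errorf("got %+v, expected %+v", got, expected)
	}
}
```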
Signed-off-by: Tom Gundersen <teg@jklm.no>
When osbuild-composer is run as a systemd service, we don't want to write
anything into working directory. Currently, we write dnf cache into it.
Instead, let's just use the default dnf cache directory.
log.Fatalf not only writes into a log, but also exits the process.
Instead, we want the caller to handle the error.
Also, the logged message is imho wrong.
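For example, instead of calling log.Fatalf inside the helper, the error is
returned so the caller can decide what to do (a sketch with assumed names):

```go
package distro // hypothetical package name

import (
	"fmt"
	"os"
)

// readConfig reports the error to its caller instead of calling
// log.Fatalf, which would also terminate the whole process.
func readConfig(path string) ([]byte, error) {
	buf, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %v", path, err)
	}
	return buf, nil
}
```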