This sorts the keys in the test case, but there is no behavioral
change.
This is in preparation for the cases being generated.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These cases are pointing to internal repos that have since changed. Drop them
until we have a better long-term story.
Our CI currently does not verify these cases, so this is not a behavioural
change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The test case json files will increase in complexity with the move from
dnf to json. Their quantity will also continue to grow as new
distros, architectures, boot methods, image types, and blueprint
customizations become available. The generate-test-cases script
simplifies the process of creating new test cases. It accepts a compose
request and boot method as input and then uses osbuild-pipeline,
osbuild, and image-info to generate the test case.
[tomegun: some clean-ups and allow store to be reused]
Make the blueprint parameter a bool; if set, read a blueprint from
stdin, otherwise use an empty blueprint.
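A minimal sketch of the intended behaviour, with illustrative names
rather than the actual osbuild-pipeline source:

```go
package main

import (
	"encoding/json"
	"flag"
	"os"
)

// blueprintJSON is a stand-in for the real blueprint type.
type blueprintJSON map[string]interface{}

func main() {
	// -blueprint is now a bool instead of a file path.
	useBlueprint := flag.Bool("blueprint", false, "read a blueprint from stdin")
	flag.Parse()

	bp := blueprintJSON{} // an empty blueprint by default
	if *useBlueprint {
		if err := json.NewDecoder(os.Stdin).Decode(&bp); err != nil {
			panic(err)
		}
	}
	// ... depsolve and generate the pipeline from bp ...
}
```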
Signed-off-by: Tom Gundersen <teg@jklm.no>
A ComposeRequest is data used to submit a compose to the store, so it
should live in that package.
Remove the json marshalling test, because ComposeRequest is never
marshalled to JSON.
This will allow using types from `distro` in the ComposeRequest struct.
The response is different for JSON and TOML requests. A JSON request
always returns a 200, but any blueprints with errors will be listed in
the errors list.
If a TOML request has an error it returns a 400 with the error in a
standard API error response with status set to false.
The JSON and TOML parsers differ in how they handle an empty body, so
check for a ContentLength of zero first and return a "Missing
blueprint" error to the client.
Includes updated tests for the JSON path, and new tests for empty TOML
blueprints.
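A minimal sketch of the empty-body check, assuming a plain net/http
handler; the error shape is illustrative, not the exact weldr code:

```go
package weldr

import (
	"encoding/json"
	"net/http"
)

// apiError approximates the standard API error response described above.
type apiError struct {
	Status bool     `json:"status"`
	Errors []string `json:"errors"`
}

func blueprintsNewHandler(w http.ResponseWriter, r *http.Request) {
	// The JSON and TOML parsers disagree on empty bodies, so catch
	// that case before dispatching to either parser.
	if r.ContentLength == 0 {
		w.WriteHeader(http.StatusBadRequest)
		json.NewEncoder(w).Encode(apiError{Status: false, Errors: []string{"Missing blueprint"}})
		return
	}

	if r.Header.Get("Content-Type") == "text/x-toml" {
		// TOML: a parse error becomes a 400 with status set to false.
	} else {
		// JSON: always respond 200; blueprints with errors are
		// collected into the response's errors list instead.
	}
}
```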
If the blueprint doesn't exist, or the commit for the selected blueprint
doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.
If an unknown blueprint or workspace is deleted it will now return an
error.
Also fixes the blueprints DELETE handlers to return the correct error to
the client. Includes a new test.
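A sketch of the store-side check this implies, with simplified types;
the actual store code differs:

```go
package store

import (
	"fmt"
	"sync"
)

type Store struct {
	mu         sync.Mutex
	blueprints map[string]interface{} // simplified
	workspace  map[string]interface{} // simplified
}

// DeleteBlueprint now reports unknown names instead of silently
// succeeding, so the DELETE handler can pass the error on to the client.
func (s *Store) DeleteBlueprint(name string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.blueprints[name]; !ok {
		return fmt.Errorf("Unknown blueprint: %s", name)
	}
	delete(s.blueprints, name)
	return nil
}
```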
These are done through GitHub Actions, which are much quicker. Leave
the image tests in place until they are moved over to proper
integration tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
There should be no need to run unit tests on specific architectures,
so move them over to GitHub Actions and rename "Lint" to "Checks", as
it is a bit more generic now.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This allows us to take advantage of the `testing` package. It also gives
the resulting test binary common command line arguments (same as `go
test`).
Tests need to be compiled with `go test -c`, which injects a `Main()`
that calls the Test* functions.
This is not supported by the golang rpm macros. Thus, build this binary
by calling `go test -c` directly, but taking care to pass the same
linker flags as the `%gobuild` macro.
Mark the test binary with the `integration` build constraint, so that
`go test ./...` doesn't pick it up. That is only for unit tests.
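For illustration, a test file under this scheme might look like the
following; the test name and output path are hypothetical:

```go
//go:build integration
// +build integration

package main

import "testing"

// Compiled into a standalone binary with something like
//   go test -c -tags integration -o tests/osbuild-tests
// The injected Main() gives the binary the usual `go test` flags.
func TestSomething(t *testing.T) {
	// ...
}
```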
The idea is to move all other test binaries to this scheme as well.
Spec file changes by Lars Karlitski <lars@karlitski.net>
A developer may want to use the output of rpmmd (build package specs,
package specs, and checksums) instead of the pipeline manifest. In this
case they may pass the -rpmmd flag to osbuild-pipeline. With this flag,
instead of returning the pipeline, it will return the output of rpmmd.
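A minimal sketch of the flag plumbing, with the rpmmd output shape
simplified; names are illustrative:

```go
package main

import (
	"encoding/json"
	"flag"
	"os"
)

// rpmmdOutput stands in for the depsolve results described above.
type rpmmdOutput struct {
	BuildPackages json.RawMessage `json:"build-packages"`
	Packages      json.RawMessage `json:"packages"`
	Checksums     json.RawMessage `json:"checksums"`
}

func main() {
	rpmmdFlag := flag.Bool("rpmmd", false, "output rpmmd results instead of the pipeline")
	flag.Parse()

	// ... depsolve ...

	if *rpmmdFlag {
		json.NewEncoder(os.Stdout).Encode(rpmmdOutput{})
		return
	}
	// ... otherwise print the pipeline manifest as before ...
}
```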
Previously the order in which changes were made to blueprints was not
being saved. I worked around this by sorting by timestamp, but
timestamps only have 1s resolution, so changes are very likely to end
up with the same timestamp, especially when running tests.
This adds a new field to the Store: a list of the commit hashes for
each blueprint, in the order they were made.
Since this is a change to the Store schema, the first time the new code
is run with an old store state it needs to populate the commit list, as
best it can, from the existing data. To do that it sorts the changes for
each blueprint by timestamp and version and saves this ordering into the
new BlueprintsCommits list.
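A sketch of the schema change, with stand-in types and unrelated fields
omitted:

```go
package store

// Blueprint and Change stand in for the real types.
type Blueprint struct{ Name string }
type Change struct {
	Timestamp string
	Blueprint Blueprint
}

type Store struct {
	Blueprints        map[string]Blueprint         `json:"blueprints"`
	BlueprintsChanges map[string]map[string]Change `json:"changes"`

	// New: the commit hashes for each blueprint, oldest first. When an
	// old store state is loaded, this is reconstructed, as best as
	// possible, by sorting each blueprint's changes by timestamp and
	// version.
	BlueprintsCommits map[string][]string `json:"commits"`
}
```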
A POST to this route will tag the latest commit of a blueprint as a new
revision. The revision numbers start at 1 and increment on each call.
If the latest commit has already been tagged, it ignores the request.
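A sketch of the tagging rule, using a stand-in change type; the real
store code differs:

```go
package store

// Change stands in for the real blueprint change type; Revision is nil
// until the commit is tagged.
type Change struct {
	Revision *int
}

// tagLatest tags the newest commit with the next revision number,
// starting at 1, and ignores the request if it is already tagged.
func tagLatest(changes []Change) {
	if len(changes) == 0 {
		return
	}
	latest := &changes[len(changes)-1]
	if latest.Revision != nil {
		return
	}
	next := 1
	for _, c := range changes {
		if c.Revision != nil && *c.Revision >= next {
			next = *c.Revision + 1
		}
	}
	latest.Revision = &next
}
```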
The purpose of this documentation is to describe how the user is
expected to work with the RCM API. It can also serve as an example for
creating automation scripts if the RCM team wants to create some.
This converts the client and weldrcheck functions to use http.Client for
connections instead of passing around the socket path and opening it for
each test.
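The pattern looks roughly like this; the helper name is ours, and the
socket path is passed in by the caller:

```go
package main

import (
	"context"
	"net"
	"net/http"
)

// newUnixClient returns an http.Client whose transport dials the given
// unix socket, regardless of the host in the request URL.
func newUnixClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", socketPath)
			},
		},
	}
}
```

Each test can then reuse the same client and issue plain requests, e.g.
`client.Get("http://localhost/api/status")`, instead of re-opening the
socket.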
It is the equivalent of what we already have for the Weldr API, but
this one is for the RCM API. It should test the expected use cases:
* submit a compose
* get a status
A manifest is a struct made up of a pipeline and a sources object. So
far all our sources objects are empty, but we have moved from
using pipelines to manifests everywhere, in preparation for
generating pipelines that require sources.
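A sketch of the shape, with the concrete pipeline and sources types
simplified to raw JSON:

```go
package osbuild

import "encoding/json"

// Manifest sketches the struct described above.
type Manifest struct {
	Pipeline json.RawMessage `json:"pipeline"`
	Sources  json.RawMessage `json:"sources"` // so far always an empty object
}
```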
Make the same change in the test cases.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is to avoid any confusion with the Compose struct in the store,
which contains the pinned rpmmd data and the pipeline, among other
things.
The struct in the test cases represents the user input to the compose
route, so rename it 'compose-request' to make that clearer.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is unused for now, but we support passing the actual package specs
returned from rpmmd.Depsolve() to distro.Pipeline(), so we should support
doing that also in the tests.
So far all our distros ignore the passed-in packages, so no test-cases
make use of this yet.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The intention is that the compose struct fully specifies the test
case, and pipeline and image-info specify the expected outputs.
Lastly, the boot struct specifies how to boot-test the image.
The checksum does not fit into this scheme, as it is computed from
the compose by querying rpmmd, and it is then passed as an input to
distro.Pipeline in order to compute the pipeline.
Introduce a new struct, rpmmd, which will eventually contain all
the data returned from rpmmd.Depsolve and later passed to
distro.Pipeline. For now it only contains the checksum.
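A sketch of the resulting test case layout, with field types simplified
to raw JSON; only the field names come from the test cases themselves:

```go
package testcase

import "encoding/json"

type testCase struct {
	ComposeRequest json.RawMessage `json:"compose-request"` // fully specifies the test case
	RpmMD          json.RawMessage `json:"rpmmd"`           // depsolve results; for now only the checksum
	Pipeline       json.RawMessage `json:"pipeline"`        // expected output
	ImageInfo      json.RawMessage `json:"image-info"`      // expected output
	Boot           json.RawMessage `json:"boot"`            // how to boot-test the image
}
```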
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Some `composer-cli` commands operate on the current directory, for
example saving blueprints or images. Add the `TemporaryWorkDir` type,
which helps with changing to a temporary working directory and
cleaning everything up afterwards.
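A sketch of the helper, assuming *testing.T based error reporting in a
test file; method names are illustrative:

```go
package main

import (
	"os"
	"testing"
)

// TemporaryWorkDir creates a temporary directory and changes into it;
// Close changes back and removes it.
type TemporaryWorkDir struct {
	OldWorkDir string
	Path       string
}

func NewTemporaryWorkDir(t *testing.T, prefix string) TemporaryWorkDir {
	old, err := os.Getwd()
	if err != nil {
		t.Fatal(err)
	}
	path, err := os.MkdirTemp("", prefix)
	if err != nil {
		t.Fatal(err)
	}
	if err := os.Chdir(path); err != nil {
		t.Fatal(err)
	}
	return TemporaryWorkDir{old, path}
}

func (d TemporaryWorkDir) Close(t *testing.T) {
	if err := os.Chdir(d.OldWorkDir); err != nil {
		t.Fatal(err)
	}
	if err := os.RemoveAll(d.Path); err != nil {
		t.Fatal(err)
	}
}
```

A test then brackets its `composer-cli` calls with
`d := NewTemporaryWorkDir(t, "composer-cli-")` and `defer d.Close(t)`.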
The alternative would have been to pass a working directory to all calls
of `runComposerCLI`, but that seemed like it would require a lot of
change, which I didn't deem worthwhile for testing code.
The spec file in the current working directory might have changes. When
building rpms with the commit hash in the version, they ought to be
built with the spec file from that hash as well.