This allows us to take advantage of the `testing` package. It also gives
the resulting test binary the common command-line arguments (the same
ones `go test` accepts).
Tests need to be compiled with `go test -c`, which injects a `Main()`
that calls the Test* functions.
This is not supported by the golang rpm macros. Thus, build this binary
by calling `go test -c` directly, taking care to pass the same linker
flags that the `%gobuild` macro uses.
Mark the test binary with the `integration` build constraint, so that
`go test ./...` doesn't pick it up; that command is meant for unit tests
only.
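For illustration, a file guarded this way might start like the
following minimal sketch (the package and test names are assumptions):

```go
// +build integration

package main

import "testing"

// TestExample is only compiled when the "integration" build tag is
// set, e.g. via `go test -c -tags integration`, so a plain
// `go test ./...` skips it.
func TestExample(t *testing.T) {
	// integration test body goes here
}
```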
The idea is to move all other test binaries to this scheme as well.
Spec file changes by Lars Karlitski <lars@karlitski.net>
A developer may want to use the output of rpmmd (build package specs,
package specs, and checksums) instead of the pipeline manifest. In that
case, they can pass the `-rpmmd` flag to osbuild-pipeline. With this
flag, it returns the output of rpmmd instead of the pipeline.
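A minimal sketch of the flag handling (the variable names are
assumptions):

```go
package main

import (
	"encoding/json"
	"flag"
	"os"
)

func main() {
	rpmmdOnly := flag.Bool("rpmmd", false, "output rpmmd results instead of the pipeline")
	flag.Parse()

	// Placeholders for the results osbuild-pipeline actually computes.
	var rpmmdOutput, pipeline interface{}

	if *rpmmdOnly {
		// Print the build package specs, package specs, and checksums.
		json.NewEncoder(os.Stdout).Encode(rpmmdOutput)
	} else {
		json.NewEncoder(os.Stdout).Encode(pipeline)
	}
}
```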
Previously, the order in which changes were made to blueprints was not
saved. I worked around this by sorting by timestamp, but timestamps only
have 1s resolution, so changes very often end up with the same
timestamp, especially when running tests.
This adds a new field to the Store: a list of the commit hashes for
each blueprint, in the order they were made.
Since this is a change to the Store schema, the first time the new code
runs with an old store state it needs to populate the commit list as
best it can from the existing data. To do that, it sorts the changes for
each blueprint by timestamp and version and saves this ordering in the
new BlueprintsCommits list.
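A sketch of the new field (the name is taken from the description
above; the JSON tag is an assumption):

```go
package main

// Store, reduced to the new field: for every blueprint name, the
// commit hashes in the order the changes were made.
type Store struct {
	BlueprintsCommits map[string][]string `json:"blueprints_commits"`
}
```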
A POST to this route will tag the latest commit of a blueprint as a new
revision. The revision numbers start at 1 and increment on each call.
If the latest commit has already been tagged, the request is ignored.
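A minimal sketch of the tagging logic (the store layout used here is an
assumption):

```go
package main

// blueprintChange is a simplified stand-in for a blueprint change
// entry in the store.
type blueprintChange struct {
	Commit   string
	Revision *int
}

// tagLatest tags the newest change with the next revision number, and
// does nothing if it is already tagged.
func tagLatest(changes []blueprintChange) {
	if len(changes) == 0 {
		return
	}
	latest := &changes[len(changes)-1]
	if latest.Revision != nil {
		return // already tagged; ignore the request
	}
	next := 1 // revisions start at 1
	for _, c := range changes {
		if c.Revision != nil && *c.Revision >= next {
			next = *c.Revision + 1
		}
	}
	latest.Revision = &next
}
```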
The purpose of this documentation is to describe how the user is
expected to work with the RCM API. It can also serve as an example for
creating automation scripts, should the RCM team want to create some.
This converts the client and weldrcheck functions to use http.Client for
connections instead of passing around the socket path and opening it for
each test.
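A minimal sketch of such a client, assuming a helper along these lines
(the function name is hypothetical):

```go
package main

import (
	"context"
	"net"
	"net/http"
)

// newUnixClient returns an http.Client that sends every request over
// the given unix socket instead of opening a TCP connection.
func newUnixClient(socket string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", socket)
			},
		},
	}
}
```

With this, tests can share one client, e.g.
`newUnixClient("/run/weldr/api.socket")`, instead of reopening the
socket for every request.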
It is the equivalent of what we already have for the Weldr API, but for
the RCM API. It should test the expected use cases (see the sketch after
this list):
* submit a compose
* get a status
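A rough sketch of the two checks (the endpoint paths and payload here
are hypothetical placeholders, not the actual RCM API routes):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// runRCMChecks submits a compose and then asks for its status.
func runRCMChecks(client *http.Client) error {
	body := bytes.NewBufferString(`{"distribution": "fedora-30"}`)
	resp, err := client.Post("http://localhost/v1/compose", "application/json", body)
	if err != nil {
		return fmt.Errorf("submitting a compose failed: %v", err)
	}
	resp.Body.Close()

	resp, err = client.Get("http://localhost/v1/compose/status")
	if err != nil {
		return fmt.Errorf("getting a status failed: %v", err)
	}
	resp.Body.Close()
	return nil
}
```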
A manifest is a struct made up of a pipeline and a sources object. So
far all our sources objects are empty, but we have moved from using
pipelines to manifests everywhere, in preparation for generating
pipelines that require sources.
Make the same change in the test cases.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is to avoid any confusion with the Compose struct in the store,
which contains the pinned rpmmd data and the pipeline, among other
things.
The struct in the test cases represents the user input to the compose
route, so rename it to 'compose-request' to make that clearer.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is unused for now, but we support passing the actual package specs
returned from rpmmd.Depsolve() to distro.Pipeline(), so we should
support doing that in the tests as well.
So far all our distros ignore the passed-in packages, so no test cases
make use of this yet.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The intention is that the compose struct fully specifies the test case,
and that pipeline and image-info specify the expected outputs.
Lastly, the boot struct specifies how to boot-test the image.
The checksum does not fit into this scheme, as it is computed from
the compose by querying rpmmd, and it is then passed as an input to
distro.Pipeline in order to compute the pipeline.
Introduce a new struct, rpmmd, which will eventually contain all
the data returned from rpmmd.Depsolve and later passed to
distro.Pipeline. For now it only contains the checksum.
This is not a functional change.
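A sketch of the resulting test-case layout (the field and JSON tag
names are assumptions based on the description above):

```go
package main

import "encoding/json"

// testCase mirrors the structure described above: compose is the
// input, rpmmd carries the data returned from rpmmd.Depsolve (only
// the checksum for now), and the remaining fields are the expected
// outputs plus the boot-test description.
type testCase struct {
	Compose json.RawMessage `json:"compose"`
	RPMMD   struct {
		Checksum string `json:"checksum"`
	} `json:"rpmmd"`
	Pipeline  json.RawMessage `json:"pipeline"`
	ImageInfo json.RawMessage `json:"image-info"`
	Boot      json.RawMessage `json:"boot"`
}
```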
Signed-off-by: Tom Gundersen <teg@jklm.no>
Some `composer-cli` commands operate on the current directory, for
example saving blueprints or images. Add the `TemporaryWorkDir` type,
which helps with changing to a temporary working directory and cleaning
everything up afterwards.
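A minimal sketch of what such a type could look like (the constructor
name and error handling are assumptions):

```go
package main

import (
	"io/ioutil"
	"os"
)

// TemporaryWorkDir remembers the previous working directory, switches
// to a fresh temporary one, and restores everything on Close.
type TemporaryWorkDir struct {
	oldDir string
	tmpDir string
}

func NewTemporaryWorkDir(prefix string) (*TemporaryWorkDir, error) {
	oldDir, err := os.Getwd()
	if err != nil {
		return nil, err
	}
	tmpDir, err := ioutil.TempDir("", prefix)
	if err != nil {
		return nil, err
	}
	if err := os.Chdir(tmpDir); err != nil {
		os.RemoveAll(tmpDir)
		return nil, err
	}
	return &TemporaryWorkDir{oldDir: oldDir, tmpDir: tmpDir}, nil
}

// Close changes back to the old working directory and removes the
// temporary one.
func (d *TemporaryWorkDir) Close() error {
	if err := os.Chdir(d.oldDir); err != nil {
		return err
	}
	return os.RemoveAll(d.tmpDir)
}
```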
The alternative would have been to pass a working directory to all
calls of `runComposerCLI`, but that seemed like it would require a lot
of churn, which I didn't deem worth it for testing code.
The spec file in the current working directory might have changes. When
building rpms with the commit hash in the version, they ought to be
built with the spec file from that hash as well.
Since a recent commit (04db4fa), we are no longer a provider of
lorax-composer. That change was needed because when installing
cockpit-composer, which depends on lorax-composer, dnf chose us as the
lorax-composer provider instead of the original lorax-composer package.
As we are not yet ready to fully replace lorax-composer, this broke the
cockpit-composer tests.
This commit is the first part of a long-term fix. Osbuild-composer is
now a provider of weldr. Soon, lorax-composer will also be a provider of
weldr. After that, cockpit-composer will be switched to depend on weldr.
It will also suggest lorax-composer, which will force dnf to use
lorax-composer as the default weldr backend. Experimenting with
osbuild-composer will still be possible: since it also provides weldr,
there will be no need to install lorax-composer to satisfy
cockpit-composer's dependencies.
We're not fully compatible with lorax-composer's API yet, and we don't
provide lorax-composer's systemd unit files.
As a result, cockpit-composer's integration tests fail, because `dnf
install lorax-composer` on Fedora can result in installing
osbuild-composer in some cases.
Prior to this patch, `make rpm` would produce rpms that have the latest
tag as their versions. This was confusing, because one could never know
which contents are in a locally built rpm.
Change this so that the version is always based on the commit hash of
HEAD. This is easy: the golang macros read a `%commit` macro when it
exists and do this for us.
To simplify further, only define `%_topdir` as ./rpmbuild and otherwise
use rpmbuild's well-known directory structure (SPECS, SOURCES, RPMS,
...), to make it easier to find build results.
Build the specfile, tarball, source rpms, and rpms with `make rpm`,
without separate sub-targets. We can reintroduce them if they're needed
somewhere.
Also remove the `check-working-directory` target. It should be clear
from the output that only the currently-committed files are included,
because the resulting tarball and rpms contain the commit hash. Without
the check, one can work on the Makefile without having to commit all the
time, for example ;)
If the user creates a new blueprint with no version specified, the
blueprint struct uses "0.0.0" as the default version. Blueprint tests
for a blueprint with an empty version now expect no error.
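For illustration, the defaulting amounts to something like this sketch
(the helper is hypothetical):

```go
package main

// ensureVersion fills in the default version for a new blueprint when
// the user didn't specify one.
func ensureVersion(version string) string {
	if version == "" {
		return "0.0.0"
	}
	return version
}
```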
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
For now, this simply wraps Pipeline and Sources, and returns the
resulting manifest object. In the future, Pipeline and Sources may be
dropped from the interface.
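A sketch of how such a wrapper might look (the signatures are
simplified stand-ins for the real interface):

```go
package main

// Simplified stand-ins for the real types; the actual methods take
// more parameters.
type Pipeline struct{}
type Sources struct{}

type Manifest struct {
	Pipeline Pipeline `json:"pipeline"`
	Sources  Sources  `json:"sources"`
}

type distribution struct{}

func (d *distribution) Pipeline() (Pipeline, error) { return Pipeline{}, nil }
func (d *distribution) Sources() Sources            { return Sources{} }

// Manifest wraps the results of Pipeline and Sources into a single
// manifest object.
func (d *distribution) Manifest() (Manifest, error) {
	pipeline, err := d.Pipeline()
	if err != nil {
		return Manifest{}, err
	}
	return Manifest{Pipeline: pipeline, Sources: d.Sources()}, nil
}
```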
Signed-off-by: Tom Gundersen <teg@jklm.no>
A manifest is simply a struct containing a sources object and a
pipeline object. We want to always store and transfer pipelines together
with their sources, and will use the manifest for this.
When serialized, a manifest can be the input to `osbuild`, just like a
bare pipeline can be. This means there will be no need to pass in
sources separately on the command line.
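A minimal sketch of the type and its serialized form (the JSON tag
names are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Manifest struct {
	Sources  json.RawMessage `json:"sources"`
	Pipeline json.RawMessage `json:"pipeline"`
}

func main() {
	m := Manifest{
		Sources:  json.RawMessage(`{}`),
		Pipeline: json.RawMessage(`{}`),
	}
	b, _ := json.Marshal(m)
	// The serialized manifest can be passed to `osbuild` directly.
	fmt.Println(string(b)) // {"sources":{},"pipeline":{}}
}
```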
Signed-off-by: Tom Gundersen <teg@jklm.no>
Unify the github actions workflows under `tests.yml` and add an RPM build
job to match the one for osbuild.
Signed-off-by: Major Hayden <major@redhat.com>
We were verifying two things: that the passed distroArg exists in the
distribution mapping in common/types.go, and that it is an actually
registered distro. Since you cannot have distros registered that don't
correspond to a type, the first check is unnecessary.
Merge the two tests by moving the (much better) error message down into
the second test. This makes DistributionExists redundant, because
Registry.GetDistro() checks this implicitly.
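A sketch of the merged check (the types and error text here are
illustrative):

```go
package main

import "fmt"

type Distro interface{}

type Registry struct {
	distros map[string]Distro
}

// GetDistro returns nil when the distro was never registered, which
// makes a separate existence check redundant.
func (r *Registry) GetDistro(name string) Distro {
	return r.distros[name]
}

func validateDistro(r *Registry, name string) error {
	if r.GetDistro(name) == nil {
		return fmt.Errorf("unknown distribution: %s", name)
	}
	return nil
}
```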
Also, move ListDistributions() to the Registry object, because we want
to show distributions that are actually registered.
Add a test which checks that Registry.List() works and that all included
distributions register correctly.
systemd >= 240 sets this variable to `/var/cache/` + the value of
CacheDirectory. osbuild-composer must run on earlier versions though
(specifically RHEL 8.2).
This changes it to an int pointer so that the JSON will output null.
This means it needs to be checked for nil or for 0 in Go. 0 is not a
valid revision in the weldr response; revisions always start at 1 and
increment for each new revision tag, so either nil or 0 is a valid way
to indicate that it isn't set.
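A minimal sketch of the difference (the struct name is an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type change struct {
	// A nil Revision serializes as null; a plain int would always
	// serialize as a number, with 0 ambiguously meaning "not set".
	Revision *int `json:"revision"`
}

func main() {
	b, _ := json.Marshal(change{})
	fmt.Println(string(b)) // {"revision":null}

	rev := 1
	b, _ = json.Marshal(change{Revision: &rev})
	fmt.Println(string(b)) // {"revision":1}
}
```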
This runs tests against a running API server, either lorax-composer or
osbuild-composer, and reports the results to stdout. It uses
/run/weldr/api.socket to communicate with the server.
These tests build on the client functions to run integration tests on a
running API server.
It uses the reflect package to examine the methods attached to the
checkBlueprintsV0 struct and runs the ones whose names start with
'Check', checking the type signature of each function and failing the
test if any of them doesn't match.
This will make it easier to add more checks without needing to add
boilerplate call/registration of the functions in the top level runner.
Just add the new function with the right name and signature and it will
be run when checkBlueprintsV0.Run() is called.
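A minimal sketch of this reflection-based dispatch (the expected check
signature is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"reflect"
	"strings"
)

type checkBlueprintsV0 struct{}

// CheckList is discovered and run automatically because its name
// starts with "Check" and its signature matches.
func (c *checkBlueprintsV0) CheckList(client *http.Client) bool { return true }

// Run finds all Check* methods with the expected signature and calls
// them; a mismatched signature fails the run.
func (c *checkBlueprintsV0) Run(client *http.Client) bool {
	want := reflect.TypeOf(func(*http.Client) bool { return true })
	v := reflect.ValueOf(c)
	for i := 0; i < v.NumMethod(); i++ {
		name := v.Type().Method(i).Name
		if !strings.HasPrefix(name, "Check") {
			continue
		}
		if v.Method(i).Type() != want {
			fmt.Printf("%s has the wrong signature\n", name)
			return false
		}
		if !v.Method(i).Call([]reflect.Value{reflect.ValueOf(client)})[0].Bool() {
			return false
		}
	}
	return true
}
```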
Checks for other API routes should be added to their own modules. There
will be some duplication of the Run function in each, but I think that
it will help keep things more manageable by separating them instead of
putting them all into a single giant Run() call.