Convert weldrcheck to use the standard Go testing framework along with
the github.com/stretchr/testify/require assertion package.
This also removes the cmd/osbuild-weldr-tests package and builds the test binary
directly from the weldrcheck package. This makes it easier to organize
the code instead of putting it all into a single main_test.go file.
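A minimal sketch of the resulting test style, assuming testify's require
package is used to abort on the first failed assertion (the test itself is
illustrative, not real test code):

    package weldrcheck

    import (
        "encoding/json"
        "testing"

        "github.com/stretchr/testify/require"
    )

    // Illustrative only: shows the `go test` + require style after the conversion.
    func TestUnmarshalStatus(t *testing.T) {
        var status struct {
            API string `json:"api"`
        }
        err := json.Unmarshal([]byte(`{"api": "1"}`), &status)
        require.NoError(t, err)
        require.Equal(t, "1", status.API)
    }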
By default, the image test executable runs only the test cases for the same
distro as the host. On Travis there's Ubuntu, so we need to adjust the
behaviour and run the cases for a distro specified by a command line
argument.
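A rough sketch of how such an argument could be wired up; the flag name and
the host-detection helper are assumptions for illustration, not the actual
implementation:

    package main

    import (
        "flag"
        "fmt"
        "os"
        "strings"
    )

    // Hypothetical flag; when empty, the host's distro is used.
    var distroFlag = flag.String("distro", "", "run the test cases for this distro instead of the host's")

    // hostDistroID returns the ID field from /etc/os-release, e.g. "fedora" or "ubuntu".
    func hostDistroID() string {
        data, err := os.ReadFile("/etc/os-release")
        if err != nil {
            return ""
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        flag.Parse()
        distro := *distroFlag
        if distro == "" {
            distro = hostDistroID()
        }
        fmt.Println("running test cases for distro:", distro)
    }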
We need to use different values for path constants when running the tests
on Travis CI. This is the first step towards achieving this.
Note that this commit may be reverted when Travis CI is dropped.
Use the $STATE_DIRECTORY environment variable, which systemd sets because
we use StateDirectory=osbuild-composer in the service unit.
Also change the systemd unit to set STATE_DIRECTORY explicitly, because
RHEL ships an older systemd version that doesn't set this variable for us.
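A minimal sketch of how the variable is consumed on the Go side; the fallback
path is only an assumption for illustration:

    package main

    import (
        "fmt"
        "os"
    )

    // stateDirectory returns the directory systemd provides because of
    // StateDirectory=osbuild-composer in the service unit.
    func stateDirectory() string {
        if dir := os.Getenv("STATE_DIRECTORY"); dir != "" {
            return dir
        }
        // Assumed fallback for running outside of systemd, illustration only.
        return "/var/lib/osbuild-composer"
    }

    func main() {
        fmt.Println("state directory:", stateDirectory())
    }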
This makes two changes simultaneously, to avoid too much churn:
- move accessors from being on the blueprint struct to the
customizations struct, and
- pass the customizations struct rather than the whole blueprint
as argument to distro.Manifest().
@larskarlitski pointed out in a previous review that it feels
redundant to pass the whole blueprint as well as the list of
packages to the Manifest function. Indeed it is, so this
simplifies things a bit.
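A rough sketch of the intended shape; the field and method shown are
illustrative, not the exact API:

    package blueprint

    // Illustrative only: accessors live on Customizations rather than on
    // Blueprint, and manifest generation takes the customizations, not the
    // whole blueprint.
    type Customizations struct {
        Hostname *string `json:"hostname,omitempty" toml:"hostname,omitempty"`
    }

    func (c *Customizations) GetHostname() (string, bool) {
        if c == nil || c.Hostname == nil {
            return "", false
        }
        return *c.Hostname, true
    }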
Signed-off-by: Tom Gundersen <teg@jklm.no>
This was never actually used anywhere, as passing it to dnf-json
was a noop.
We may want to reconsider the concept of a source/repo name and
how it differs from an ID, but for now drop the name.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between generating
the pipeline and calling osbuild. We achieved this by always updating
to the most recent metadata on every call to rpmmd.Depsolve that
would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This commit makes the osbuild-image-tests binary run the same set of tests
as the old test/run script.
Changes from test/run:
- qemu/nspawn are now killed gracefully: first, SIGTERM is sent, and if
the process doesn't exit before the timeout, SIGKILL is sent (see the
sketch after this list). I changed this because nspawn leaves some
artifacts behind when killed by SIGKILL.
- the unsharing of network namespace now works differently because of
systemd issue #15079
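A minimal sketch of the graceful-kill approach mentioned above, assuming the
qemu/nspawn process was started through os/exec; the timeout handling is
illustrative:

    package boot // illustrative package name

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // killGracefully sends SIGTERM and escalates to SIGKILL if the process
    // has not exited before the timeout.
    func killGracefully(cmd *exec.Cmd, timeout time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        select {
        case err := <-done:
            return err
        case <-time.After(timeout):
            // The process ignored SIGTERM; force it and reap it.
            if err := cmd.Process.Kill(); err != nil {
                return err
            }
            return <-done
        }
    }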
Make the blueprint parameter a bool: if set, a blueprint is read from
stdin, otherwise an empty blueprint is used.
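Roughly, and only as an illustration (the flag name, input format, and
blueprint type here are stand-ins):

    package main

    import (
        "encoding/json"
        "flag"
        "log"
        "os"
    )

    func main() {
        // Hypothetical flag: when set, a blueprint is read from stdin,
        // otherwise an empty blueprint is used.
        readBlueprint := flag.Bool("blueprint", false, "read a blueprint from stdin")
        flag.Parse()

        blueprint := map[string]interface{}{} // stands in for the real blueprint type
        if *readBlueprint {
            if err := json.NewDecoder(os.Stdin).Decode(&blueprint); err != nil {
                log.Fatalf("cannot parse blueprint: %v", err)
            }
        }
        log.Printf("using blueprint: %v", blueprint)
    }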
Signed-off-by: Tom Gundersen <teg@jklm.no>
This allows us to take advantage of the `testing` package. It also gives
the resulting test binary common command line arguments (same as `go
test`).
Tests need to be compiled with `go test -c`, which injects a `Main()`
that calls the Test* functions.
This is not supported by the golang rpm macros. Thus, build this binary
by calling `go test -c` directly, but taking care to pass the same
linker flags as the `%gobuild` macro.
Mark the test binary with the `integration` build constraint, so that
`go test ./...` doesn't pick it up; that invocation is only for unit tests.
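A sketch of what the constraint and build invocation look like; the output
path and exact flags are placeholders, the real values live in the spec file:

    // +build integration

    // Files carrying this constraint are skipped by a plain `go test ./...`
    // run and only end up in the integration test binary, built roughly via:
    //
    //     go test -c -tags integration -ldflags "${GO_LDFLAGS}" -o <output> <package>
    //
    // using the same linker flags as the %gobuild macro.
    package weldrcheck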
The idea is to move all other test binaries to this scheme as well.
Spec file changes by Lars Karlitski <lars@karlitski.net>
A developer may want to use the output of rpmmd (build package specs,
package specs, and checksums) instead of the pipeline manifest. In this
case they may pass the -rpmmd flag to osbuild-pipeline. With this flag,
instead of returning the pipeline, it will return the output of rpmmd.
The purpose of this documentation is to describe how the user is
expected to work with the RCM API. It can also serve as an example for
creating automation scripts if the RCM team wants to create some.
This converts the client and weldrcheck functions to use http.Client for
connections instead of passing around the socket path and opening it for
each test.
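A minimal sketch of an http.Client that dials a unix socket, which is roughly
what such a conversion looks like:

    package client

    import (
        "context"
        "net"
        "net/http"
    )

    // NewUnixClient returns an http.Client whose connections always go to the
    // given unix socket, regardless of the host in the request URL.
    func NewUnixClient(socketPath string) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", socketPath)
                },
            },
        }
    }

Requests can then use any placeholder host, since only the URL path matters
once the connection is routed to the socket.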
It is the equivalent of what we already have for the Weldr API, but this one
is for the RCM API. It should test the expected use cases:
* submit a compose
* get a status
A manifest is a struct made up of a pipeline and a sources object. So
far all our sources objects are empty, but we have moved from
using pipelines to manifests everywhere, in preparation for
generating pipelines that require sources.
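A rough sketch of that shape (field names are illustrative):

    package osbuild

    import "encoding/json"

    // Manifest pairs a pipeline with the sources needed to build it; for now
    // the sources object is empty for all distros.
    type Manifest struct {
        Sources  Sources  `json:"sources"`
        Pipeline Pipeline `json:"pipeline"`
    }

    type Sources map[string]json.RawMessage

    type Pipeline struct {
        // stages, assembler, ...
    }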
Make the same change in the test cases.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Some `composer-cli` commands operate on the current directory, for
example saving blueprints or images. Add the `TemporaryWorkDir` type,
which helps with changing to a temporary working directory and cleaning
everything up afterwards.
The alternative would have been to pass a working directory to all calls
of `runComposerCLI`, but that seemed like it would require a lot of
change, which I didn't deem worthwhile for testing code.
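A minimal sketch of such a helper, assuming it stays test-only code:

    package testutils // illustrative package name

    import (
        "io/ioutil"
        "os"
        "testing"
    )

    // TemporaryWorkDir changes into a freshly created temporary directory and
    // remembers the previous working directory so it can be restored.
    type TemporaryWorkDir struct {
        old, dir string
    }

    func NewTemporaryWorkDir(t *testing.T) TemporaryWorkDir {
        old, err := os.Getwd()
        if err != nil {
            t.Fatal(err)
        }
        dir, err := ioutil.TempDir("", "work-")
        if err != nil {
            t.Fatal(err)
        }
        if err := os.Chdir(dir); err != nil {
            t.Fatal(err)
        }
        return TemporaryWorkDir{old: old, dir: dir}
    }

    // Close changes back to the previous working directory and removes the
    // temporary one together with everything written into it.
    func (w TemporaryWorkDir) Close(t *testing.T) {
        if err := os.Chdir(w.old); err != nil {
            t.Error(err)
        }
        if err := os.RemoveAll(w.dir); err != nil {
            t.Error(err)
        }
    }

Tests would use it as `w := NewTemporaryWorkDir(t)` followed by
`defer w.Close(t)`.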
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We were verifying two things: whether the passed distroArg exists in the
distribution mapping in common/types.go and whether it is an actually
registered distro. Since you cannot have distros registered that don't
correspond to a type, the first check is unnecessary.
Merge the two tests by moving the (much better) error message down into
the second test. This makes DistributionExists redundant, because
Registry.GetDistro() checks this implicitly.
Also, move ListDistributions() to the Registry object, because we want
to show distributions that are actually registered.
Add a test which checks that Registry.List() works and that all included
distributions register correctly.
Return errors from all distro's New() functions instead of logging and
returning nil. Also, return errors instead of panicking from
NewRegistry() and NewDefaultRegistry().
WithSingleDistro() doesn't follow go's naming convention for creating
objects (New*). Rename it to NewRegistry() and rename the old
NewRegistry() to NewDefaultRegistry().
The idea is that NewRegistry() can be used to create full Registry
objects from outside the package. NewDefaultRegistry() is a convenience
function that creates a Registry with all known distros.
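A sketch of the naming scheme described above; signatures are approximate:

    package distro

    import "fmt"

    // Distro stands in for whatever interface the distributions implement.
    type Distro interface {
        Name() string
    }

    type Registry struct {
        distros map[string]Distro
    }

    // NewRegistry builds a Registry from an explicit list of distros and can
    // be used from outside the package.
    func NewRegistry(distros ...Distro) (*Registry, error) {
        reg := &Registry{distros: make(map[string]Distro)}
        for _, d := range distros {
            name := d.Name()
            if _, exists := reg.distros[name]; exists {
                return nil, fmt.Errorf("distro %q registered twice", name)
            }
            reg.distros[name] = d
        }
        return reg, nil
    }

    // GetDistro returns nil for an unregistered name, which is why a separate
    // "does this distribution exist" check is redundant.
    func (r *Registry) GetDistro(name string) Distro {
        return r.distros[name]
    }

    // List returns the names of all registered distributions.
    func (r *Registry) List() []string {
        names := make([]string, 0, len(r.distros))
        for name := range r.distros {
            names = append(names, name)
        }
        return names
    }

NewDefaultRegistry() would simply call NewRegistry() with all known distros,
propagating any error from their New() functions.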
rpmmd looked at the CACHE_DIRECTORY environment variable to set a path
for the dnf repository cache. Aside from being a smelly thing to do
from a library, this breaks osbuild-pipeline and osbuild-dnf-json-tests,
which don't run as systemd services and thus don't have CACHE_DIRECTORY
set.
Explicitly pass the cache directory to rpmmd. Keep using a path based on
CACHE_DIRECTORY for osbuild-composer. Use the user's `.cache` directory
for osbuild-pipeline and a temporary directory for the tests.
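Roughly, and only as an illustration of that split (the directory layout and
names are assumptions):

    package cachedir // illustrative package name

    import (
        "io/ioutil"
        "os"
        "path/filepath"
    )

    // cacheDirFor picks the dnf cache location depending on the caller:
    // the composer service keeps using CACHE_DIRECTORY (set by systemd),
    // osbuild-pipeline uses the user's cache directory, and tests get a
    // throwaway temporary directory.
    func cacheDirFor(caller string) (string, error) {
        switch caller {
        case "osbuild-composer":
            return filepath.Join(os.Getenv("CACHE_DIRECTORY"), "rpmmd"), nil
        case "osbuild-pipeline":
            cache, err := os.UserCacheDir()
            if err != nil {
                return "", err
            }
            return filepath.Join(cache, "osbuild-pipeline", "rpmmd"), nil
        default: // tests
            return ioutil.TempDir("", "rpmmd-cache-")
        }
    }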
The ./test/run test suite has served us well over the last months. However,
there is currently a major effort to run the better-defined integration
test suite on a CI. Nonetheless, two very important parts are still missing
from the integration test suite: inspecting the image with image-info
and booting the image. This commit begins the work on this matter by porting
a part of the ./test/run suite to Go. Currently, only the image-info tests work;
the rest will come in the following commits.
Compared to the lorax-composer test suite, only ext4-filesystem and
partitioned-disk are built without asserting anything other than that
the build succeeds. For the rest of the images we usually try to
boot them and verify that the resulting VM works.
Right now the implementation expects the RCM socket to live in the same
unit file as other osbuild-composer sockets. This would require a
solution where we ship the osbuild-composer.socket in two different
versions: one for regular usage, one for RCM. But that is very
inconvenient and it would probably require some weird scriptlets (and
scriptlets are bad!).
After this change, the RCM API socket lives in a separate unit file, and the
API runs only if the socket unit is activated. The unit file itself was
introduced in previous commits.
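A minimal, standard-library-only illustration of serving the API only when
systemd actually passed in the socket; the real code may well use a systemd
activation helper library and proper fd-to-unit matching instead:

    package main

    import (
        "log"
        "net"
        "net/http"
        "os"
        "strconv"
    )

    // rcmListener returns a listener for the file descriptor systemd passed
    // us, or nil when the socket unit was not activated. Per the
    // sd_listen_fds protocol, passed descriptors start at 3; picking the
    // right fd is simplified here.
    func rcmListener() net.Listener {
        nfds, err := strconv.Atoi(os.Getenv("LISTEN_FDS"))
        if err != nil || nfds < 1 {
            return nil
        }
        l, err := net.FileListener(os.NewFile(3, "rcm-socket"))
        if err != nil {
            return nil
        }
        return l
    }

    func main() {
        if l := rcmListener(); l != nil {
            log.Fatal(http.Serve(l, http.NotFoundHandler())) // RCM handler would go here
        }
        log.Print("RCM socket not activated, not serving the RCM API")
    }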
There's a use case for running workers on a different machine than
the composer, for example when there's a need for making images for an
architecture different from the one the composer is running on. Although
osbuild has some kind of support for cross-architecture builds, we still
consider it an experimental, not-yet-production-ready feature.
This commit adds support for the composer and worker to communicate using TCP.
To ensure safe communication through the wild world of the Internet, TLS is not
only supported but required when using TCP. Both server and client
TLS authentication are required. This means both sides must have their own
private key/certificate pair and both certificates must be signed by one
certificate authority. Examples of how to generate all this fancy crypto stuff
can be found in the Makefile.
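A condensed sketch of what the mutual TLS boils down to on the composer side;
the certificate paths match the list further below, everything else is
illustrative:

    package jobqueue // illustrative package name

    import (
        "crypto/tls"
        "crypto/x509"
        "io/ioutil"
    )

    // composerTLSConfig presents the composer's certificate and requires the
    // connecting worker to present one signed by the same CA (mutual TLS).
    func composerTLSConfig() (*tls.Config, error) {
        caPEM, err := ioutil.ReadFile("/etc/osbuild-composer/ca-crt.pem")
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cert, err := tls.LoadX509KeyPair(
            "/etc/osbuild-composer/composer-crt.pem",
            "/etc/osbuild-composer/composer-key.pem",
        )
        if err != nil {
            return nil, err
        }

        return &tls.Config{
            Certificates: []tls.Certificate{cert},
            ClientAuth:   tls.RequireAndVerifyClientCert,
            ClientCAs:    pool,
        }, nil
    }

The worker side is the mirror image: it loads worker-key.pem/worker-crt.pem
into Certificates and puts the CA into RootCAs instead of ClientCAs.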
Changes on the composer side:
When osbuild-remote-worker.socket is started before osbuild-composer.service,
osbuild-composer also serves the jobqueue API on this socket. The unix domain
socket is not affected by this change - it is enabled at all times,
independently of the remote one. The osbuild-remote-worker.socket listens
by default on TCP port 8700.
When running the composer with remote worker socket enabled, the following
files are required:
- /etc/osbuild-composer/ca-crt.pem (CA certificate)
- /etc/osbuild-composer/composer-key.pem (composer private key)
- /etc/osbuild-composer/composer-crt.pem (composer certificate)
Changes on the worker side:
osbuild-worker now has a --remote argument taking the address of a composer
instance. When present, the worker will try to establish a TLS-secured TCP
connection with the composer. When not present, the worker will use
the unix domain socket method. The unit template file osbuild-remote-worker
was added to simplify the spawning of workers. For example
systemctl start osbuild-remote-worker@example.com
starts a worker which will attempt to connect to the composer instance
running on the address example.com.
When running the worker with --remote argument, the following files are
required:
- /etc/osbuild-composer/ca-crt.pem (CA certificate)
- /etc/osbuild-composer/worker-key.pem (worker private key)
- /etc/osbuild-composer/worker-crt.pem (worker certificate)
By default, osbuild-composer.service will always spawn one local worker.
If you don't want this, mask the default worker unit with:
systemctl mask osbuild-worker@1.service
Closing remarks:
Remember that both the composer and worker certificates must be signed by
the same CA!
This is needed for unit tests, because it wasn't possible to mock the
rpmmd module before. This also requires that the checksum is moved to
the compose request and evaluated in the endpoint handler instead of in
the push-compose path. I think it makes sense to have the checksum in the compose
request directly.
Also a "module platform ID" is required now, but we don't have the
"global" distribution any more, so this patch introduces mapping from a
distribution to the module platform ID.
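A sketch of what such a mapping might look like; the distro names and IDs
here are illustrative, not authoritative:

    package common

    import "fmt"

    // ModulePlatformID maps a distribution to its module platform ID, which
    // dnf needs for modularity-aware depsolving.
    func ModulePlatformID(distro string) (string, error) {
        ids := map[string]string{
            "fedora-31": "platform:f31", // illustrative values
            "rhel-8":    "platform:el8",
        }
        id, ok := ids[distro]
        if !ok {
            return "", fmt.Errorf("no module platform ID known for distro %q", distro)
        }
        return id, nil
    }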
The name was misleading because the function could do more than just
download the package list. In PushComposeRequest it is also used to fetch
checksums for the repositories, therefore I decided to rename it to
reflect this usage.
When creating a pipeline with the default image size, the size should no
longer be set to 0. Instead, the size is fetched using the distro
function GetSizeForOutputType, which can return the default image size
for a given image type. This size can then be passed into the pipeline.
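Roughly, the call site changes from hard-coding zero to asking the distro;
the signature shown here is an approximation:

    package pipeline // illustrative package name

    // distro is a narrowed, illustrative view of the distro interface: it can
    // report the default image size in bytes for a given output type.
    type distro interface {
        GetSizeForOutputType(outputType string) uint64
    }

    // imageSize returns the requested size, falling back to the distro's
    // default for the output type instead of leaving it at 0.
    func imageSize(d distro, outputType string, requested uint64) uint64 {
        if requested == 0 {
            return d.GetSizeForOutputType(outputType)
        }
        return requested
    }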
This is unused for now, but will allow us to generate pipelines with
the pre-depsolved NEVRAs, so osbuild does not need to depsolve again.
Signed-off-by: Tom Gundersen <teg@jklm.no>