Since the recent commit (04db4fa) we are no longer a provider of
lorax-composer. This was needed because, when installing cockpit-composer,
which depends on lorax-composer, dnf chose us as the lorax-composer provider
instead of the original lorax-composer package. As we are not yet ready to
fully replace lorax-composer, this broke the cockpit-composer tests.
This commit is the first part of a long-term fix. Osbuild-composer is now
a provider of weldr. Soon, lorax-composer will also be a provider of
weldr. After that, cockpit-composer will be switched to depend on
weldr and to suggest lorax-composer, which will force dnf
to use lorax-composer as the default weldr backend. Experimenting with
osbuild-composer will still be possible, because it will also be a
provider of weldr, so there will be no need to install lorax-composer
to satisfy cockpit-composer's dependencies.
We're not fully compatible with lorax-composer's API yet, and we don't
provide lorax-composer's systemd unit files.
As a result, cockpit-composer's integration tests fail, because `dnf
install lorax-composer` on Fedora can result in installing
osbuild-composer in some cases.
Prior to this patch, `make rpm` would produce rpms that have the latest
tag as their versions. This was confusing, because one could never know
which contents are in a locally built rpm.
Change this so that the version is always based on the commit hash of
HEAD. This is easy: the golang macros read a `%commit` macro when it
exists and do this for us.
To simplify further, only define `%_topdir` as ./rpmbuild and otherwise
use rpmbuild's known directory structure (SPECS, SOURCES, RPMS, ...),
to make it easier to find build results.
Build the specfile, tarball, source rpms, and rpms with `make rpm`,
without separate sub-targets. We can reintroduce them if they're needed
somewhere.
Also remove the `check-working-directory` target. It should be clear
from the output that only the currently-committed files are included,
because the resulting tarball and rpms contain the commit hash. Without
the check, one can work on the Makefile without having to commit all the
time, for example ;)
If the user creates a new blueprint with no version specified, the
blueprint struct uses "0.0.0" as the default version. Blueprint tests
for a blueprint with an empty version now expect no error.
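As a rough sketch (the real struct has more fields; names here are simplified
and not the exact code):

    // Blueprint is a trimmed-down stand-in for the real blueprint struct.
    type Blueprint struct {
        Name    string `json:"name"`
        Version string `json:"version,omitempty"`
    }

    // setDefaults is illustrative only: an unspecified version defaults
    // to "0.0.0" instead of being rejected.
    func (b *Blueprint) setDefaults() {
        if b.Version == "" {
            b.Version = "0.0.0"
        }
    }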
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
For now, this simply wraps Pipeline and Sources, and returns the
resulting manifest object. In the future, Pipeline and Sources
may be dropped from the interface.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A manifest is simply a struct containing a sources object and a pipeline
object. We want to always store and transfer pipelines together with their
sources, and will use the manifest for this.
When serialized, a manifest can be the input to `osbuild`, just
like a bare pipeline can be. This means there will be no need
to pass in sources separately on the commandline.
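Sketched roughly, with json.RawMessage standing in for the real pipeline and
sources types just to keep the example self-contained:

    import "encoding/json"

    // Manifest pairs a pipeline with the sources it needs. Serialized,
    // it can be passed to `osbuild` just like a bare pipeline.
    type Manifest struct {
        Sources  json.RawMessage `json:"sources"`
        Pipeline json.RawMessage `json:"pipeline"`
    }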
Signed-off-by: Tom Gundersen <teg@jklm.no>
Unify the GitHub Actions workflows under `tests.yml` and add an RPM build
job to match the one for osbuild.
Signed-off-by: Major Hayden <major@redhat.com>
We were verifying two things: whether the passed distroArg exists in the
distribution mapping in common/types.go, and whether it is an actually
registered distro. Since you cannot have distros registered that don't
correspond to a type, the first test is unnecessary.
Merge the two tests by moving the (much better) error message down into
the second test. This makes DistributionExists redundant, because
Registry.GetDistro() checks this implicitly.
Also, move ListDistributions() to the Registry object, because we want
to show distributions that are actually registered.
Add a test which checks that Registry.List() works and that all included
distributions register correctly.
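A minimal sketch of such a test; the exact registry API may differ, and
newTestRegistry is a hypothetical helper that constructs the registry:

    func TestAllDistrosRegister(t *testing.T) {
        registry := newTestRegistry(t) // hypothetical helper
        names := registry.List()
        if len(names) == 0 {
            t.Fatal("no distributions registered")
        }
        for _, name := range names {
            if registry.GetDistro(name) == nil {
                t.Errorf("distro %q is listed but not registered", name)
            }
        }
    }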
systemd >= 240 sets this variable to `/var/cache/` + the value of
CacheDirectory. osbuild-composer must run on earlier versions though
(specifically RHEL 8.2).
This changes it to an int pointer so that the JSON output will be null.
This means it needs to be checked for nil or for 0 in Go.
0 is not a valid revision in the weldr response; revisions always start at 1
and increment with each new revision tag, so either value is a valid way
to indicate that it isn't set.
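In Go terms, the change looks roughly like this (the type and field names
are illustrative, not the exact code):

    // A plain int always serializes to a number; a pointer can be null.
    type changeV0 struct {
        Revision *int `json:"revision"` // nil marshals to null
    }

    // Callers treat both nil and 0 as "no revision set".
    func revisionSet(rev *int) bool {
        return rev != nil && *rev != 0
    }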
This runs tests against a running API server, either lorax-composer or
osbuild-composer, and reports the results to stdout. It uses the
/run/weldr/api.socket unix socket to communicate with the server.
These tests build on the client functions to run integration tests on a
running API server.
They use the reflect package to examine the methods attached to the
checkBlueprintsV0 struct and run the ones whose names start with
'Check', also checking the type signature of the functions and failing
the test if any of them don't match.
This will make it easier to add more checks without needing to add
boilerplate call/registration of the functions in the top level runner.
Just add the new function with the right name and signature and it will
be run when checkBlueprintsV0.Run() is called.
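Roughly, the runner looks like this; the argument type and exact details
are assumptions based on the description above, not the real code:

    // Run finds every method named Check* and calls it; methods with an
    // unexpected signature fail the test instead of being skipped silently.
    func (c *checkBlueprintsV0) Run(t *testing.T) {
        v := reflect.ValueOf(c)
        for i := 0; i < v.NumMethod(); i++ {
            name := v.Type().Method(i).Name
            if !strings.HasPrefix(name, "Check") {
                continue
            }
            check, ok := v.Method(i).Interface().(func(*testing.T))
            if !ok {
                t.Fatalf("%s does not have the expected signature", name)
            }
            check(t)
        }
    }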
Checks for other API routes should be added to their own modules. There
will be some duplication of the Run function in each, but I think that
it will help keep things more manageable by separating them instead of
putting them all into a single giant Run() call.
Currently the responses are all embedded in the weldr API functions.
They need to be usable by the client helper functions, so I'm copying
them here and giving them names.
A later commit will go through the API and refactor it to use these
instead of the embedded ones.
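For example, a named status response might look like this; the fields follow
the weldr /api/status reply, but treat the struct as an approximation:

    // StatusV0 mirrors the /api/status response so that both the server
    // handler and the client helpers can use the same type.
    type StatusV0 struct {
        API           string   `json:"api"`
        DBSupported   bool     `json:"db_supported"`
        DBVersion     string   `json:"db_version"`
        SchemaVersion string   `json:"schema_version"`
        Backend       string   `json:"backend"`
        Build         string   `json:"build"`
        Messages      []string `json:"msgs"`
    }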
This package will contain functions for communicating with the API that
can be used both in the integration tests and in a future command-line
tool similar to composer-cli.
When possible, the client functions will return the same structures used
by weldr/api.go, which have been exported in weldr/json.go.
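A sketch of the kind of helper the package provides; the function name and
signature are assumptions, not the actual client API:

    // GetStatusV0 fetches /api/status and decodes it into the exported
    // weldr.StatusV0 structure. The *http.Client is expected to be set up
    // with a Transport that dials the weldr unix socket.
    func GetStatusV0(client *http.Client) (weldr.StatusV0, error) {
        var status weldr.StatusV0
        resp, err := client.Get("http://localhost/api/status")
        if err != nil {
            return status, err
        }
        defer resp.Body.Close()
        err = json.NewDecoder(resp.Body).Decode(&status)
        return status, err
    }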
Return errors from all distros' New() functions instead of logging and
returning nil. Also, return errors instead of panicking from
NewRegistry() and NewDefaultRegistry().
These packages (and their tests) shouldn't access the distro package,
because that would create a cyclic dependency.
Also, these packages should only test the objects they expose.
WithSingleDistro() doesn't follow Go's naming convention for creating
objects (New*). Rename it to NewRegistry() and rename the old
NewRegistry() to NewDefaultRegistry().
The idea is that NewRegistry() can be used to create full Registry
objects from outside the package. NewDefaultRegistry() is a convenience
function that creates a Registry with all known distros.
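A minimal sketch of that split, with a simplified Distro interface; the real
signatures and distro names may differ:

    type Distro interface {
        Name() string
    }

    type Registry struct {
        distros map[string]Distro
    }

    // NewRegistry builds a Registry from an explicit list of distros and
    // can be used from outside the package.
    func NewRegistry(distros ...Distro) (*Registry, error) {
        r := &Registry{distros: make(map[string]Distro)}
        for _, d := range distros {
            if _, exists := r.distros[d.Name()]; exists {
                return nil, fmt.Errorf("distro %q registered twice", d.Name())
            }
            r.distros[d.Name()] = d
        }
        return r, nil
    }

    // NewDefaultRegistry would simply call NewRegistry with all known
    // distros (fedora30, rhel82, ...).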
rpmmd looked at the CACHE_DIRECTORY environment variable to set a path
for the dnf repository cache. Aside from being a smelly thing to do
from a library, this breaks osbuild-pipeline and osbuild-dnf-json-tests,
which don't run as systemd services and thus don't have CACHE_DIRECTORY
set.
Explicitly pass the cache directory to rpmmd. Keep using a path based on
CACHE_DIRECTORY for osbuild-composer. Use the user's `.cache` directory
for osbuild-pipeline and a temporary directory for the tests.
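Roughly, the call sites become something like this (helper names are
placeholders for illustration, not the real functions):

    // In osbuild-composer (runs as a systemd service):
    func composerCacheDir() string {
        return filepath.Join(os.Getenv("CACHE_DIRECTORY"), "rpmmd")
    }

    // In osbuild-pipeline (runs as a regular user):
    func pipelineCacheDir() string {
        dir, err := os.UserCacheDir()
        if err != nil {
            panic(err)
        }
        return filepath.Join(dir, "osbuild-pipeline")
    }

    // Either result is then passed explicitly to rpmmd, instead of rpmmd
    // reading CACHE_DIRECTORY itself.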
We want depsolving via dnf-json, followed by rpm installation, to be
the same as installing directly with dnf. However, the `install_set()`
helper we used inserts the list of packages into a set internally
before returning it to us to iterate. Set iteration order is not
FIFO in Python, and because the order of package installation
in rpm is only a partial order, we ended up with different images
depending on whether we installed through dnf or directly via rpm.
To avoid the indirection via a set, open-code `install_set()` without
the intermediate allocation.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Prior to this commit, installing the worker sub-package showed the following
warning:
Failed to preset unit: Unit file osbuild-worker@.service does not exist.
Moving the unit file to the sub-package fixes it.
Images can now be built for RHEL 8.1. The pipeline generation and distro
tests are based on the RHEL 8.2 ones. Repository information has been
added for RHEL 8.1. The repo URLs are internal ones and will only work
if the user is on the Red Hat VPN.
The ./test/run test suite has served us well over the last months. However,
there is currently a major effort to run the better-defined integration
test suite in CI. Nonetheless, two very important parts are still missing
from the integration test suite: inspecting the image with image-info
and booting the image. This commit begins the work on this by porting
a part of the ./test/run suite to Go. Currently, only the image-info tests
work; the rest will come in the following commits.
If no packages are included in a blueprint, the slice remains `nil`,
which translates to `null` in JSON. Always initialize it to an empty
(non-nil) slice instead.
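The fix in Go terms, sketched with simplified types:

    // Simplified types for illustration.
    type pkg struct {
        Name string `json:"name"`
    }
    type blueprint struct {
        Packages []pkg `json:"packages"`
    }

    // A nil slice marshals to "packages": null; an empty one to "packages": [].
    func ensurePackages(b *blueprint) {
        if b.Packages == nil {
            b.Packages = []pkg{}
        }
    }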
This can happen when CacheDirectory= is missing from the service file.
That's unlikely to happen, but it's hard to figure out what caused the
failure when it does. Be explicit and panic.
This changes osbuild-composer's behavior to match lorax-composer when
encountering invalid versions. Instead of leaving them as-is, it will
return a BlueprintsError explaining the problem, e.g.:
"errors": [
{
"id": "BlueprintsError",
"msg": "Invalid 'version', must use Semantic Versioning: is not in dotted-tri format"
}
]
This is enforced on new blueprints (including the workspace). If a
previously stored blueprint has an invalid version and a new one is
pushed, the new version number is used instead of trying to bump the
invalid one.
This also moves the version bump logic into blueprint instead of store,
and adds an Initialize function that will make sure that the blueprint
has sane default values for any missing fields.
This includes tests for the Initialize and BumpVersion functions.
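A sketch of the bump logic, assuming the github.com/coreos/go-semver
package; the real function may differ in detail:

    // BumpVersion bumps the patch level when a newly pushed blueprint
    // reuses the stored version; an invalid stored version is ignored
    // and the new version is kept as-is.
    func (b *Blueprint) BumpVersion(old string) {
        parsed, err := semver.NewVersion(old)
        if err != nil {
            return
        }
        if b.Version == old {
            parsed.BumpPatch()
            b.Version = parsed.String()
        }
    }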
This makes the store's PushBlueprint* functions return errors, and adds
handling of those errors to the API code, in preparation for new code
that checks the blueprint before saving it.