This commit adds a NewDefault() method to distroregistry that returns a registry
with all distributions supported by osbuild-composer. This way, there's only
one place where a distribution needs to be defined when its support
is added to composer.
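For illustration, a heavily simplified sketch of the idea; the types and
constructors below are assumptions, not the actual composer code:

// Sketch of the idea behind NewDefault(); the real Registry and distro
// types live in the distro and distroregistry packages and differ in detail.
package distroregistry

type Distro interface {
    Name() string
}

type Registry struct {
    distros map[string]Distro
}

func New(distros ...Distro) *Registry {
    r := &Registry{distros: make(map[string]Distro)}
    for _, d := range distros {
        r.distros[d.Name()] = d
    }
    return r
}

// NewDefault is the single place where supported distributions are listed;
// in composer this would call the constructors of the individual distro
// packages (fedora33, rhel84, ...).
func NewDefault() *Registry {
    return New( /* fedora33.New(), rhel84.New(), ... */ )
}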
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
My goal is to add a method to distroregistry that returns a Registry with
all supported distributions. This way, all supported distributions
would be defined in only one place.
To achieve this, the Registry must live outside the distro package: the
individual distro implementations depend on the distro package, so building
the default registry there would require it to import them back, creating
a circular dependency, which Go does not allow.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The image definition is shared with the latest RHEL 8.y one (8.4 currently).
I expect that with the introduction of 8.5 support, we will point the CentOS 8
distro at it.
The test repositories and manifests use the official CentOS composes. From
what I can tell, they are persistent. This is not guaranteed though, so we
might need to switch to RPMRepo at some point.
The "classic" CentOS 8 should also be buildable but due to the chicken and egg
issue (this commit will get into Centos "8.4" but Centos "8.4" isn't a thing
yet), we cannot test it and therefore it might be broken.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The previous code smelled a bit (e.g. the Server.server field), so I decided
to rewrite it in the style of the much nicer koji server.
Not a functional change.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
cockpit-composer can now build RHEL 8.4 images. Our distro name for
RHEL 8.4 is rhel-84, unlike prior RHEL releases, which fall
under the umbrella name rhel-8. RHEL 8.4 still uses the same
repos as the rest of the RHEL 8 releases but points to a different
nightly repo for testing purposes. Test cases are added. The changes
between RHEL 8.3 and 8.4 are as follows:
There is now a hybrid boot partition scheme for x86_64. x86_64 images
now use UEFI boot and have 3 GPT partitions: a small unformatted
partition for MBR compatibility, an EFI boot partition of type vfat, and
a root partition of type xfs. The packages grub2-efi-x64 and shim-x64
are added as bootloader packages for all x86_64 images.
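To make the layout concrete, here is a small hedged sketch; the type, field
names, and sizes are illustrative and not taken from composer's
partition-table code:

// Hypothetical types only illustrating the layout described above; the
// actual partition-table definitions in composer use different names,
// and the sizes here are made up.
package main

import "fmt"

type partition struct {
    Label      string
    Role       string
    Filesystem string // "" means unformatted
    SizeMiB    uint64
}

func main() {
    hybridBoot := []partition{
        {Label: "bios-boot", Role: "MBR/BIOS compatibility", Filesystem: "", SizeMiB: 1},
        {Label: "efi", Role: "EFI system partition", Filesystem: "vfat", SizeMiB: 100},
        {Label: "root", Role: "root filesystem", Filesystem: "xfs", SizeMiB: 2048},
    }
    for _, p := range hybridBoot {
        fmt.Printf("%-10s %-25s fs=%-4s %5d MiB\n", p.Label, p.Role, p.Filesystem, p.SizeMiB)
    }
}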
For qcow2 images, `ro` is added as a kernel option and the following
packages are added (+) or removed (-):
+ dosfstools
+ efi-filesystem
+ efivar
+ efivar-libs
+ grub2-efi-x64
+ shim-x64
- rhn-client-tools
- rhnlib
- rhnsd
- rhn-setup
Everybody hates the local workers. The first step in getting rid of them
is to split their socket out of osbuild-composer.socket: we need to keep
that one to support the Weldr API, but the local worker socket can live in
its own file.
The behaviour should be the same for now: osbuild-composer.service always
starts the local worker socket.
However, this split allows the osbuild-composer executable to be run without
the Weldr API activated. The following commit explores this option in more
depth.
Note that the new socket can be used by root only because workers are always
run as root.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Soon, we want to begin tagging the jobs with the name of their submitter.
The simplest way to add a tag to a job is to put it into its type string.
However, as we don't know (and don't want to know) the submitters' names when
osbuild-composer is initialized, we need to be able to push arbitrary job
types into the jobqueue.
This commit therefore lifts the restriction that a jobqueue accepts only
a predefined set of job types. Now, jobqueue clients can push jobs of
arbitrary names.
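A simplified sketch of what this could look like; the queue type and the
Enqueue signature below are assumptions, not the real jobqueue API:

// Simplified sketch of a jobqueue that accepts arbitrary job types;
// the real interface in composer has more parameters and error handling.
package main

import (
    "encoding/json"
    "fmt"
)

type job struct {
    Type string
    Args json.RawMessage
}

type JobQueue struct {
    jobs []job
}

// Enqueue no longer validates Type against a predefined set, so callers
// are free to embed e.g. the submitter's name in the type string.
func (q *JobQueue) Enqueue(jobType string, args interface{}) error {
    raw, err := json.Marshal(args)
    if err != nil {
        return err
    }
    q.jobs = append(q.jobs, job{Type: jobType, Args: raw})
    return nil
}

func main() {
    var q JobQueue
    // Hypothetical tagged job type: "osbuild:" plus the submitter's name.
    _ = q.Enqueue("osbuild:alice", map[string]string{"distro": "rhel-84"})
    fmt.Println(q.jobs[0].Type)
}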
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Now that all interaction with the koji API happens in the workers,
we can drop the koji configuration from composer itself. This means
that composer no longer needs to be provisioned with kerberos
credentials, and does not need to know about which koji servers
the workers support.
Most of the worker API is now untyped, but Enqueue() is kept typed to
ensure the job objects match the names in the queue. This means we
must add a version of Enqueue() for each job type we support.
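For illustration, a sketch of such typed wrappers; the job types and
signatures shown are assumptions based on this description, not the actual
worker code:

// Sketch (names illustrative): keeping Enqueue typed per job type so the
// argument type and the job-type string cannot get out of sync.
package worker

type Server struct {
    // enqueue pushes an arbitrary job type into the (untyped) job queue.
    enqueue func(jobType string, args interface{}) (uint64, error)
}

type OSBuildJob struct{ Manifest []byte }

type OSBuildKojiJob struct {
    Manifest []byte
    Server   string
}

// One typed wrapper per supported job type.
func (s *Server) EnqueueOSBuild(job *OSBuildJob) (uint64, error) {
    return s.enqueue("osbuild", job)
}

func (s *Server) EnqueueOSBuildKoji(job *OSBuildKojiJob) (uint64, error) {
    return s.enqueue("osbuild-koji", job)
}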
We no longer release into F31, and the right specfile wasn't being tested
anyway.
This allows us to remove a workaround that updates the VMs during
deploy, and other fedora-31 specific hacks.
This removes the osbuild-composer-cloud package, binary, systemd units,
the (unused) test binary, and the (only-run-on-RHEL) test in aws.sh.
Instead, move the cloud API into the main package, using the same
socket as the koji API, osbuild-composer-api.socket. Expose it next to
the koji API on route `/api/composer/v1`.
This is a backwards incompatible change, but only of the -cloud parts,
which have been marked as subject to change.
Instead, call it osbuild-composer-api.socket, but provide a symlink for
backwards compatibility. Change `schutzbot/provision.sh` to only enable
osbuild-composer-api.socket.
In the future, this new socket will be the only API socket, providing
both the "cloud" API and the koji one.
This means that the koji API is always enabled.
Fedora 33 images can now be built and test cases are added for the new
images. The Fedora 33 qcow2 and vmdk images are based on the
official images and their kickstarts found here:
https://pagure.io/fedora-kickstarts. The Fedora 33 IoT image is based
on the config found here: https://pagure.io/fedora-iot/ostree.
The openstack, azure, and amazon image types are changed
based on the changes made to the qcow2. The changes between Fedora
32 and Fedora 33 are as follows:
Grub now loads its kernel command line options from
/etc/kernel/cmdline, /usr/lib/kernel/cmdline, and /proc/cmdline instead
of from the grub env. This is addressed by adding kernelCmdlineStageOptions
to use osbuild's kernel-cmdline stage to set these options. Alongside
`ro biosdevname=0 net.ifnames=0`, we also set `no_timer_check
console=tty1 console=ttyS0,115200n8`, matching what is set in the official
qcow2. For azure and amazon, the kernelOptions are still set as they
were in Fedora 32.
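For reference, a hedged sketch of the options fed to that stage; the JSON
field names are assumed from the stage schema and the UUID is a placeholder:

// Sketch of composing options for osbuild's org.osbuild.kernel-cmdline
// stage (JSON field names assumed from the stage schema).
package main

import (
    "encoding/json"
    "fmt"
)

type KernelCmdlineStageOptions struct {
    RootFsUUID string `json:"root_fs_uuid,omitempty"`
    KernelOpts string `json:"kernel_opts,omitempty"`
}

func main() {
    opts := KernelCmdlineStageOptions{
        RootFsUUID: "76a22bf4-f153-4541-b6c7-0332c0dfaeac", // placeholder root filesystem UUID
        KernelOpts: "ro biosdevname=0 net.ifnames=0 no_timer_check console=tty1 console=ttyS0,115200n8",
    }
    b, _ := json.MarshalIndent(opts, "", "  ")
    fmt.Println(string(b))
}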
The timezone is now set to UTC if a user does not set a timezone in the
blueprint customizations. Also, the hostname is set to
localhost.localdomain if the hostname isn't set in the blueprint.
Finally, the following packages have been removed:
polkit
geolite2-city
geolite2-country
zram-generator-defaults
Split the actual service into its own type `Composer` in composer.go.
main.go now (more or less) only collects configuration from the
environment and the file system, as well as activation file descriptors.
Aside from making the code easier to grok, this is a first step towards
running composer in a different environment than the one set up by
systemd.
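A rough sketch of the resulting shape, with made-up names; the real
Composer type and its methods differ:

// Sketch of the split (names are assumptions): main.go collects
// configuration and activated listeners, composer.go runs the service.
package main

import (
    "log"
    "net"
)

type ComposerConfig struct{ StateDir string }

type Composer struct {
    config    *ComposerConfig
    listeners map[string]net.Listener
}

func NewComposer(cfg *ComposerConfig, listeners map[string]net.Listener) *Composer {
    return &Composer{config: cfg, listeners: listeners}
}

func (c *Composer) Start() error {
    // Start the weldr/worker/koji APIs on whichever sockets were passed in.
    return nil
}

func main() {
    cfg := &ComposerConfig{StateDir: "/var/lib/osbuild-composer"}
    listeners := map[string]net.Listener{} // e.g. from systemd socket activation
    if err := NewComposer(cfg, listeners).Start(); err != nil {
        log.Fatal(err)
    }
}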
The default values of fields in both ComposerConfig.Koji and
ComposerConfig.Worker are well-suited for how they're used.
The nil-checks in main.go only checked that the sections exist. This is
quite a weak check for validity, because the sections could be empty. If
anything is required for composer to function, we could add proper
validation in the future.
Do the same for the CA fields, which contain file names. Go has lots of
precedent for using empty strings to denote "no value" in the standard
library. Use it for CA files, too, instead of pointers.
The configuration file is API. Let's give it a bit more prominence to
help people treat it as such, and a chance to test it. A basic test is
included in this commit.
Also, this cuts down on the noise in main.go a bit.
Prior to this commit, the /etc/osbuild-composer/ca-crt.pem certificate was
used as the authority to validate client certificates.
After this commit, the host's trusted certificates are used to do
the validation. The ability to override this behaviour is also introduced:
in the osbuild-composer config file, under the koji and worker sections, a new
ca option is now available. If set, osbuild-composer uses it as the path
to the certificate used to validate client certificates instead of the
default ones.
With this feature, it's possible to restore the validation behaviour
used before this change. Just put the following lines in
/etc/osbuild-composer/osbuild-composer.toml:
[koji]
ca = "/etc/osbuild-composer/ca-crt.pem"
[worker]
ca = "/etc/osbuild-composer/ca-crt.pem"
This commit introduces a new test binary responsible for testing TLS
authentication.
Currently, it covers both the remote worker API and the Koji API. It tests that
the server refuses certificates issued by an untrusted CA or self-signed ones.
Also, it tests that the certificate is issued for an allowed domain.
TODO: certs with a subject alternative name are currently not used in the tests.
They should work just fine, but proper testing requires more tinkering with
OpenSSL than I'm willing to accept at this time.
This commit adds a domain allowlist which works the same way as the one
for remote workers.
To accept just w1.osbuild.org and w2.osbuild.org, use:
[koji]
domain_allowlist = [ "w1.osbuild.org", "w2.osbuild.org" ]
Prior to this change, the structure was the following:
[koji.localhost.kerberos]
This change modifies it to:
[koji.servers.localhost.kerberos]
This allows us to put more config options under the koji section. See the
following commits, which use this new possibility.
There's a need to control which certificates to accept. This commit introduces
a domain allowlist. The basic idea is that composer accepts only
certificates issued to the domain names specified in the osbuild-composer
config file.
It allows multiple domains to be specified.
To accept just w1.osbuild.org and w2.osbuild.org, use:
domain_allowlist = [ "w1.osbuild.org", "w2.osbuild.org" ]
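A minimal sketch of how such a check can be hooked into the TLS handshake
with the standard library; the exact wiring in composer differs, this only
illustrates the idea:

// Sketch: reject client certificates whose names are not on the allowlist.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
)

func allowlistVerifier(allowed map[string]bool) func([][]byte, [][]*x509.Certificate) error {
    return func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
        for _, chain := range verifiedChains {
            leaf := chain[0]
            // Accept the certificate if its common name or any SAN is allowed.
            names := append([]string{leaf.Subject.CommonName}, leaf.DNSNames...)
            for _, name := range names {
                if allowed[name] {
                    return nil
                }
            }
        }
        return fmt.Errorf("client certificate not issued for an allowed domain")
    }
}

func main() {
    allowed := map[string]bool{"w1.osbuild.org": true, "w2.osbuild.org": true}
    cfg := &tls.Config{
        ClientAuth:            tls.RequireAndVerifyClientCert,
        VerifyPeerCertificate: allowlistVerifier(allowed),
    }
    _ = cfg
}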
Follow the worker API so we standardise on one library. This simplifies
the code quite a bit.
No functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
In the same way we require authentication for the worker API, require
clients of the koji API to authenticate using SSL client certificates.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add a systemd socket for the Koji API. If enabled when osbuild-composer.service
is started, the service will also listen on the socket and serve the Koji API
there.
Note that the Koji API doesn't upload to Koji yet; this still needs to be
hooked up.
Based on a patch from Tom Gundersen, thanks!
We need the same RPMs to work equally well on a host running a beta
release (pulling beta content) as on a machine running GA (pulling GA
content). Detect this at run-time and point at the right repository.
Testing this is a bit hairy as we are building 8.3 images, but obviously
there is currently no 8.3 content at the GA URLs.
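Purely as an illustration of the idea, a hedged sketch; the detection method
(reading /etc/os-release) and the URLs are assumptions, not necessarily what
composer does:

// Illustrative sketch only: pick beta or GA repository URLs based on
// whether the host os-release identifies itself as a Beta release.
// The detection method and URLs are assumptions, not composer's code.
package main

import (
    "fmt"
    "os"
    "strings"
)

func isBetaHost() bool {
    data, err := os.ReadFile("/etc/os-release")
    if err != nil {
        return false
    }
    return strings.Contains(string(data), "Beta")
}

func baseURL() string {
    if isBetaHost() {
        return "https://example.com/rhel-8-beta/BaseOS" // hypothetical beta URL
    }
    return "https://example.com/rhel-8/BaseOS" // hypothetical GA URL
}

func main() {
    fmt.Println("using repository:", baseURL())
}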
Signed-off-by: Tom Gundersen <teg@jklm.no>
The osbuild-composer-rcm package was never finished, is not in use, and will be replaced by osbuild-composer-koji.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Previously, all the osbuild-composer tools had to be run from a directory
containing dnf-json. This was often confusing, especially with the
dnf-json-tests. This commit changes the path to be absolute, so this is no
longer an issue.
RPMMD had a hardcoded path to the dnf-json helper. This required all executables
using RPMMD to be run in the directory where dnf-json was located. This commit
makes RPMMD take the path to dnf-json as an argument. This allows its
consumers to specify whichever path they want.
Not a functional change
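For illustration, a sketch of the new shape; the names and the installed path
in the comment are illustrative, not the actual rpmmd code:

// Sketch (names illustrative): RPMMD now takes the path to dnf-json as an
// argument instead of assuming it lives in the current working directory.
package rpmmd

import "os/exec"

type RPMMD struct {
    cacheDir    string
    dnfJsonPath string
}

func NewRPMMD(cacheDir, dnfJsonPath string) *RPMMD {
    return &RPMMD{cacheDir: cacheDir, dnfJsonPath: dnfJsonPath}
}

func (r *RPMMD) runDNFJson(args ...string) *exec.Cmd {
    // Callers can now run from any directory, e.g. with
    // NewRPMMD(cache, "/usr/libexec/osbuild-composer/dnf-json").
    return exec.Command(r.dnfJsonPath, args...)
}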
The `jobs/:job_id/builds/:build_id/image` route was awkward: the
`:job_id` was actually weldr's compose id and `:build_id` was always `0`.
Change it to `jobs/:job_id/artifacts/:name`, where `:job_id` is now a
job id, and `:name` is the name of the artifact to upload. In the
future, it could support uploading more than one artifact.
This allows removing outputs from `store`, which is now back to being a
pure JSON-store. Make sure that `weldr` returns (and deletes) images
from the new (or, for backwards compatibility, the old) location.
The `org.osbuild.local` target continues to exist as a marker for the
worker to know whether it should upload artifacts.
As it turns out, the default expectation is not to distinguish between
these. We will now produce whatever is the most recent minor release by
default, and image tests will still be pinned at a given snapshot to be
reproducible.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This reduces the amount of resolving and error checking we have to do.
This exposed a bug in weldr's ComposeEntry type, which will be fixed in
a follow-up commit.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This makes the queue more type-safe and allows us to get rid of the
`pendingChannel` and `pendingChannels` helpers, which only existed to
create not-yet-existing pending channels.
The store is responsible for two things: user state and the compose queue. This
is problematic, because the rcm API has slightly different semantics from weldr
and uses only the queue part of the store. Also, the store is simply too
complex.
This commit splits the queue part out, using the new jobqueue package in both
the weldr and the rcm package. The queue is saved to a new directory `queue/`.
The weldr package now also has access to a worker server to enqueue and list
jobs. Its store continues to track composes, but the `QueueStatus` for each
compose (and image build) is deprecated. The field in `ImageBuild` is kept for
backwards compatibility for composes which finished before this change, but a
lot of code dealing with it in package compose is dropped.
store.PushCompose() is degraded to storing a new compose. It should probably be
renamed in the future. store.PopJob() is removed.
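A hedged sketch of the shape of the new job queue; the method set and
signatures are assumptions, not the actual interface:

// Rough sketch of the split-out job queue (method set is an assumption);
// both the weldr and the rcm package talk to the queue through something
// like this instead of going through the store.
package jobqueue

import "github.com/google/uuid"

type JobQueue interface {
    // Enqueue adds a job and returns its (compose-independent) id.
    Enqueue(jobType string, args interface{}) (uuid.UUID, error)
    // Dequeue blocks until a job of one of the given types is available.
    Dequeue(jobTypes []string, args interface{}) (uuid.UUID, string, error)
    // FinishJob marks a job as done and stores its result.
    FinishJob(id uuid.UUID, result interface{}) error
}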
Job ids are now independent of compose ids. Because of that, the local
target gains ComposeId and ImageBuildId fields, because a worker cannot
infer those from a job anymore. This also necessitates a change in the
worker API: the job routes are changed to expect a job id instead of a
(compose id, image build id) pair. The route that accepts built images
keeps that pair, because it reports the image back to weldr.
worker.Server() now interacts with a job queue instead of the store. It gains
public functions that allow enqueuing an osbuild job and getting its status,
because only it knows about the specific argument and result types in the job
queue (OSBuildJob and OSBuildJobResult). One oddity remains: it needs to report
an uploaded image to weldr. Do this with a function that's passed in for now,
so that the dependency on the store can be dropped completely.
The rcm API drops its dependencies to package blueprint and store, because it
too interacts only with the worker server now.
Fixes #342
This package does not contain an actual queue, but a server and client
implementation for osbuild's worker API. Name it accordingly.
The queue is in package `store` right now, but about to be split off.
This rename makes the `jobqueue` name free for that effort.
weldr needs to know the host architecture. Rather than pinning
a string, pin a real Arch object, and query its name when we
need it.
This verifies the validity of the architecture for the given
distro before it is passed to weldr, rather than lazily on
demand.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Require the caller to pass in the required distros explicitly. This
would allow us to easily add distros in osbuild-pipeline and tests
before exposing them in composer itself, for instance.
This means there is no longer a dependency from the distro package
to each of the individual distros, so the distros are now able
to depend on the distro package for types and interfaces.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Mixing the way to build a distribution with where to get the source
packages from is wrong: it breaks pre-release repos, local mirrors, and
other use cases. To accommodate those, we introduced
`/etc/osbuild-composer/repositories`.
However, that doesn't work for the RCM API, which receives repository
URLs to use from outside requests. This API has been wrongly using the
`additionalRepos` parameter to inject those repos. That's broken,
because the resulting manifests contained both the installed repos and
the repos from the request.
To fix this, stop exposing repositories from the distros, but require
passing them on every call to `Manifest()`. This makes `additionalRepos`
redundant.
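A simplified sketch of the resulting shape; the types and the exact
`Manifest()` signature here are assumptions, the real interface takes more
arguments:

// Sketch (simplified): repositories are now an explicit argument to
// Manifest() instead of being baked into each distro.
package distro

type RepoConfig struct {
    Name    string
    BaseURL string
}

type Manifest []byte

type ImageType interface {
    // Manifest builds an osbuild manifest for this image type using only
    // the repositories passed in by the caller (weldr, RCM, ...).
    Manifest(customizations interface{}, repos []RepoConfig) (Manifest, error)
}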
Fixes #341
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed Store.Distro.
`ComposeRequest` included a `common.Distribution`, which had to be
resolved in PushComposeRequest. Use a real `distro.Distro` object here,
and push resolving it to the rcm package.
Change the `Distribution` on the (lower-case) `composeRequest` to a
string. This struct represents the incoming request. Since we're now
resolving the real distro object from the registry in the same function,
it seems redundant to validate the incoming distro twice.