Support for creating multiple AMIs from a single compose. It uses the
AWSEC2* jobs to push images to new regions and share them with new
accounts.
The compose it depends upon has to have succeeded.
See https://github.com/BurntSushi/toml/issues/360
A recent change in BurntSushi/toml made encoding panic (later changed
to return an error) if a struct is marked as omitempty but is not
comparable. Go docs about equality: https://go.dev/doc/go1#equality.
Basically: a struct is comparable only if all of its fields are
comparable, and slices are not comparable.
Customizations are marked as omitempty but they contain a lot of
slices, thus they are not comparable. The new version of
BurntSushi/toml therefore panics when we encode them.
The solution is to remove the omitempty tag from Customizations.
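A minimal sketch of the comparability rule, using a hypothetical
stand-in type rather than the real Customizations:

```go
package main

import (
	"fmt"
	"reflect"
)

// customizations is a simplified stand-in for the real type: it
// contains slices, so the struct type as a whole is not comparable.
type customizations struct {
	Packages []string `toml:"packages"` // note: no omitempty tag
}

func main() {
	// Slices are not comparable, so neither is the struct; this is
	// what the omitempty handling in BurntSushi/toml trips over.
	fmt.Println(reflect.TypeOf(customizations{}).Comparable()) // false
}
```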
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
When running an osbuild job, we read `/etc/redhat-release` to get the
host OS name to attach as metadata to the job result.
Only Fedora and RHEL ship this file, which makes the osbuild job always
fail on other distributions.
The main reason to report host OS back to the worker server is due to
Koji composes and the koji-finalize job, which pushes it to Koji. The
motivation is to have enough information to potentially re-instantiate
or identify the original builder host OS. There are no specific
requirements on the string.
Modify the code to use `/etc/os-release` to determine the host OS.
Fall back to using `linux` as the host OS in case reading `os-release`
fails; log the error and continue with the job. The `linux` fallback is
suggested by the `os-release` spec [1]
[1] https://www.freedesktop.org/software/systemd/man/os-release.html#ID=
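A sketch of the fallback logic under these assumptions (the helper
name is hypothetical; the real worker code may differ):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// hostOS extracts the ID field from /etc/os-release and falls back to
// "linux", as suggested by the os-release spec, when reading fails.
func hostOS() string {
	data, err := os.ReadFile("/etc/os-release")
	if err != nil {
		// Log the error and continue with the job.
		log.Printf("cannot read os-release, using \"linux\": %v", err)
		return "linux"
	}
	for _, line := range strings.Split(string(data), "\n") {
		if after, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(after, `"`)
		}
	}
	return "linux"
}

func main() {
	fmt.Println(hostOS()) // e.g. "fedora", "rhel", or the fallback
}
```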
Co-authored-by: Achilleas Koutsou <achilleas@koutsou.net>
The search command is more complicated than depsolve and dump. It needs
to return results based on the requested package names and globs.
Add a number of mock responses for the new search command, including
search results, all packages, and error responses that are triggered by
using special package names: nonexistingpkg, badpackage1, baddepsolve.
Instead of fetching all available packages from dnf-json and then
searching the results, this uses SearchMetadata when a package name or
glob is passed to the API. It only uses FetchMetadata when fetching the
full list of packages.
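A sketch of the dispatch; `Solver`, `Package`, and `listProjects` are
hypothetical stand-ins, while the SearchMetadata/FetchMetadata method
names come from the description above:

```go
package search

// Package and Solver are simplified stand-ins for the real dnf-json
// types; only what the sketch needs is shown.
type Package struct {
	Name, Summary string
}

type Solver interface {
	FetchMetadata() ([]Package, error)                // full package list
	SearchMetadata(names []string) ([]Package, error) // name/glob search
}

// listProjects only fetches everything when no package names or globs
// were requested; otherwise it searches for the matches directly.
func listProjects(s Solver, names []string) ([]Package, error) {
	if len(names) == 0 {
		return s.FetchMetadata()
	}
	return s.SearchMetadata(names)
}
```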
This also fixes a bug where the error response to a projects/info
request used the id of 'ModulesError'. It now uses 'ProjectsError'.
Since the oscap remediation stage in osbuild runs the oscap package in
`chroot`, it is necessary to install the `openscap-scanner` package
into the image itself rather than the build root.
The module is not present in the official RHEL-9.1 ISO image and
causes boot issues when used with newer content. HTTP boot is not
affected by this change and works as expected.
Having the GPG check enabled for Google repos in `gce*` images makes
DNF try to import the relevant keys when upgrading, downgrading or
installing any packages from the repo. However, because Google still
uses SHA-1 for the GPG keys used to sign their RPMs, importing them
makes any transaction that includes such an RPM fail.
Disabling the GPG check ensures that DNF won't attempt to import the
Google GPG keys.
Related to https://issuetracker.google.com/issues/223626963
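A sketch of the resulting repo definition; the struct is a simplified
stand-in for the real repository config type and the URL is only
illustrative:

```go
package gce

// repoConfig is a simplified stand-in; only relevant fields are shown.
type repoConfig struct {
	Name     string
	BaseURL  string
	CheckGPG bool
}

// googleRepos returns the Google repo definitions with the GPG check
// disabled, so DNF never tries to import Google's SHA-1-signed keys.
func googleRepos() []repoConfig {
	return []repoConfig{{
		Name:     "google-compute-engine",
		BaseURL:  "https://packages.cloud.google.com/yum/repos/google-compute-engine-el8-x86_64-stable",
		CheckGPG: false, // see the SHA-1 GPG key issue above
	}}
}
```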
The repo is not needed any more, because the Google Cloud SDK is not
installed in the images by default. If anyone wants to install the SDK,
they can add the appropriate repo definition.
The Google SDK ships pre-compiled binaries. It is undesirable to
install it by default in `gce` and `gce-rhui` images in its current
shape. Not installing it also does not affect the RHEL integration as
the guest OS in GCP in any way.
Extract the application of customizations into a utility method on
`PartitionTable`. In order for it to be usable for both the first and
the second pass, it takes a `create` argument that controls whether new
partitions will be created or returned.
Since LVM support was added to all distros, our disk-related code is
adaptive, i.e. we will set the correct BLS and grub2 prefix if a
`/boot` partition is present in the layout after all customizations
happen, which includes LVMification.
One thing that was not yet fully working was layouts that do not have
a `/boot` partition but allow LVMification. In that case, if `/boot`
was the first (or only) customization, `NewPartitionTable` would LVMify
the partition table, which in turn would create the `/boot` partition;
but after `newPT.ensureLVM()`, the call to `newPT.createFilesystem`
with `/boot` would try to create another `/boot` mountpoint.
In order to deal with this situation correctly we now use a two-phase
approach: 1) enlarge existing mountpoints and collect new ones; 2) if
there are new ones and LVMification is allowed, switch to an LVM
layout. Then do a second pass and create new or enlarge existing
partitions, handling `/boot` in the process.
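A compilable sketch of the two-phase flow; all names except
`ensureLVM` and the `create` flag are hypothetical simplifications:

```go
package disk

// Minimal stand-ins for the real types.
type Mountpoint struct{ Path string }

type PartitionTable struct {
	mounts map[string]bool // existing mountpoints, by path
	lvm    bool
}

// applyCustomization is the extracted utility: with create=false it
// enlarges existing mountpoints and returns the missing ones, with
// create=true it creates them (enlarging is elided in this sketch).
func (pt *PartitionTable) applyCustomization(mps []Mountpoint, create bool) []Mountpoint {
	var missing []Mountpoint
	for _, mp := range mps {
		if pt.mounts[mp.Path] {
			continue // already present: would be enlarged here
		}
		if create {
			pt.mounts[mp.Path] = true
		} else {
			missing = append(missing, mp)
		}
	}
	return missing
}

// ensureLVM switches to an LVM layout; creating /boot is a side effect.
func (pt *PartitionTable) ensureLVM() {
	pt.lvm = true
	pt.mounts["/boot"] = true
}

// applyCustomizations runs the two phases: enlarge and collect first,
// LVMify if anything new is needed, then create what is still missing.
// A /boot created by ensureLVM is simply skipped by the second pass.
func (pt *PartitionTable) applyCustomizations(mps []Mountpoint, lvmify bool) {
	missing := pt.applyCustomization(mps, false)
	if len(missing) > 0 && lvmify {
		pt.ensureLVM()
	}
	pt.applyCustomization(missing, true)
}
```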
Replace the simple allow list of paths with the more sophisticated
path policies. It enables us to e.g. allow one path but not any
sub-path. This will be useful for `/boot` where we want to allow
its customization but not any sub-path because that might actually
break booting.
Build a new path policy struct, based on the new path trie struct. It
is designed to store policies for paths. A Check method can then be
used to look up the policy for a given path based on the defined
policies.
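A self-contained sketch of the policy check; the field names are
assumptions and the trie lookup (see the commit below) is simplified
to a longest-prefix scan over a map:

```go
package pathpolicies

import (
	"fmt"
	"strings"
)

// PathPolicy holds the per-path settings (field names are assumed).
type PathPolicy struct {
	Deny  bool // deny the path and everything below it
	Exact bool // allow only the path itself, no sub-paths
}

// Check finds the deepest policy that prefixes path and applies it.
// In practice the policy set would always include a "/" entry.
func Check(policies map[string]PathPolicy, path string) error {
	best := ""
	for p := range policies {
		match := path == p || p == "/" || strings.HasPrefix(path, p+"/")
		if match && len(p) > len(best) {
			best = p
		}
	}
	policy := policies[best]
	if policy.Deny || (policy.Exact && path != best) {
		return fmt.Errorf("path %q is not allowed by policy", path)
	}
	return nil
}
```

With policies like a denying "/" entry plus an Exact "/boot" entry,
this allows `/boot` itself but rejects `/boot/efi`, which is the
`/boot` use case described above.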
Add a simple implementation of a path trie structure that can be used
to look up associated data for any given path. The constructor builds
the trie from a map of paths to associated data. Later modification is
currently not supported. Add tests for its creation and lookup.
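A sketch of such a trie, hedged as a simplification of whatever the
real implementation looks like:

```go
package pathpolicies

import "strings"

// pathTrie nodes may carry associated data; lookup returns the data of
// the deepest node whose path is a prefix of the queried path.
type pathTrie struct {
	data     interface{}
	children map[string]*pathTrie
}

// newPathTrie builds the trie from a map of paths to associated data;
// later modification is not supported.
func newPathTrie(entries map[string]interface{}) *pathTrie {
	root := &pathTrie{children: map[string]*pathTrie{}}
	for path, data := range entries {
		node := root
		for _, elem := range strings.Split(strings.Trim(path, "/"), "/") {
			if elem == "" {
				continue // the root path "/" has no elements
			}
			child, ok := node.children[elem]
			if !ok {
				child = &pathTrie{children: map[string]*pathTrie{}}
				node.children[elem] = child
			}
			node = child
		}
		node.data = data
	}
	return root
}

// lookup walks the trie and returns the most specific data found.
func (t *pathTrie) lookup(path string) interface{} {
	node, best := t, t.data
	for _, elem := range strings.Split(strings.Trim(path, "/"), "/") {
		if elem == "" {
			continue
		}
		child, ok := node.children[elem]
		if !ok {
			break
		}
		node = child
		if node.data != nil {
			best = node.data
		}
	}
	return best
}
```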
Add basic validation to ensure that the oscap customizations are valid
and the required fields have been provided. The validation also ensures
that the manifest generation errors out if the oscap customization has
been enabled for older or unsupported distros.
Add a package with the constants of the valid oscap profiles. Add a
function to validate the available profiles against an allow map of
supported profiles. The function checks for both exact matches and
shorthand versions of the oscap profiles.
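A sketch under these assumptions; the constant, the map contents and
the function name are illustrative:

```go
package oscap

// profilePrefix is the common xccdf prefix shared by the profile IDs.
const profilePrefix = "xccdf_org.ssgproject.content_profile_"

// supported is the allow map; its contents here are only an example.
var supported = map[string]bool{
	profilePrefix + "cis": true,
}

// isProfileAllowed accepts both the exact profile ID and its shorthand
// version (e.g. "cis" instead of the full xccdf ID).
func isProfileAllowed(profile string) bool {
	if supported[profile] {
		return true
	}
	return supported[profilePrefix+profile]
}
```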
Add support for embedding container images via the cloud API. For this
the container resolve job was plumbed into the cloud API's handler, and
the API specification was updated with a new `containers` section that
mimics the blueprint section with the same name.
Dependency errors are not set by the workers and are not set directly
in the job result. They are added by the worker server when the job
error indicates a dependency error.
The ellipsis operator was used as a hack to avoid having to pass any
details as an argument, but it makes it less obvious what the end
object will actually look like. It also makes it impossible to pass an
array as details without getting a nested array.
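A runnable illustration of the nesting problem, using a hypothetical
error constructor:

```go
package main

import "fmt"

// newErrorVariadic mimics the old variadic signature: passing a slice
// to a ...interface{} parameter wraps it in another slice.
func newErrorVariadic(reason string, details ...interface{}) []interface{} {
	return details
}

func main() {
	details := []interface{}{"missing repo", "bad arch"}
	// Passed as a single argument, the slice gets nested:
	fmt.Println(newErrorVariadic("depsolve failed", details))
	// prints: [[missing repo bad arch]]
	// Spreading avoids the nesting, but is easy to forget:
	fmt.Println(newErrorVariadic("depsolve failed", details...))
	// prints: [missing repo bad arch]
}
```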
Fixes #2874
If the user does not pass a name, use the distribution as the name.
A provided tag is used only if a name is provided.
The tag defaults to a generated UUID to avoid collisions.
Worker
------
Add configuration for the default container registry.
Use the default container registry if one is not provided as part of
the image name.
When using the default registry, use the configured values.
Return the image URL as part of the result.
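A sketch of the defaulting rules; the helper name is hypothetical and
detecting a registry by a `/` in the name is a simplification:

```go
package worker

import (
	"strings"

	"github.com/google/uuid"
)

// resolveImageReference applies the naming defaults described above.
func resolveImageReference(name, tag, distro, defaultRegistry string) string {
	if name == "" {
		name = distro // no name given: default to the distribution
		tag = ""      // a provided tag is only used together with a name
	}
	if tag == "" {
		tag = uuid.New().String() // generated tag avoids collisions
	}
	if !strings.Contains(name, "/") {
		// No registry in the image name: use the configured default.
		name = defaultRegistry + "/" + name
	}
	return name + ":" + tag
}
```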
Composer Worker API
-------------------
Add `ContainerTargetResultOptions` to return the image URL.
Composer API
------------
Add UploadOptions to allow setting the image name and tag.
Add UploadStatus to return the URL of the uploaded image.
Co-Developed-By: Christian Kellner <christian@kellner.me>