The AWS and Azure RHUI images are produced as compressed archives, which
can be uploaded to Koji, but they can't be uploaded to the cloud
provider in this format. To support cloud upload for these types of
images, we need to decompress them before the upload.
Add a workaround for the AWS and AzureImage targets to check whether the image
has an `.xz` suffix and, if so, decompress it before uploading it to the
cloud. This workaround is needed until image definitions support multiple
exports per image, which will allow using a different export per upload
target.
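A minimal sketch of the check, assuming the worker shells out to the `xz` tool (the helper name is an assumption):

```go
import (
	"fmt"
	"os/exec"
	"strings"
)

// maybeDecompress returns the path of the uploadable image file,
// decompressing the archive first when it carries an .xz suffix.
func maybeDecompress(imagePath string) (string, error) {
	if !strings.HasSuffix(imagePath, ".xz") {
		return imagePath, nil
	}
	// "xz -d" replaces the archive with the decompressed file, dropping
	// the .xz suffix in the process.
	if output, err := exec.Command("xz", "-d", imagePath).CombinedOutput(); err != nil {
		return "", fmt.Errorf("decompressing the image failed: %v (%s)", err, output)
	}
	return strings.TrimSuffix(imagePath, ".xz"), nil
}
```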
Add support for handling upload options in image requests for Koji
composes. The image is always uploaded to Koji, but now it can
additionally be uploaded to the cloud environment as part of the
build.
The image name used for the Koji image can't be used as-is for uploading to
the cloud, because each cloud provider has its own requirements for
valid characters. For now, let the Cloud API implementation generate a
random image name. The name is always returned in the compose status's
upload status, so it should be possible to attach it to the Koji build
to allow users to find the image.
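A minimal sketch of such a generator, using `github.com/google/uuid` (the prefix and helper name are assumptions):

```go
import "github.com/google/uuid"

// randomImageName generates a name that is valid in all three clouds:
// UUIDs contain only hex digits and hyphens, which AWS, Azure, and GCP
// all accept in image names.
func randomImageName() string {
	return "composer-api-" + uuid.New().String()
}
```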
Enhance the `koji-finalize` job implementation to be able to cope with
multiple upload targets being specified for an `OSBuildJob`.
Implement a convenience method `OSBuildJobResult.TargetResultsByName()`
for filtering the target results attached to the job result by their
name, and cover the method with a unit test. Lastly, use this method in
the `koji-finalize` job to find the appropriate Koji upload target
results.
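A sketch of the method, assuming the existing worker types (the real signature may differ slightly):

```go
import "github.com/osbuild/osbuild-composer/internal/target"

func (j *OSBuildJobResult) TargetResultsByName(name string) []*target.TargetResult {
	targetResults := []*target.TargetResult{}
	for _, targetResult := range j.TargetResults {
		if targetResult.Name == name {
			targetResults = append(targetResults, targetResult)
		}
	}
	return targetResults
}
```

The `koji-finalize` job can then pick out its results with something like `result.TargetResultsByName("org.osbuild.koji")`.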
This is a preparation for enabling cloud uploads for Koji composes.
Enhance the `koji-finalize` job implementation to use a deferred function
to ensure that the job status is always reported back to the composer.
In addition, if the `JobError` is set, also fail the Koji job.
Previously, composer and Koji were not updated in some corner cases when
the job failed.
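A self-contained sketch of the pattern; the worker API is stubbed out and all names are illustrative:

```go
import "log"

type kojiFinalizeResult struct {
	JobError error
}

func runKojiFinalize(update func(*kojiFinalizeResult) error, failKojiBuild func()) error {
	result := &kojiFinalizeResult{}

	// The deferred function runs on every exit path, so composer always
	// receives the final job status; the Koji build is failed whenever
	// a job error was recorded along the way.
	defer func() {
		if result.JobError != nil {
			failKojiBuild()
		}
		if err := update(result); err != nil {
			log.Printf("reporting the job result back to composer failed: %v", err)
		}
	}()

	// ... the actual job body sets result.JobError and returns early on
	// failure ...
	return nil
}
```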
Extend the `koji.sh` test case to also allow testing the upload to the
cloud, in addition to what it currently supports (building
of multiple images in one Koji compose request).
The script now reuses some common functions from the `api.sh` test
case. Once the Koji compose succeeds, the script verifies that the image
is present in the appropriate cloud environment using a CLI tool. No
additional testing of the image is done; it is not booted.
Extend the `tools/koji-compose.py` script to also allow testing the
upload to the cloud, in addition to what it currently supports.
If only the `DISTRO` and `ARCH` arguments are passed to the script, it
submits a new Koji compose with two image requests, as it always did.
If the `CLOUD_TARGET` and `IMAGE_TYPE` arguments are provided in addition
to `DISTRO` and `ARCH`, the script submits a new Koji compose with
a single image request whose upload options are set so that the
image is uploaded to the cloud.
Supported cloud targets are:
- `aws`
- `azure`
- `gcp`
The image types are those accepted by the Cloud API. The script
does not check at all whether the provided combination of cloud target
and image type is valid; it submits whatever it gets to composer.
Modify the `tools/koji-compose.py` script to print all log messages to
STDERR and only the Koji compose ID to STDOUT. This way, the
caller can easily capture the ID of the compose created by the script
and use it later.
Now that we have enabled container embedding on RHEL 8, let's
also test it there.
We also pin it for Fedora and RHEL/CS 9 to be able to use the
new `org.osbuild.containers.storage.conf` stage.
Support adding containers to non-OSTree images. The reason we
don't support OSTree artefacts just yet is that the default storage
location for containers is `/var/lib/containers/storage`. But for
OSTree images all content in `/var` is discarded, since that is
deployment-specific data. We therefore need to store the containers
somewhere else, e.g. `/usr/share/containers/storage`, but then also
need to configure the system to find containers in that location.
osbuild only recently gained the corresponding stage to do so, and
thus this will be done in a follow-up.
Add a new test case that embeds an existing container, stored in our
GitLab CI registry, into a qcow2 image. It uses `image-info` to
verify that the container, with the expected id, is indeed embedded
in the resulting image.
Add support for reporting the installed container images in an image.
NB: this does not use `podman` but reads the overlay storage
directly, and therefore does not currently take additional image
locations or different storage drivers into account. For now this
is not a problem since we don't support any of that.
Add a new `containers` section that can be used to request the
embedding of containers into images. The only requirement is
the source property, which specifies where to fetch the container from.
It supports specifying either the digest of the container or a tag;
if none is given, it defaults to the `latest` tag. The `Name`
field can be used to optionally specify a name to use inside the
image.
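A sketch of how the section could map onto a blueprint type (the exact Go definition is an assumption based on the description above):

```go
type Container struct {
	// Source is required; it may include a tag or a digest and defaults
	// to the "latest" tag when neither is given.
	Source string `json:"source" toml:"source"`

	// Name optionally overrides the name the container gets inside the
	// image.
	Name string `json:"name,omitempty" toml:"name,omitempty"`
}
```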
NB: currently no tools or apis support container resolution yet.
This follows in the next commits.
This is the first step to support embedding container images. Here
we add the `containers []container.Spec` argument to supply image
builds with resolved container specifications. For now, all distros
return an error if a container is actually supplied, since none
of them currently support embedding containers. NB: also, no APIs or
tools actually resolve containers yet.
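A sketch of the guard each distro can use until it gains support (the error wording is illustrative):

```go
import (
	"fmt"

	"github.com/osbuild/osbuild-composer/internal/container"
)

func checkContainers(containers []container.Spec, imageType string) error {
	if len(containers) > 0 {
		return fmt.Errorf("embedding containers is not supported for %q", imageType)
	}
	return nil
}
```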
Add bindings for the `org.osbuild.skopeo` stage, which can be used to
copy container images, accessed via the `org.osbuild.containers` input,
into images.
The constructor is designed with ease of use in mind and takes
the needed container inputs and the storage path option, i.e.
where to store the containers inside the image.
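An illustrative sketch of the constructor shape; all types here are simplified stand-ins for the actual osbuild bindings:

```go
type Stage struct {
	Type    string      `json:"type"`
	Options interface{} `json:"options,omitempty"`
	Inputs  interface{} `json:"inputs,omitempty"`
}

type ContainersInput struct {
	// references to the containers fetched via sources
	References []string `json:"references"`
}

type SkopeoStageOptions struct {
	// DestinationPath is where the containers are stored inside the
	// image, e.g. "/var/lib/containers/storage".
	DestinationPath string `json:"destination-path"`
}

type SkopeoStageInputs struct {
	Images ContainersInput `json:"images"`
}

// NewSkopeoStage keeps the call site simple: the container inputs and
// the in-image storage path are all the caller needs to provide.
func NewSkopeoStage(path string, images ContainersInput) *Stage {
	return &Stage{
		Type:    "org.osbuild.skopeo",
		Options: &SkopeoStageOptions{DestinationPath: path},
		Inputs:  SkopeoStageInputs{Images: images},
	}
}
```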
Add bindings for the `org.osbuild.containers` inputs, which can be used
to supply containers to stages. Currently only fetching containers via
sources is supported.
Add a new type, `container.Resolver`, which can be used to resolve
multiple container images to their respective ids in parallel.
It should make it easy for all existing tools and API endpoints
to adopt container resolution.
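A self-contained sketch of the idea: each `Add` spawns a goroutine and `Finish` collects the results (the real API may differ):

```go
type Spec struct {
	Source  string
	Digest  string
	ImageID string
}

// resolve stands in for the per-image resolution against the registry.
func resolve(source string) (Spec, error) {
	return Spec{Source: source}, nil // stub
}

type result struct {
	spec Spec
	err  error
}

type Resolver struct {
	jobs    int
	results chan result
}

func NewResolver() *Resolver {
	return &Resolver{results: make(chan result)}
}

// Add queues the resolution of one container image in the background.
func (r *Resolver) Add(source string) {
	r.jobs++
	go func() {
		spec, err := resolve(source)
		r.results <- result{spec, err}
	}()
}

// Finish waits for all pending resolutions and returns the specs, or
// the first error that was encountered.
func (r *Resolver) Finish() ([]Spec, error) {
	specs := make([]Spec, 0, r.jobs)
	var firstErr error
	for ; r.jobs > 0; r.jobs-- {
		res := <-r.results
		if res.err != nil && firstErr == nil {
			firstErr = res.err
		} else if res.err == nil {
			specs = append(specs, res.spec)
		}
	}
	if firstErr != nil {
		return nil, firstErr
	}
	return specs, nil
}
```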
Create a small mock container registry to test `Client`.
Currently the registry is read-only and thus cannot be used
for upload tests, but it can and will be used for container
resolution checks.
Add a new `Resolve` method to `Client` that resolves its `Target`
to the corresponding manifest digest and the corresponding image
identifier. The former can be used in the URL to fetch a specific
image from the registry via `<name>@<digest>`, and the latter uniquely
identifies a container image via the hash of its configuration object.
It should stay the same across pulls and is also the id returned by
`podman pull` and `podman images`.
Since (most) container images are OS- and architecture-specific, a tag
often points to a manifest list that contains all available options.
Therefore the resolve operation needs to choose the correct arch for
the image. A new pair of setters, `Set{Architecture,Variant}Choice`, lets
the user control which architecture/variant is selected during the
resolution process.
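A sketch of how the setters fit into a resolution call (the client construction and the `Resolve` signature are assumptions):

```go
import (
	"context"

	"github.com/osbuild/osbuild-composer/internal/container"
)

func resolveForArm64(ctx context.Context, ref string) (container.Spec, error) {
	client, err := container.NewClient(ref)
	if err != nil {
		return container.Spec{}, err
	}
	// Pick the arm64 entry from the manifest list instead of defaulting
	// to the build host's architecture.
	client.SetArchitectureChoice("arm64")
	return client.Resolve(ctx)
}
```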
Ensure that the `Client.AuthFilePath` points to a sane location,
which here means that the location is either accessible by the
current user or does not exist. This is because any other error
opening the auth file will lead to an overall failure when trying
to access container registries, even if the target resource is
public.
The reason we have to set it ourselves is that by default the
containers library looks in a sub-path of `XDG_RUNTIME_DIR`, and if
that variable is not set it falls back to `/run/containers/<uid>`.
Since `XDG_RUNTIME_DIR` is indeed not set for the composer process
started via systemd, it will fall back, but it does not have access
to `/run/containers`, and finding the authorization info for any
request will fail with "permission denied".
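A sketch of the sanity check described above (the helper name is an assumption):

```go
import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// checkAuthFilePath accepts a path that is readable by the current user
// or that does not exist at all; any other error would later break
// registry access even for public resources.
func checkAuthFilePath(path string) error {
	f, err := os.Open(path)
	if err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			return nil
		}
		return fmt.Errorf("cannot read auth file %q: %w", path, err)
	}
	return f.Close()
}
```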
Add a setter so that we can set the `Client.AuthFilePath` to a
different location than the default one.
Instead of keeping an extra field in `Client`, we just use the
existing `sysCtx.DockerAuthConfig` structure. When the context
is later copied during the upload operation, the credentials
are copied as well. It also saves us from syncing the
credentials, since we use said `sysCtx` directly for operations.
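A sketch of the approach; the `Client`'s `sysCtx` field and the setter name are assumptions, while `DockerAuthConfig` comes from `containers/image/v5`:

```go
import "github.com/containers/image/v5/types"

type Client struct {
	sysCtx *types.SystemContext
}

// SetCredentials stores the credentials directly in the SystemContext,
// so every later copy of the context, e.g. for the upload operation,
// carries them along.
func (c *Client) SetCredentials(username, password string) {
	c.sysCtx.DockerAuthConfig = &types.DockerAuthConfig{
		Username: username,
		Password: password,
	}
}
```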
Instead of having an extra field, `TlsVerify`, on the `Client` and
then later setting the corresponding `SystemContext` options, use
the existing `SystemContext` field of `Client`. The corresponding
field is a tri-state (unset, true, false), which is represented as
a pointer to a boolean in the `Client`'s new getter and setter. This
also inverts the boolean logic from "verify TLS" to "skip TLS", which
aligns very well with the corresponding fields in the upload target
struct.
In addition, we properly capitalize some existing variables.
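A sketch of the tri-state mapping, reusing the sketched `Client` from above (`types.OptionalBool` is the tri-state type in `containers/image/v5`; the method names are assumptions):

```go
// SetSkipTLSVerify maps a *bool onto the tri-state context field; nil
// means "unset" and lets the library use its default behavior.
func (c *Client) SetSkipTLSVerify(skip *bool) {
	if skip == nil {
		c.sysCtx.DockerInsecureSkipTLSVerify = types.OptionalBoolUndefined
		return
	}
	c.sysCtx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(*skip)
}

// GetSkipTLSVerify reports the tri-state back as a *bool.
func (c *Client) GetSkipTLSVerify() *bool {
	switch c.sysCtx.DockerInsecureSkipTLSVerify {
	case types.OptionalBoolTrue:
		v := true
		return &v
	case types.OptionalBoolFalse:
		v := false
		return &v
	default: // types.OptionalBoolUndefined
		return nil
	}
}
```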
This prepares for using the `internal/container` package from composer
directly, as opposed to the existing use in the worker. Said package
uses `containers/image/v5`, which uses `proglottis/gpgme`, and
the latter needs the gpgme C library. We therefore install it and
its dependencies.
The Go package `proglottis/gpgme`, a dependency of the `containers/image/v5`
package, uses `libgpgme`. In the near future, `internal/container`, which
depends on `containers/image/v5`, will be used directly in composer, and
thus we need to install the `gpgme` devel package and its build deps.
This test compiles `gen-manifests` via `go run` and thus needs
to pick up build requirements for the source. Instead of manually
installing the Go toolchain, use the `dnf build-dep` command on the
spec file so we pick up current and future build dependencies.
The Koji API removed by the previous commit was the last user of the
osbuild-koji job. Let's remove it since nothing uses it. This also
removes all of the compatibility code in Cloud API; see the concerns below:
Compatibility concerns:
- the internal deployment was moved to a completely different composer
instance, thus there are no old jobs
- the Fedora deployment is still unused in prod, thus we don't care about
keeping backward compatibility for the old jobs
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We no longer use it, so let's remove it. If you are wondering what to use
instead, use the Cloud API. It supports everything the Koji API supported
and more.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This test tested two things:
1) Invalid route - this is already covered by TestUnknownRoute
2) Invalid UUID in the compose status route - this is now covered by
TestComposeStatusInvalidUUID
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Fedora 34 is EOL, let's remove all traces of it, including:
- distro definition
- repositories (and test ones)
- test manifests
- special package set rules
- hacks from the spec file
Signed-off-by: Ondřej Budai <ondrej@budai.cz>