Fail the test if manifest generation fails on the PR HEAD, but don't
fail if the generation on main fails.
This can happen if something breaks in main (the generator, a
repository, an image definition, etc.) and the PR is meant to fix it.
Update the progress line only when another line is received, which in
this case means a job has started or finished.
No need to keep reprinting the progress.
The main reason is that container resolution should happen in only one
place, the worker, so that we have one central place to configure
aspects of it, like container credentials.
Implement all of Fedora in terms of this new abstraction. What used to be the
manifest functions (and before that the pipeline functions) are now the image
functions, whose purpose is to instantiate the right image kind structs from the
image type definitions we currently have in the distro definition.
This abstracts away the manifest instantiation. The idea is that we define one
of these image kind types to represent a group of image types that are
sufficiently similar. Each image kind will have a struct with all the
properties that can be customised for the image and a function to turn that into
an actual manifest. This is similar to how distro/fedora/manifest.go and
cmd/osbuild-playground works today, and aspires to move these closer together
and to eventually make the distro definitions simpler.
For now cmd/osbuild-playground is moved over to using the new abstraction.
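As a rough sketch of the shape this could take (all names below are illustrative, not the actual API):

```go
package image

// Manifest stands in for the real osbuild manifest type.
type Manifest struct {
	Pipelines []string
}

// QCOW2 is a hypothetical image kind: one struct covering a group of
// sufficiently similar image types, with all of its customisable
// properties as fields.
type QCOW2 struct {
	Packages []string
	Hostname string
}

// InstantiateManifest turns the configured image kind into an actual
// manifest; the image functions in the distro definition pick the
// right image kind and fill in its fields.
func (img *QCOW2) InstantiateManifest(m *Manifest) error {
	// ...assemble the build, OS and image pipelines here...
	m.Pipelines = append(m.Pipelines, "build", "os", "qcow2")
	return nil
}
```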
For now this encapsulates the osbuild export and the filename within
that exported tree. In the future we could add a MIME type.
For now this is a concrete type, but it should probably be an
interface, so that consumers of artefacts know they are of the right
type: enforcing that we only push AMIs to EC2, etc.
Similarly to how checkpoints work, each pipeline can be marked for
export, and the manifest can return all the names of the exported
pipelines, to be passed to osbuild. Additionally, the Export
function returns an artefact object, which can be used to know how
to access the exports once osbuild is done. For now, this is unused.
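To illustrate the mechanism (stand-in types; the real names may differ):

```go
package manifest

// Artifact describes how to access an export once osbuild is done:
// the export (pipeline) name and the filename inside the exported
// tree. A MIME type could be added later.
type Artifact struct {
	Export   string
	Filename string
}

// Pipeline and Manifest are minimal stand-ins for the real types.
type Pipeline struct {
	name     string
	filename string
	exported bool
}

func (p *Pipeline) Name() string { return p.name }

type Manifest struct {
	pipelines []*Pipeline
}

// Export marks the pipeline for export and returns the artefact
// object describing where its result will be found.
func (p *Pipeline) Export() *Artifact {
	p.exported = true
	return &Artifact{Export: p.name, Filename: p.filename}
}

// GetExports returns the names of all exported pipelines, to be
// passed to osbuild via its --export flag.
func (m *Manifest) GetExports() []string {
	var exports []string
	for _, p := range m.pipelines {
		if p.exported {
			exports = append(exports, p.Name())
		}
	}
	return exports
}
```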
If Checkpoint() is called on a pipeline, it is marked for
checkpointing. Calling GetCheckpoints() returns the names of all
its pipelines that are marked for checkpointing as a slice of
strings. This can be passed to osbuild by the caller, in which case
the trees produced by each of these pipelines will be checkpointed
to speed up future builds.
Before this can be used in production we need a mechanism for
automatically cleaning up the cache.
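The checkpoint side looks analogous; continuing the stand-in types from the export sketch above, with an extra `checkpoint bool` field on `Pipeline`:

```go
// Checkpoint marks the pipeline so that osbuild caches its tree.
func (p *Pipeline) Checkpoint() {
	p.checkpoint = true
}

// GetCheckpoints returns the names of all pipelines marked for
// checkpointing; the caller can pass them to osbuild via its
// --checkpoint flag to speed up future builds.
func (m *Manifest) GetCheckpoints() []string {
	var checkpoints []string
	for _, p := range m.pipelines {
		if p.checkpoint {
			checkpoints = append(checkpoints, p.Name())
		}
	}
	return checkpoints
}
```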
This is meant to encapsulate the tweaks we do to the OS tree
orthogonally to anything else. For now it still contains some
configuration that only sometimes applies, but this should
continue being reworked until all the fields in this struct
always apply to any artefact that is using it.
At the same time, stop instantiating with default values, as the
empty values should work. This is not a functional change as the
caller always sets these now.
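To make the intent concrete, a hypothetical sketch of such a struct (field names purely illustrative):

```go
// OSCustomizations encapsulates the tweaks applied to the OS tree.
// The goal is that every field applies to any artefact using it and
// that the zero value is a working configuration.
type OSCustomizations struct {
	Hostname string
	Timezone string
	Language string
	// Only sometimes applies; should be moved out as the rework
	// continues.
	KernelOptionsAppend []string
}
```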
Test the functionality only on RHEL-8.6, since this is the version that
Brew workers use. Test only RHUI images, because these are the ones
that will be used with this functionality.
The AWS and Azure RHUI images are produced as compressed archives, which
can be uploaded to Koji, but they can't be uploaded to the cloud
provider in this format. To support cloud upload for these types of
images, we need to decompress them before the upload.
Add a workaround for the AWS and AzureImage targets to check whether
the image has an `.xz` suffix and, if so, decompress it before
uploading it to the cloud.
This workaround is needed until image definitions support and use
multiple exports per image, allowing a different export per upload
target.
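A minimal sketch of the workaround, assuming a helper along these lines in the worker (the name is made up for illustration):

```go
package worker

import (
	"os/exec"
	"strings"
)

// maybeDecompress decompresses the image in place if it has an .xz
// suffix and returns the path to the raw image.
func maybeDecompress(path string) (string, error) {
	if !strings.HasSuffix(path, ".xz") {
		return path, nil
	}
	// `xz -d` replaces the .xz file with the decompressed image.
	if err := exec.Command("xz", "-d", path).Run(); err != nil {
		return "", err
	}
	return strings.TrimSuffix(path, ".xz"), nil
}
```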
Add support to handle upload options in image requests for Koji
composes. The image is always uploaded to Koji, but now it can be
uploaded to the cloud environment in addition to Koji as part of the
build.
The image name used for the Koji image can't be used as-is for
uploading to the cloud, because each cloud provider has its own
requirements for valid characters. For now, let the Cloud API
implementation generate a
random image name. The name is always returned in the compose status's
upload status, so it should be possible to attach it to the Koji build
to allow users to find the image.
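The name generation could be as simple as the following sketch (the prefix and format are assumptions for illustration):

```go
package cloudapi

import "github.com/google/uuid"

// generateCloudImageName returns a random image name; a fixed prefix
// plus a UUID avoids characters that cloud providers reject.
func generateCloudImageName() string {
	return "composer-api-" + uuid.New().String()
}
```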
Enhance the `koji-finalize` job implementation to be able to cope with
multiple upload targets being specified for an `OSBuildJob`.
Implement a convenience method `OSBuildJobResult.TargetResultsByName()`
for filtering the target results attached to the job result by their
name. Cover the method with a unit test and, lastly, use this method in
the `koji-finalize` job to find the appropriate Koji upload target
results.
This is a preparation for enabling cloud uploads for Koji composes.
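The method itself is essentially a filter (the types here are trimmed-down stand-ins for the real ones):

```go
package worker

// TargetResult and OSBuildJobResult stand in for the real types.
type TargetResult struct {
	Name string
}

type OSBuildJobResult struct {
	TargetResults []*TargetResult
}

// TargetResultsByName returns all target results attached to the job
// result whose name matches, e.g. the Koji upload target results in
// the koji-finalize job.
func (r *OSBuildJobResult) TargetResultsByName(name string) []*TargetResult {
	var results []*TargetResult
	for _, tr := range r.TargetResults {
		if tr.Name == name {
			results = append(results, tr)
		}
	}
	return results
}
```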
Enhance the `koji-finalize` job implementation to use a deferred
function to ensure that the job status is always reported back to the
composer. In addition, if the `JobError` is set, also fail the Koji
job. Previously, composer and Koji were not updated in some corner
cases where the job failed.
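A sketch of the deferred-reporting pattern described here (types and helper names are stand-ins, not the actual worker API):

```go
package worker

// KojiFinalizeJobResult and Job are trimmed-down stand-ins.
type KojiFinalizeJobResult struct {
	JobError error
}

type Job interface {
	Update(result interface{}) error
}

// failKojiJob is a hypothetical helper that marks the job as failed
// on the Koji side.
func failKojiJob(result *KojiFinalizeJobResult) {}

func runKojiFinalize(job Job) (err error) {
	var result KojiFinalizeJobResult

	// The deferred function runs on every return path, so composer
	// always receives the job status, and the Koji job is failed
	// whenever JobError is set.
	defer func() {
		if result.JobError != nil {
			failKojiJob(&result)
		}
		if updateErr := job.Update(&result); updateErr != nil && err == nil {
			err = updateErr
		}
	}()

	// ...the actual finalize logic sets result.JobError on failure...
	return nil
}
```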
Extend the `koji.sh` test case to also allow testing the upload to the
cloud, in addition to what it currently supports (building multiple
images in one Koji compose request).
The script now reuses some common functions used by the `api.sh` test
case. Once the Koji compose succeeds, the script verifies that the image
is present in the appropriate cloud environment using a CLI tool. No
additional testing of the image is done; it is not booted.
Extend the `tools/koji-compose.py` script to also allow testing the
upload to the cloud, in addition to the testing that it currently supports.
If only the `DISTRO` and `ARCH` arguments are passed to the script, it
submits a new Koji compose with two image requests, as it always did.
If `CLOUD_TARGET` and `IMAGE_TYPE` arguments are provided in addition
to `DISTRO` and `ARCH`, the script submits a new Koji compose with
a single image request, with the upload options set so that the
image is uploaded to the cloud.
Supported cloud targets are:
- `aws`
- `azure`
- `gcp`
The image types are those accepted by the Cloud API. The script does
not check whether the provided combination of cloud target and image
type is valid; it submits whatever it gets to composer.
Modify the `tools/koji-compose.py` script to print all log messages to
STDERR and to print only the Koji compose ID to STDOUT. This way, the
caller of the script can easily get the ID of the compose created by the
script and use it later.
Now that we have enabled container embedding on RHEL 8, let's
also test it there.
We also pin it for Fedora and RHEL/CS 9 to be able to use the
new `org.osbuild.containers.storage.conf` stage.
Support for adding containers in non-OSTree images. The reason we
don't support OSTree artefacts just yet is that the default storage
location for containers is `/var/lib/containers/storage`. But for
OSTree images all content in `/var` is discarded, since that is
deployment-specific data. We therefore need to store the containers
somewhere else, e.g. `/usr/share/containers/storage`, but then we also
need to configure the system to find containers in that location.
osbuild only recently gained the corresponding stage to do so, and
thus this will be done in a follow-up.
Add a new test case that embeds an existing container from our
GitLab CI registry into a qcow2 image. It uses `image-info` to
verify that the container, with the expected ID, is indeed embedded
in the resulting image.
Add support for reporting the installed container images in an image.
NB: this does not use `podman` but reads the overlay storage
directly, and therefore does not currently take additional image
locations or different storage drivers into account. For now this
is not a problem, since we don't support any of that.
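Roughly, reading the store directly amounts to something like this (paths and field names assume the default containers/storage overlay layout):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// storedImage mirrors the fields of interest in the store's
// images.json.
type storedImage struct {
	ID    string   `json:"id"`
	Names []string `json:"names"`
}

func main() {
	// Only the default location is read; additional image locations
	// and other storage drivers are deliberately not handled.
	data, err := os.ReadFile("/var/lib/containers/storage/overlay-images/images.json")
	if err != nil {
		panic(err)
	}
	var images []storedImage
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.Names)
	}
}
```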
Add a new `containers` section that can be used to request the
embedding of containers into images. The only required property is
`source`, which specifies where to fetch the container from. Either
the digest of the container or a tag can be specified; if neither is
given, it defaults to the `latest` tag. The `Name` field can be used
to optionally specify a name to use inside the image.
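A sketch of what the section could map to in Go (tags and TOML shape assumed for illustration):

```go
// A blueprint would then request embedding along these lines:
//
//	[[containers]]
//	source = "registry.example.com/org/app:v1"
//	name = "app"
type Container struct {
	// Source is required and specifies where to fetch the container
	// from; it may include a tag or digest, defaulting to `latest`.
	Source string `json:"source" toml:"source"`
	// Name optionally sets the name to use inside the image.
	Name string `json:"name,omitempty" toml:"name,omitempty"`
}
```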
NB: no tools or APIs support container resolution yet. This follows
in the next commits.