Add an architecture label to build jobs, which enables filtering and
monitoring build jobs by architecture. Build job results contain an
`arch` field in the result struct; when it has a value it is passed on
to the metrics, otherwise the label is set to an empty string.
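A hedged sketch of what an arch-labelled job metric could look like; the metric name, label set, and helper function below are hypothetical, not the actual ones used by the worker server:

```go
// Illustrative sketch only: metric and function names are hypothetical.
package main

import "github.com/prometheus/client_golang/prometheus"

var jobsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "osbuild_worker_jobs_total", // hypothetical metric name
		Help: "Total number of processed build jobs, by type and architecture.",
	},
	[]string{"type", "arch"},
)

func init() {
	prometheus.MustRegister(jobsTotal)
}

// recordJob bumps the counter; arch comes from the job result's `arch`
// field and may be an empty string when the result carries no architecture.
func recordJob(jobType, arch string) {
	jobsTotal.WithLabelValues(jobType, arch).Inc()
}
```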
Remove a duplicate call to the `DequeueJobMetrics` function in the worker
server. The duplicate call resulted in negative numbers for pending jobs
in the Prometheus metrics.
During the diff-manifests.sh test the source repository checkout is
changed to generate manifests from the current main branch for comparison.
We want to check out $head again after the script is done or in case of
any unexpected exit.
osbuild commit 9956f54 includes a fix for the `containers.storage.conf`
to work with RHEL 8 by trying to include `pytoml` if including `toml`
fails. We need that for the RHEL 8 based container embedding on OSTree
tests.
Instead of using the cached result of `GetDefaultAuthFile`, always call
the function when a new `Client` is created, since at least
`/run/containers` can get created as a side effect by one of the
containers. Now that we check eagerly and often, the path check function
was reworked to only return paths that exist and are accessible.
Also check if `REGISTRY_AUTH_FILE` is set; if so and the file is
accessible, use it.
To check accessibility, use `unix.Access` instead of `os.Stat`: on
Fedora/RHEL 9, `os.Stat` is implemented via `statx` and does return
`EACCES` for inaccessible paths, but on RHEL 8 `lstat` is used, which
returns `ENOENT`, and only later, when trying to open the file, do we
get `EPERM`.
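A minimal sketch of the accessibility check, assuming `golang.org/x/sys/unix`; the helper name is illustrative, not the one in the container package:

```go
// Illustrative sketch: the helper name is hypothetical.
package container

import "golang.org/x/sys/unix"

// pathIsAccessible reports whether the given auth file path exists and is
// readable. unix.Access uses faccessat(2) directly, so the behaviour is the
// same on RHEL 8 (lstat-based os.Stat) and RHEL 9 (statx-based os.Stat).
func pathIsAccessible(path string) bool {
	return unix.Access(path, unix.R_OK) == nil
}
```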
Add support for embedding containers in OSTree commits by
storing them in `/usr/share/containers/storage`. The storage
engine is configured accordingly so that this extra location
is automatically taken into account by e.g. `podman`.
We never want an empty path but always force a specific auth
file location, even if the location does not actually exist,
due to the peculiarities mentioned in the comment of the
`container.GetDefaultAuthFile` function.
Fail the test if manifest generation fails on the PR HEAD, but don't
fail if the generation on main fails.
This can happen if something breaks in main (the generator, a
repository, an image definition, etc.) and the PR is meant to fix it.
Update the progress line only when another line was received, which in
this case means a job has started or finished.
No need to keep reprinting the progress.
The main reason is that container resolution should happen in only one
place, the worker, so that there is one central place to configure
aspects of it, such as container credentials.
Implement all of Fedora in terms of this new abstraction. What used to be the
manifest functions (and before that the pipeline functions) are now the image
functions, whose purpose is to instantiate the right image kind structs from the
image type definitions we currently have in the distro definition.
This abstracts away the manifest instantiation. The idea is that we define one
of these image kind types to represent a group of image types that are
sufficiently similar. Each image kind will have a struct with all the
properties that can be customised for the image and a function to turn that into
an actual manifest. This is similar to how distro/fedora/manifest.go and
cmd/osbuild-playground work today, and aspires to move these closer together
and to eventually make the distro definitions simpler.
For now cmd/osbuild-playground is moved over to using the new abstraction.
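A minimal sketch of what such an image kind could look like; the type name, fields, and method below are hypothetical and do not mirror the actual code:

```go
// Illustrative sketch: names and fields are hypothetical.
package image

import "encoding/json"

// DiskImage represents one image kind: a group of sufficiently similar
// image types, carrying all the properties that can be customised for them.
type DiskImage struct {
	Platform      string   // e.g. "x86_64"
	ExtraPackages []string // packages on top of the base set
	Filename      string   // filename of the artefact in the exported tree
}

// InstantiateManifest turns the customised image kind into a manifest. A
// real implementation would assemble osbuild pipelines; this sketch only
// shows the shape of the API by emitting a placeholder document.
func (img *DiskImage) InstantiateManifest() ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"platform": img.Platform,
		"packages": img.ExtraPackages,
		"filename": img.Filename,
	})
}
```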
For now this encapsulates the osbuild export and the filename in that
exported tree. In the future we could add a MIME type.
For now this is a concrete type, but it should probably be an interface,
so that consumers of artefacts know they are of the right type,
enforcing that we only push AMIs to EC2, etc.
Similarly to how checkpoints work, each pipeline can be marked for
export, and the manifest can return all the names of the exported
pipelines, to be passed to osbuild. Additionally, the Export
function returns an artefact object, which can be used to know how
to access the exports once osbuild is done. For now, this is unused.
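A sketch of how the export marking and the returned artefact could fit together; all type and field names here are simplified placeholders, not the actual implementation:

```go
// Illustrative sketch: the real pipeline, manifest, and artifact types have
// different names and more fields.
package manifest

// Artifact describes how to access an exported pipeline's output once
// osbuild has finished: which export to request and the filename of the
// image inside the exported tree. A MIME type could be added later.
type Artifact struct {
	Export   string
	Filename string
}

// Pipeline is a placeholder for a single pipeline in the manifest.
type Pipeline struct {
	Name     string
	filename string
	export   bool
}

// Export marks the pipeline for export and returns an artifact describing
// where to find its output.
func (p *Pipeline) Export() *Artifact {
	p.export = true
	return &Artifact{Export: p.Name, Filename: p.filename}
}

// Manifest is a placeholder for an ordered collection of pipelines.
type Manifest struct {
	pipelines []*Pipeline
}

// GetExports returns the names of all pipelines marked for export, to be
// passed on to osbuild.
func (m *Manifest) GetExports() []string {
	var exports []string
	for _, p := range m.pipelines {
		if p.export {
			exports = append(exports, p.Name)
		}
	}
	return exports
}
```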
If Checkpoint() is called on a pipeline, it is marked for
checkpointing. Calling GetCheckpoints() returns the names of all
its pipelines that are marked for checkpointing as a slice of
strings. This can be passed to osbuild by the caller, in which case
the trees produced by each of these pipelines will be checkpointed
to speed up future builds.
Before this can be used in production we need a mechanism for
automatically cleaning up the cache.
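A sketch of the checkpoint marking, analogous to the export sketch above; the placeholder types are again hypothetical:

```go
// Illustrative sketch with its own minimal placeholder types.
package manifest

type pipeline struct {
	name       string
	checkpoint bool
}

// Checkpoint marks the pipeline so that osbuild checkpoints its tree.
func (p *pipeline) Checkpoint() {
	p.checkpoint = true
}

type manifest struct {
	pipelines []*pipeline
}

// GetCheckpoints returns the names of all pipelines marked for
// checkpointing; passing these to osbuild lets it cache the corresponding
// trees and reuse them in future builds.
func (m *manifest) GetCheckpoints() []string {
	var names []string
	for _, p := range m.pipelines {
		if p.checkpoint {
			names = append(names, p.name)
		}
	}
	return names
}
```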
This is meant to encapsulate the tweaks we do to the OS tree
orthogonally to anything else. For now it still contains some
configuration that only sometimes applies, but this should
continue being reworked until all the fields in this struct
always apply to any artefact that is using it.
At the same time, stop instantiating with default values, as the
empty values should work. This is not a functional change as the
caller always sets these now.
Test the functionality only on RHEL-8.6, since this is the version that
Brew workers use. Test only RHUI images, because these will be the ones
to be used with this functionality.
The AWS and Azure RHUI images are produced as compressed archives, which
can be uploaded to Koji, but they can't be uploaded to the cloud
provider in this format. To support cloud upload for these types of
images, we need to decompress them before the upload.
Add a workaround for the AWS and AzureImage targets to check whether the
image has an `.xz` suffix and, if so, decompress it before uploading it
to the cloud. This workaround is needed until image definitions support
and use multiple exports per image, allowing a different export to be
used per upload target.
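A sketch of the workaround, assuming the `xz` binary is available on the worker; the helper name is made up for illustration:

```go
// Illustrative sketch: the actual target handling in the worker differs.
package main

import (
	"os/exec"
	"strings"
)

// decompressIfXz decompresses the image in place when it carries an .xz
// suffix and returns the path of the image to hand to the cloud uploader.
func decompressIfXz(path string) (string, error) {
	if !strings.HasSuffix(path, ".xz") {
		return path, nil
	}
	// `xz -d` replaces image.raw.xz with image.raw on disk.
	if err := exec.Command("xz", "-d", path).Run(); err != nil {
		return "", err
	}
	return strings.TrimSuffix(path, ".xz"), nil
}
```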
Add support to handle upload options in image requests for Koji
composes. The image is always uploaded to Koji, but now it can be
uploaded to the cloud environment in addition to Koji as part of the
build.
The image name used for the Koji image can't be used as-is for uploading
to the cloud, because each cloud provider has its own requirements for
valid characters. For now, let the Cloud API implementation generate a
random image name. The name is always returned in the compose status's
upload status, so it should be possible to attach it to the Koji build
to allow users to find the image.
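A sketch of the kind of name generation this implies; the prefix and scheme are assumptions, not necessarily what the Cloud API implementation does:

```go
// Illustrative sketch: prefix and generation scheme are hypothetical.
package main

import "github.com/google/uuid"

// cloudImageName generates a random image name built from characters that
// the cloud providers accept. The generated name is reported back in the
// compose status's upload status.
func cloudImageName() string {
	return "composer-api-" + uuid.New().String()
}
```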
Enhance the `koji-finalize` job implementation to be able to cope with
multiple upload targets being specified for an `OSBuildJob`.
Implement a convenience method `OSBuildJobResult.TargetResultsByName()`
for filtering the target results attached to the job result by their
name. Cover the method with a unit test. Lastly, use this method in
the `koji-finalize` job to find the appropriate Koji upload target
results.
This is a preparation for enabling cloud uploads for Koji composes.
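A minimal sketch of the convenience method; the types are simplified stand-ins for the actual worker job result and target result types:

```go
// Illustrative sketch: simplified stand-ins for the real worker types.
package worker

// TargetResult is a placeholder for the result of a single upload target.
type TargetResult struct {
	Name string
}

// OSBuildJobResult is a placeholder carrying the attached target results.
type OSBuildJobResult struct {
	TargetResults []*TargetResult
}

// TargetResultsByName filters the target results attached to the job result
// by their name.
func (r *OSBuildJobResult) TargetResultsByName(name string) []*TargetResult {
	var results []*TargetResult
	for _, tr := range r.TargetResults {
		if tr.Name == name {
			results = append(results, tr)
		}
	}
	return results
}
```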
Enhance the `koji-finalize` job implementation to use a deferred function
to ensure that the job status is always reported back to the composer.
In addition, if the `JobError` is set, also fail the Koji job.
Previously, composer and Koji were not updated in some corner cases when
the job would fail.
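A sketch of the deferred-reporting pattern described above; the job, result, and Koji client types are simplified placeholders:

```go
// Illustrative sketch: simplified placeholders for the real worker types.
package main

import "log"

type kojiFinalizeJobResult struct {
	JobError error
}

// job and kojiAPI stand in for the worker's job handle and Koji client.
type job interface {
	Update(result interface{}) error
}

type kojiAPI interface {
	FailBuild(buildID int) error
}

// runKojiFinalize always reports its result back to composer from the
// deferred function and, if a JobError was recorded, also fails the Koji
// build, no matter where the function returns.
func runKojiFinalize(j job, koji kojiAPI, buildID int) error {
	var result kojiFinalizeJobResult

	defer func() {
		if result.JobError != nil {
			if err := koji.FailBuild(buildID); err != nil {
				log.Printf("failing Koji build %d failed: %v", buildID, err)
			}
		}
		if err := j.Update(&result); err != nil {
			log.Printf("reporting job result failed: %v", err)
		}
	}()

	// ... the actual finalization work would go here, setting
	// result.JobError on failure ...
	return nil
}
```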