The newest weldr-client (35.6) changed its responses to always be
arrays/lists of JSON objects. The tests have been updated to parse this
structure as well.
The current workflow for parsing responses from the weldr-client is as
follows (see the sketch after the list):
- If weldr-client is installed on the system:
  - Try to parse the newest structure version: array of objects, each
    with a body field.
  - If that fails, initialise the first element of the array and parse
    the response into it.
- If the weldr-client is not installed, initialise the array with one
  element and parse the response into the body field of the first
  element of the array.
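A minimal sketch of that fallback logic in Go, using a hypothetical `weldrResponse` helper type (the real tests use their own structures):

```go
import "encoding/json"

// Hypothetical helper type; only the "body" field matters for the sketch.
type weldrResponse struct {
	Body map[string]interface{} `json:"body"`
}

func parseWeldrOutput(raw []byte, clientInstalled bool) ([]weldrResponse, error) {
	if clientInstalled {
		// Newest structure: an array of objects, each with a body field.
		var responses []weldrResponse
		if err := json.Unmarshal(raw, &responses); err == nil {
			return responses, nil
		}
		// Older structure: initialise one element and parse into it directly.
		responses = make([]weldrResponse, 1)
		if err := json.Unmarshal(raw, &responses[0]); err != nil {
			return nil, err
		}
		return responses, nil
	}
	// No weldr-client installed: parse into the body field of the first element.
	responses := make([]weldrResponse, 1)
	if err := json.Unmarshal(raw, &responses[0].Body); err != nil {
		return nil, err
	}
	return responses, nil
}
```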
For each of the supported distros, start a goroutine to depsolve
'filesystem', which will preload the metadata and make subsequent
responses faster.
This is safe to do without limits because we only support a limited
number of distros, and without additional locking because this is the
same as hitting the API with multiple depsolve requests at the same
time.
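A sketch of the warm-up, assuming a `depsolve` callback standing in for the real solver call:

```go
// Fire-and-forget warm-up: depsolve "filesystem" once per supported distro so
// the dnf metadata is cached before real requests arrive.
func preloadMetadata(distros []string, depsolve func(distro string, pkgs []string) error) {
	for _, d := range distros {
		go func(distro string) {
			// Errors are ignored on purpose; this is only a cache warm-up.
			_ = depsolve(distro, []string{"filesystem"})
		}(d)
	}
}
```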
When the user specified any of the distro, arch, or image type values to
filter generation, invalid combinations would cause a panic, which made
it hard to filter requests based just on an image type.
Instead of failing, print an error message to inform the user, but
continue with the rest of the jobs.
This way, a user is informed that a certain combination is invalid if
they make a mistake, but can also filter on a single image type and only
get valid manifests out of the run.
When running an osbuild job, we read `/etc/redhat-release` to get the
host OS name to attach as metadata to the job result.
Only Fedora and RHEL ship this file, which makes the osbuild job always
fail on other distributions.
The main reason to report host OS back to the worker server is due to
Koji composes and the koji-finalize job, which pushes it to Koji. The
motivation is to have enough information to potentially re-instantiate
/ identify the original builder host OS. There are no specific
requirements on the string.
Modify the code to use `/etc/os-release` to determine the host OS. Fall
back to using `linux` as the host OS if reading `os-release` fails; log
the error and continue with the job. The `linux` fallback is suggested
by the `os-release` spec [1].
[1] https://www.freedesktop.org/software/systemd/man/os-release.html#ID=
Co-authored-by: Achilleas Koutsou <achilleas@koutsou.net>
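A minimal sketch of the detection described above, assuming a standalone helper rather than the worker's actual code:

```go
import (
	"bufio"
	"os"
	"strings"
)

// Read the ID field from /etc/os-release; fall back to "linux" on any error,
// as suggested by the os-release spec.
func hostOS() string {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		// Logging is elided here; the worker logs the error and continues.
		return "linux"
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return "linux"
}
```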
The search response from mocks/dnfjson is a map of responses indexed by the
comma-separated list of packages and globs being requested. Add support
for this.
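Illustrative shape of that map (keys and payloads are made up):

```go
import (
	"encoding/json"
	"strings"
)

// Search responses are keyed by the comma-separated list of requested
// package names and globs.
var mockSearchResponses = map[string]json.RawMessage{
	"pkg-a":        json.RawMessage(`[{"name": "pkg-a"}]`),
	"pkg-a,pkg-b*": json.RawMessage(`[{"name": "pkg-a"}, {"name": "pkg-b-extra"}]`),
}

func lookupSearch(args []string) (json.RawMessage, bool) {
	resp, ok := mockSearchResponses[strings.Join(args, ",")]
	return resp, ok
}
```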
Extend the implementation of mock openid server to take the `grant_type`
into consideration for the `/token` endpoint.
In addition to the previously supported `refresh_token`, the
implementation now also supports `client_credentials`.
This is necessary to make it possible to use the mock server in
the `koji-osbuild` CI, because the builder plugin uses
`client_credentials` to get an access token.
The implementation behaves in the following way:
- For `refresh_token` grant type, it takes the `refresh_token` value
from the request and adds it to the `rh-org-id` field in the custom
claim, which is part of the returned token.
- For `client_credentials` grant type, it takes the `client_secret`
value from the request and adds it to the `rh-org-id` field in the
custom claim, which is part of the returned token.
Requests without the supported `grant_type` set are rejected.
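A rough sketch of the `/token` behaviour described above; the real mock issues a signed JWT carrying the custom claim, which is simplified to a plain JSON response here:

```go
import (
	"encoding/json"
	"net/http"
)

func tokenHandler(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	var orgID string
	switch r.FormValue("grant_type") {
	case "refresh_token":
		orgID = r.FormValue("refresh_token")
	case "client_credentials":
		orgID = r.FormValue("client_secret")
	default:
		// Requests without a supported grant_type are rejected.
		http.Error(w, "unsupported grant_type", http.StatusBadRequest)
		return
	}

	// The value ends up in the "rh-org-id" field of the custom claim.
	_ = json.NewEncoder(w).Encode(map[string]string{"rh-org-id": orgID})
}
```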
Modify affected test cases to specify `grant_type` when fetching a new
access token.
The ellipsis operator was used as a hack to avoid having to pass any
details as an argument, but it makes it less obvious what the end object
will actually look like. It also makes it impossible to pass an array to details
without getting a nested array.
Fixes #2874
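The underlying Go behaviour, shown with a hypothetical `newDetails` helper:

```go
import "fmt"

// A slice passed to a variadic parameter becomes a single (nested) element.
func newDetails(details ...interface{}) []interface{} {
	return details
}

func main() {
	fmt.Println(newDetails("a", "b"))           // prints [a b]
	fmt.Println(newDetails([]string{"a", "b"})) // prints [[a b]] (nested array)
}
```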
Add a new cloud API test that will build an edge-container,
upload it to the GitLab CI registry, fetch it from there,
run it, and verify that the OSTree commit contained in it
is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
If an `AuthFilePath` was configured, which should contain secrets
to access container registries, we set it on the `Client` so
that the secrets can be used during registry access.
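A sketch of that wiring, assuming the client keeps a `*types.SystemContext` from containers/image:

```go
import "github.com/containers/image/v5/types"

// Point the client's system context at the configured auth file so its
// registry secrets are picked up during registry access.
func applyAuthFile(sysCtx *types.SystemContext, authFilePath string) {
	if authFilePath != "" {
		sysCtx.AuthFilePath = authFilePath
	}
}
```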
Worker
------
- Add configuration for the default container registry.
- Use the default container registry if one is not provided as part
  of the image name.
- When using the default registry, use the configured values.
- Return the image URL as part of the result.
Composer Worker API
-------------------
- Add `ContainerTargetResultOptions` to return the image URL.
Composer API
------------
- Add `UploadOptions` to allow setting the image name and tag.
- Add `UploadStatus` to return the URL of the uploaded image.
Co-Developed-By: Christian Kellner <christian@kellner.me>
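Illustrative shapes for the Composer API additions above; the actual field names are defined by the API schema:

```go
// Upload options for container uploads: name and tag of the pushed image.
type ContainerUploadOptions struct {
	Name string `json:"name,omitempty"`
	Tag  string `json:"tag,omitempty"`
}

// Upload status for container uploads: where the image can be pulled from.
type ContainerUploadStatus struct {
	URL string `json:"url"`
}
```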
Update the progress line only when another line was received, which in
this case means a job has started or finished.
No need to keep reprinting the progress.
This abstracts away the manifest instantiation. The idea is that we define one
of these image kind types to represent a group of image types that are
sufficiently similar. Each image kind will have a struct with all the
properties that can be customised for the image and a function to turn that into
an actual manifest. This is similar to how distro/fedora/manifest.go and
cmd/osbuild-playground works today, and aspires to move these closer together
and to eventually make the distro definitions simpler.
For now cmd/osbuild-playground is moved over to using the new abstraction.
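A rough sketch of the abstraction; the interface name and signature are illustrative, not the final API:

```go
// Placeholder for the real manifest representation.
type Manifest []byte

// An image kind bundles the customisable properties of a family of similar
// image types and knows how to turn them into a concrete manifest.
type ImageKind interface {
	Name() string
	InstantiateManifest() (Manifest, error)
}
```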
This is meant to encapsulate the tweaks we do to the OS tree
orthogonally to anything else. For now it still contains some
configuration that only sometimes applies, but this should
continue being reworked until all the fields in this struct
always apply to any artefact that is using it.
At the same time, stop instantiating with default values, as the
empty values should work. This is not a functional change as the
caller always sets these now.
The AWS and Azure RHUI images are produced as compressed archives, which
can be uploaded to Koji, but they can't be uploaded to the cloud
provider in this format. To support cloud upload for these types of
images, we need to decompress them before the upload.
Add a workaround for the AWS and AzureImage targets to check whether the
image has a `.xz` suffix and, if so, decompress it before uploading to
the cloud.
This workaround is needed until image definitions support and use
multiple exports per image, allowing a different export per upload
target.
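A sketch of the workaround, shelling out to the `xz` tool (the real job may handle paths and errors differently):

```go
import (
	"os/exec"
	"strings"
)

// If the image artifact is xz-compressed, decompress it in place and return
// the path of the decompressed file; otherwise return the path unchanged.
func maybeDecompress(path string) (string, error) {
	if !strings.HasSuffix(path, ".xz") {
		return path, nil
	}
	// `xz -d` replaces file.img.xz with file.img.
	if err := exec.Command("xz", "-d", path).Run(); err != nil {
		return "", err
	}
	return strings.TrimSuffix(path, ".xz"), nil
}
```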
Enhance the `koji-finalize` job implementation to be able to cope with
multiple upload targets being specified for an `OSBuildJob`.
Implement a convenience method `OSBuildJobResult.TargetResultsByName()`
for filtering the target results attached to the job result by their
name. Cover the method with a unit test. Lastly, use this method in
the `koji-finalize` job to find the appropriate Koji upload target
results.
This is a preparation for enabling cloud uploads for Koji composes.
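A minimal sketch of the filtering method, with placeholder types standing in for the worker's real job and target result structs:

```go
type TargetResult struct {
	Name string
}

type OSBuildJobResult struct {
	TargetResults []*TargetResult
}

// TargetResultsByName returns all target results whose name matches.
func (r *OSBuildJobResult) TargetResultsByName(name string) []*TargetResult {
	var filtered []*TargetResult
	for _, tr := range r.TargetResults {
		if tr.Name == name {
			filtered = append(filtered, tr)
		}
	}
	return filtered
}
```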
Enhance the `koji-finalize` job implementation to use a deferred function
to ensure that the job status is always reported back to the composer.
In addition, if the `JobError` is set, also fail the Koji job.
Previously, composer and Koji were not updated in some corner cases when
the job would fail.
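The general shape of that pattern, with hypothetical callbacks in place of the real composer and Koji clients:

```go
type kojiFinalizeResult struct {
	JobError error
}

func runKojiFinalize(result *kojiFinalizeResult, report func(*kojiFinalizeResult), failKojiBuild func()) (err error) {
	defer func() {
		// Always report the job status back to composer.
		report(result)
		// If the job error is set, also fail the Koji build.
		if result.JobError != nil {
			failKojiBuild()
		}
	}()

	// ... the actual finalize work; on failure it sets result.JobError ...
	return nil
}
```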
Add a new `containers` section that can be used to request the
embedding of containers into images. The only requirement is
the source property to specify where to fetch the container from.
This supports specifying either the digest of the container or a tag.
If neither is given, it defaults to the `latest` tag. The `Name`
field can be used to optionally specify a name to use inside the
image.
NB: no tools or APIs support container resolution yet.
This follows in the next commits.
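An illustrative shape of the new section (only the fields mentioned above):

```go
// Container requests embedding a container into the image. Source is the only
// required field and may carry a tag or digest; Name optionally overrides the
// name used inside the image.
type Container struct {
	Source string `json:"source" toml:"source"`
	Name   string `json:"name,omitempty" toml:"name,omitempty"`
}
```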
This is the first step to support embedding container images. Here
we add the `containers []container.Spec` argument to supply images
with resolved container specifications. For now all distros will
return an error in case a container is actually supplied since none
of them currently support embedding containers. NB: no APIs or
tools will actually resolve containers yet.
Instead of keeping an extra field in `Client`, we just use the
existing `sysCtx.DockerAuthConfig` structure. When the context
is later copied during the upload operation the credentials
will be copied as well. It also saves us from syncing the
credentials if we directly use said `sysCtx` for operations.
Instead of having an extra field, `TlsVerify`, on the `Client` and
then later setting the corresponding `SystemContext` options, use
the existing `SystemContext` field of `Client`. The corresponding
field is a tri-state: unset, true, false, which is represented as
a pointer to boolean in the `Client`'s new getter and setter. This
also inverts the boolean logic from verify TLS to skip TLS which
aligns very well with the corresponding fields in the upload target
struct.
In addition we properly capitalize some existing variables.
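A sketch of the tri-state mapping; the method names and the `Client` layout are illustrative:

```go
import "github.com/containers/image/v5/types"

type Client struct {
	sysCtx *types.SystemContext
}

// nil means "unset"; otherwise the value is mirrored into the system
// context's skip-TLS-verification field.
func (cl *Client) SetSkipTLSVerify(skip *bool) {
	if skip == nil {
		cl.sysCtx.DockerInsecureSkipTLSVerify = types.OptionalBoolUndefined
		return
	}
	cl.sysCtx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(*skip)
}

func (cl *Client) GetSkipTLSVerify() *bool {
	switch cl.sysCtx.DockerInsecureSkipTLSVerify {
	case types.OptionalBoolTrue:
		v := true
		return &v
	case types.OptionalBoolFalse:
		v := false
		return &v
	default:
		return nil
	}
}
```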
The Koji API removed by the previous commit was the last user of the
osbuild-koji job. Let's remove it, since nothing uses it. This also
removes all of the compatibility code in the Cloud API; see the concerns
below:
Compatibility concerns:
- the internal deployment was moved to a completely different composer
instance, thus there are no old jobs
- the Fedora deployment is still unused in prod, thus we don't need to
keep backward compatibility for the old jobs
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We no longer use it, so let's remove it. If you are wondering what to use instead,
use Cloud API. It supports everything that Koji API supported and more.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This introduces an expiry date (default: 14 days from the insert date)
and adjusts the service-maintenance script to delete jobs that are older
than the expiration date.
We have three kinds of operating system trees; until we unify them into
one, hide them behind one interface. Use this to read the architecture
from the Tree rather than pass it in as a string to parent pipelines.
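A hypothetical shape of that shared interface:

```go
// All three tree kinds expose the architecture they are built for, so parent
// pipelines can ask the tree instead of taking an architecture string.
type Tree interface {
	Architecture() string
}
```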
Also, make the filename parameter optional in a few places; there should
be no reason to set it rather than introspect it (except for backwards
compatibility).
Lastly, add another playground example sample to build a raw image.
For now all it does is represent the name of the runner and what requirements
it has of the build pipeline.
Move some package definitions from the runner package set to where they belong.
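An illustrative shape of the runner type as described above:

```go
// Runner names itself and lists the packages it requires in the build
// pipeline; nothing more for now.
type Runner struct {
	Name          string   // e.g. an osbuild runner name like "org.osbuild.fedora36"
	BuildPackages []string // requirements of the build pipeline
}
```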