Remove all the internal packages that are now in the
github.com/osbuild/images package and vendor it.
A new function in internal/blueprint/ converts from an osbuild-composer
blueprint to an images blueprint. This is necessary while the
blueprint implementation is kept in both packages. In the future, the
images package will change the blueprint (and most likely rename it),
and it will only be part of the osbuild-composer internals and
interface. The Convert() function will be responsible for converting
the blueprint into the new configuration object.
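A minimal sketch of such a conversion, assuming the images blueprint
package is imported under an alias (the exact import path and field
list are illustrative):

    import iblueprint "github.com/osbuild/images/pkg/blueprint"

    // Convert translates a composer blueprint into its images
    // counterpart by copying it field by field.
    func Convert(bp Blueprint) iblueprint.Blueprint {
        return iblueprint.Blueprint{
            Name:        bp.Name,
            Description: bp.Description,
            Version:     bp.Version,
            // ... the remaining fields are copied the same way.
        }
    }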
Change the OSTreeResolveSpec to match the ostree SourceSpec by removing
the Parent field.
Change OSTreeResolveResultSpec to match the CommitSpec by adding the
Secrets field. The RHSM field is kept for backwards compatibility with
older workers.
The Resolve() function is now only responsible for resolving a
SourceSpec to a CommitSpec. It resolves a checksum only if a URL is
set, and it sets the option for the RHSM secrets.
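Roughly, with field names taken from the text above (the exact struct
shapes and the secret name are assumptions):

    type SourceSpec struct {
        URL  string
        Ref  string
        RHSM bool
    }

    type CommitSpec struct {
        Ref      string
        URL      string
        Checksum string
        Secrets  string
    }

    func resolve(spec SourceSpec) CommitSpec {
        commit := CommitSpec{Ref: spec.Ref, URL: spec.URL}
        if spec.URL != "" {
            // a checksum is only resolved when a URL is set
            commit.Checksum = resolveRef(spec.URL, spec.Ref)
        }
        if spec.RHSM {
            commit.Secrets = "org.osbuild.rhsm.consumer"
        }
        return commit
    }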
The Parent has been removed from the SourceSpec. The SourceSpec is a
simple reference to a single ostree ref and has no connection with the
ostree options.
As with the container SourceSpec, the struct specifies the required
information to resolve an ostree commit from a source (URL, ref, and
optional parent).
Renaming for consistency.
Add an optional `BootMode` field to the AWS target options.
This allows signaling to the worker the intended boot mode to use
when registering the AMI in AWS. If not specified, the default
behavior is preserved: the boot mode will be determined by the default
boot mode of the instance provisioned from the AMI.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
When the AMI is being registered from a snapshot, the caller can
optionally specify the boot mode of the AMI. If no boot mode is
specified, the default behavior is to use the boot mode of the
instance that is launched from the AMI.
The default behavior (no boot mode specified) is preserved after this
change.
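With aws-sdk-go this amounts to something like the following sketch
(the surrounding registration code is illustrative):

    input := &ec2.RegisterImageInput{
        Name:               aws.String(imageName),
        VirtualizationType: aws.String("hvm"),
        // ... block device mappings referencing the snapshot ...
    }
    // Only set BootMode when the caller provided one, preserving
    // the AWS default behavior otherwise.
    if bootMode != nil {
        input.BootMode = aws.String(*bootMode) // e.g. "uefi" or "legacy-bios"
    }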
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add the ListDigest to the container Spec struct and all its copies so we
can store list digests when they are available and pass them on to the
appropriate osbuild stages, sources, and inputs.
Copy the value whenever a spec is moved to a different representation.
This causes dnf-json to use a separate cache per distribution,
allowing depsolves to run in parallel, with one lock per distribution.
Multiple depsolves for the same distribution in the blueprint will
continue to be serial.
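Conceptually, the locking looks like this simplified sketch (names
are illustrative):

    import "sync"

    var (
        mu    sync.Mutex
        locks = map[string]*sync.Mutex{}
    )

    // lockFor returns the lock for a distribution, creating it on
    // first use. Depsolves for different distributions can run in
    // parallel, while depsolves for the same one serialize on the
    // same lock.
    func lockFor(distro string) *sync.Mutex {
        mu.Lock()
        defer mu.Unlock()
        l, ok := locks[distro]
        if !ok {
            l = &sync.Mutex{}
            locks[distro] = l
        }
        return l
    }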
The ioutil package has been deprecated since Go 1.16; this fixes all
of the deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are a simple name change, the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns a slice of os.DirEntry, but the IsDir and Name methods
work the same. The difference is that the FileInfo must be retrieved
with the Info() method, which can also return an error.
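For example:

    entries, err := os.ReadDir(path)
    if err != nil {
        return err
    }
    for _, entry := range entries {
        info, err := entry.Info() // the FileInfo now comes from Info()
        if err != nil {
            return err
        }
        fmt.Println(entry.Name(), entry.IsDir(), info.ModTime())
    }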
These were identified by running:
golangci-lint run --build-tags=integration ./...
The default number of threads (16) is OK for the general use case.
However, RH IT has asked us to lower the number of threads when
uploading images to Azure through a proxy server.
Make the number of threads configurable in the worker configuration and
default to the currently used value if it is not provided.
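A sketch of the configuration shape; the key and field names are
assumptions:

    type azureConfiguration struct {
        Credentials   string `toml:"credentials"`
        UploadThreads int    `toml:"upload_threads"`
    }

    func (c *azureConfiguration) threads() int {
        if c == nil || c.UploadThreads <= 0 {
            return 16 // the previously hard-coded default
        }
        return c.UploadThreads
    }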
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend the worker's configuration to allow setting GCP Bucket to use
when uploading images to GCP. The value from the configuration is used
only if not provided in the TargetOptions of the job.
In GCP, the region of the bucket does not limit image imports to a
particular region, so it is completely possible to use a single bucket
to import images to any and all regions.
Return an error in case no bucket name was set in the job nor in the
worker configuration.
Previously, the internal `OSBuildJobImpl` structure defined only a
`GCPCreds` member. This is not practical once there is more than one
GCP-related variable.
Define a new `GCPConfiguration` structure, move the credentials variable
into it and use it in `OSBuildJobImpl` instead.
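The resulting shape, roughly:

    // GCPConfiguration groups all GCP-related worker settings.
    type GCPConfiguration struct {
        Creds  string // previously the lone GCPCreds member
        Bucket string // the bucket setting described above
    }

    type OSBuildJobImpl struct {
        GCPConfig GCPConfiguration
        // ...
    }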
There is a desire to make the worker as "dumb" as possible. Therefore
the worker should not generate AWS object key names when none was
provided in the job.
Modify the worker code to never generate the AWS object key and
instead set an error in case the object key was not provided.
Modify Weldr API implementation to generate the object key, if it was
not provided by the user. This is consistent with Cloud API
implementation.
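Sketches of both sides (the key naming scheme is illustrative):

    // worker: never invent a key
    if targetOptions.Key == "" {
        return fmt.Errorf("no AWS object key was provided in the job")
    }

    // Weldr API: generate one when the user did not supply it
    if key == "" {
        key = fmt.Sprintf("composer-api-%s", uuid.New().String())
    }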
Flip the logic when deciding whether to use the Bucket from the job
or from the worker configuration. Previously, the Bucket from the
worker configuration was always preferred if it was set, even if one
was provided in the job itself. This made it impossible to override
the configuration.
Change the logic to use the Bucket from the worker configuration only if
it was not set in the job.
Report an error if no bucket name was provided with the job and there is
also none specified in the configuration.
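The resulting precedence, as a sketch:

    // the job takes precedence over the worker configuration
    bucket := targetOptions.Bucket
    if bucket == "" {
        bucket = impl.GCPConfig.Bucket
    }
    if bucket == "" {
        return fmt.Errorf("no bucket was provided in the job or the worker configuration")
    }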
Currently, errors like clienterror 28 ("at least one target failed")
have all the relevant information in the details, so don't omit these
details from the worker logs.
If the object is marked as public, its direct download URL will be returned
instead of the presigned one.
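Assuming S3-style storage, the behavior is roughly (URL format and
expiry are illustrative):

    if public {
        url = fmt.Sprintf("https://%s.s3.amazonaws.com/%s", bucket, key)
    } else {
        req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
        })
        url, err = req.Presign(7 * 24 * time.Hour)
    }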
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
When running an osbuild job, we read `/etc/redhat-release` to get the
host OS name to attach as metadata to the job result.
Only Fedora and RHEL ship this file, which makes the osbuild job always
fail on other distributions.
The main reason to report the host OS back to the worker server is
Koji composes and the koji-finalize job, which pushes it to Koji. The
motivation is to have enough information to potentially re-instantiate
/ identify the original builder host OS. There are no specific
requirements on the string.
Modify the code to use `/etc/os-release` to determine the host OS.
Fall back to using `linux` as the host OS in case reading `os-release`
fails; log the error and continue with the job. The `linux` fallback
is suggested by the `os-release` spec [1].
[1] https://www.freedesktop.org/software/systemd/man/os-release.html#ID=
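A simplified version of the fallback logic:

    hostOS := "linux" // fallback suggested by the os-release spec
    if data, err := os.ReadFile("/etc/os-release"); err != nil {
        logrus.Warnf("reading os-release failed, falling back to %q: %v", hostOS, err)
    } else {
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasPrefix(line, "ID=") {
                hostOS = strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
    }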
Co-authored-by: Achilleas Koutsou <achilleas@koutsou.net>
The ellipsis operator was used as a hack to avoid having to pass any
details as an argument, but it makes what the end object will actually
look like less obvious. It also makes it impossible to pass an array
to details without getting a nested array.
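To illustrate (names hypothetical):

    type jobError struct {
        Code    int
        Reason  string
        Details interface{}
    }

    // before: callers could omit details, but a slice argument
    // ended up wrapped in another slice
    func newVariadic(code int, reason string, details ...interface{}) *jobError {
        return &jobError{code, reason, details}
    }

    // after: the details argument is explicit, nil when absent
    func newExplicit(code int, reason string, details interface{}) *jobError {
        return &jobError{code, reason, details}
    }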
Fixes #2874
Add a new cloud API test that will build an edge-container, upload it
to the GitLab CI registry, fetch it from there, run it, and verify
that the OSTree commit contained in it is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
If an `AuthFilePath` was configured, which should contain secrets
to access container registries, we set this on the `Client` so
that the secrets can be used during registry access.
Worker
------
Add configuration for the default container registry.
Use the default container registry if one is not provided as part of
the image name.
When using the default registry, use the configured values.
Return the image URL as part of the result.
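A sketch of the name resolution; the heuristic mirrors the usual
container naming convention and is illustrative:

    // prepend the configured default registry when the image name
    // does not already include one
    func resolveImageName(name, defaultRegistry string) string {
        first := strings.SplitN(name, "/", 2)[0]
        if strings.ContainsAny(first, ".:") || first == "localhost" {
            return name // already contains a registry
        }
        return defaultRegistry + "/" + name
    }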
Composer Worker API
-------------------
Add `ContainerTargetResultOptions` to return the image URL.
Composer API
------------
Add UploadOptions to allow setting the image name and tag.
Add UploadStatus to return the URL of the uploaded image.
Co-Developed-By: Christian Kellner <christian@kellner.me>
The AWS and Azure RHUI images are produced as compressed archives, which
can be uploaded to Koji, but they can't be uploaded to the cloud
provider in this format. To support cloud upload for these types of
images, we need to decompress them before the upload.
Add a workaround for the AWS and Azure image targets to check whether
the image has an `.xz` suffix and, if so, decompress it before
uploading to the cloud. This workaround is needed until image
definitions support and use multiple exports per image, allowing a
different export per upload target.
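The workaround, roughly:

    if strings.HasSuffix(imagePath, ".xz") {
        // xz -d replaces the .xz file with the decompressed image
        if err := exec.Command("xz", "-d", imagePath).Run(); err != nil {
            return fmt.Errorf("decompressing %s failed: %v", imagePath, err)
        }
        imagePath = strings.TrimSuffix(imagePath, ".xz")
    }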
Enhance the `koji-finalize` job implementation to be able to cope with
multiple upload targets being specified for an `OSBuildJob`.
Implement a convenience method `OSBuildJobResult.TargetResultsByName()`
for filtering the target results attached to the job result by their
name. Cover the method with a unit test. Lastly, use this method in
the `koji-finalize` job to find the appropriate Koji upload target
results.
This is a preparation for enabling cloud uploads for Koji composes.
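The method is essentially a filter (the exact types are assumptions):

    func (r *OSBuildJobResult) TargetResultsByName(name target.TargetName) []*target.TargetResult {
        var results []*target.TargetResult
        for _, tr := range r.TargetResults {
            if tr.Name == name {
                results = append(results, tr)
            }
        }
        return results
    }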
Enhance the `koji-finalize` job implementation to use a deferred
function to ensure that the job status is always reported back to the
composer. In addition, if the `JobError` is set, also fail the Koji
job.
Previously, composer and Koji were not updated in some corner cases when
the job would fail.
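The pattern, as a sketch (helper and type names are assumptions):

    func (impl *KojiFinalizeJobImpl) run(job worker.Job) (err error) {
        var result worker.KojiFinalizeJobResult
        defer func() {
            if result.JobError != nil {
                impl.failKojiBuild() // assumed helper
            }
            // always report the final status back to composer
            if uerr := job.Update(&result); uerr != nil && err == nil {
                err = uerr
            }
        }()
        // ... the job body sets result.JobError on failure ...
        return nil
    }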
Instead of keeping an extra field in `Client`, we just use the
existing `sysCtx.DockerAuthConfig` structure. When the context
is later copied during the upload operation the credentials
will be copied as well. It also saves us from syncing the
credentials if we directly use said `sysCtx` for operations.
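With the containers/image types, this amounts to:

    // the credentials live on the SystemContext itself, so every
    // copy of the context carries them along
    c.sysCtx.DockerAuthConfig = &types.DockerAuthConfig{
        Username: username,
        Password: password,
    }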
Instead of having an extra field, `TlsVerify`, on the `Client` and
then later setting the corresponding `SystemContext` options, use the
existing `SystemContext` field of `Client`. The corresponding field is
a tri-state (unset, true, false), which is represented as a pointer to
a boolean in the `Client`'s new getter and setter. This also inverts
the boolean logic from verify-TLS to skip-TLS, which aligns well with
the corresponding fields in the upload target struct.
In addition, we properly capitalize some existing variables.
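A sketch of the setter; note the inversion from verify-TLS to
skip-TLS (the getter is symmetric):

    // SetTLSVerify maps the *bool tri-state onto the SystemContext's
    // OptionalBool, inverting verify-TLS into skip-TLS.
    func (c *Client) SetTLSVerify(verify *bool) {
        if verify == nil {
            c.sysCtx.DockerInsecureSkipTLSVerify = types.OptionalBoolUndefined
            return
        }
        c.sysCtx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!*verify)
    }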
The Koji API removed by the previous commit was the last user of the
osbuild-koji job. Let's remove it, since nothing uses it. This also
removes all of the compatibility code in the Cloud API; see the
concerns below:
Compatibility concerns:
- the internal deployment was moved to a completely different composer
instance, thus there are no old jobs
- the Fedora deployment is still unused in prod, thus we don't care
about keeping backward compatibility for the old jobs
Signed-off-by: Ondřej Budai <ondrej@budai.cz>