Some variable names used in the `OsBuildJob` `Run()` method were not
very self-explanatory, which made the code harder to understand and
navigate. These were `args`, `options` and `t`. Rename them to better
reflect their purpose.
Backward-compatibility code handling the conversion of VMDK images to
the stream-optimized sub-format has been kept in the implementation
since PR#2529 [1], merged on May 4th 2022. Since this change, no API
implementation submits jobs that would hit this conversion code,
because VMDK images are already produced in the desired sub-format.
On-premise deployments are expected to use the same composer and worker
versions, and there are no composer / worker instances in production
that are not running the modified code.
Delete the backward compatibility code.
[1] https://github.com/osbuild/osbuild-composer/pull/2529
Modify the `OsBuildJob` implementation to handle multiple upload targets
in a loop. However, there is still no API implementation that adds
`OsBuildJobs` with multiple targets to the job queue.
The limitations are that only a single osbuild export is supported, and
the same artifact will be used for each target.
At the end of the job, errors from all targets are gathered. If there
are none, the job succeeds. If at least one target failed, the job
fails as well, and a slice of errors from all failed targets is added
to the job error as details.
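A minimal sketch of that flow; all names here (`jobTargets`,
`uploadToTarget`, `newTargetJobError`, `jobResult`) are illustrative
stand-ins for the actual worker code, not the exact implementation:

    // Run every upload target in a loop, reusing the same artifact,
    // then aggregate the per-target errors.
    var targetErrors []error
    for _, target := range jobTargets {
        if err := uploadToTarget(target, artifact); err != nil {
            targetErrors = append(targetErrors, err)
        }
    }
    if len(targetErrors) == 0 {
        jobResult.Success = true // all targets succeeded
    } else {
        // the job fails; the slice of target errors becomes the details
        jobResult.JobError = newTargetJobError("at least one target failed", targetErrors)
    }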
Add a new worker client error type `ErrorTargetError`, representing
that at least one of the job's targets failed. The actual target errors
are added to the job error details.
Add a new `OSBuildJobResult.TargetErrors()` method for gathering a slice
of target errors contained within an `OSBuildJobResult` instance. Cover
the method with a unit test.
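A sketch of what the method and its unit test might look like, using
simplified stand-ins for the worker package types (the real ones are
richer):

    import "testing"

    type ClientError struct {
        Reason string
    }

    type TargetResult struct {
        TargetError *ClientError // nil when the target succeeded
    }

    type OSBuildJobResult struct {
        TargetResults []TargetResult
    }

    // TargetErrors gathers the errors of all failed target results.
    func (r *OSBuildJobResult) TargetErrors() []*ClientError {
        var errs []*ClientError
        for _, tr := range r.TargetResults {
            if tr.TargetError != nil {
                errs = append(errs, tr.TargetError)
            }
        }
        return errs
    }

    func TestTargetErrors(t *testing.T) {
        result := OSBuildJobResult{TargetResults: []TargetResult{
            {TargetError: &ClientError{Reason: "upload failed"}},
            {}, // a successful target contributes no error
        }}
        if got := len(result.TargetErrors()); got != 1 {
            t.Errorf("expected 1 target error, got %d", got)
        }
    }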
Ensure that a target result with a proper error is added to the job
result in case there was any error encountered. This error is not
used at all for now. Keep setting the `JobError` to the same error set
in the target result for now.
This is a step towards job results containing multiple target results,
each of them potentially having an error set as well.
Do not pass the `worker.OSBuildJobResult` to `uploadToS3()`, but instead
return target errors from the function. This will make the error
handling of all upload targets consistent and easier to modify to
support multiple targets.
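The shape of that refactor might look roughly like this; the signature
and helper names are assumptions, not the actual code:

    // Before (sketch): the upload helper mutated the job result directly.
    //   func uploadToS3(result *worker.OSBuildJobResult, options S3TargetOptions, path string)
    //
    // After (sketch): the helper only reports what went wrong; the caller
    // records the error, keeping all upload targets consistent.
    func uploadToS3(options S3TargetOptions, imagePath string) error {
        // perform the upload; on failure, return the error to the caller
        // instead of writing it into the job result
        if err := s3UploadImage(options, imagePath); err != nil {
            return err
        }
        return nil
    }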
Add a new generic container registry client via a new `container`
package. Use it to create a command line utility as well as a
new upload target for container registries.
The code uses the github.com/containers/* projects and packages to
interact with container registries, the same stack that is also used
by skopeo, podman et al. One of the dependencies is `proglottis/gpgme`,
which uses cgo to bind libgpgme, so we have to add the corresponding
devel package to the BuildRequires as well as install it on CI.
Checks will follow later via an integration test.
The only functional change is that
base_path = ""
will now be parsed as:
config.BasePath == ""
which wasn't possible before.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The struct is factored out 1:1. The only functional change in this
commit is that the worker now logs when the config is missing (which
means just loading the defaults).
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This commit moves the field to the koji struct, where it actually
belongs. It also renames it to relax_timeout_factor for the sake of
consistency.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The TOML library translates the field names 1:1, so now you have to use:
[Composer]
proxy = "abcd"
This is not idiomatic, though, so let's add a toml tag to make it [composer].
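A sketch of the tag in question, assuming a config struct along these
lines (not the exact struct in the repo):

    // Without a toml tag the decoder falls back to the Go field name,
    // so the section had to be spelled [Composer]. The tag makes the
    // idiomatic lowercase [composer] work.
    type Config struct {
        Composer ComposerConfig `toml:"composer"`
    }

    type ComposerConfig struct {
        Proxy string `toml:"proxy"`
    }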
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The `rpmmd.RepoConfig` configuration supports setting "package sets"
for each repository, which allows associating individual repos with
specific package sets. Add a new `package_set` option to the
repo configuration of the compose request so that this feature can
be used.
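A sketch of a repo entry carrying such an association; the field and
tag names here are assumptions, the real rpmmd.RepoConfig may differ in
detail:

    type RepoConfig struct {
        Name    string `json:"name,omitempty"`
        BaseURL string `json:"baseurl,omitempty"`
        // PackageSets lists the package sets this repository should be
        // used for; an empty list means the repo applies to all of them.
        PackageSets []string `json:"package_sets,omitempty"`
    }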
When the Koji target support was added to the osbuild job, based on the
osbuild-koji job, the meaning of the target option values got mixed up.
The side effect of the issue is that when Koji composes are
submitted via the Cloud API, the resulting image is currently always
uploaded back to the worker server.
`OsBuildKoji` job
-----------------
- `OSBuildKojiJob.ImageName` is set to the filename of the image as
exported by osbuild.
- `OSBuildKojiJob.KojiFilename` is set to the desired filename which
should be used when uploading the image to Koji.
`OsBuild` job + `KojiTargetOptions` before
------------------------------------------
- `OSBuildJob.ImageName` is set to the filename of the image as exported
  by osbuild. This is done only by the Cloud API code for Koji composes;
  the Cloud API does not set this for regular composes or any other
  target. In the common case, the variable is set only by the Weldr API
  code with the same meaning, and it is used by the `OsBuild` job
  implementation as an indication that the image should be uploaded
  back to the worker server.
- `Target.ImageName` is not set at all. Other targets use it for the
desired filename which should be used when uploading the image to the
target environment.
- `KojiTargetOptions.Filename` is set to the desired filename which
should be used when uploading the image to Koji. All other target
types use the `Filename` variable in their options for the filename of
the image as exported by osbuild.
`OsBuild` job + `KojiTargetOptions` after
-----------------------------------------
- `OSBuildJob.ImageName` is still set to the filename of the image as
exported by osbuild. This is kept for backward compatibility of a new
composer with older workers.
- `Target.ImageName` is set to the desired filename which should be used
when uploading the image to Koji.
- `KojiTargetOptions.Filename` is set to the filename of the image as
exported by osbuild.
This change is backward incompatible, meaning that an old worker won't
be able to handle Koji compose requests submitted via the Cloud API
using a new composer, and a new worker won't be able to handle Koji
compose requests submitted by an old composer. This is intentional,
because after discussion with Ondrej Budai, the Cloud API Koji
integration is currently not used anywhere in production.
The return statement was forgotten when the Koji target support was
added. As a result, a job with a failed Koji upload would be reported
as successful, while at the same time having a `JobError` set.
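The bug had roughly this shape (a sketch with illustrative names, not
the literal code):

    if err := uploadToKoji(imagePath, kojiServer); err != nil {
        osbuildJobResult.JobError = kojiUploadError(err)
        return // this is the return statement that was missing
    }
    // without the return above, execution fell through to the success
    // path even though JobError was already set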
We want to start tagging page blobs, so this commit adds a small tagging
method to our azure library and exposes it in the osbuild-upload-azure
helper.
Example:
go run ./cmd/osbuild-upload-azure/ \
-container azure-container \
-image ./sample.vhd \
-storage-access-key KEY \
-storage-account account \
-tag key:value \
-tag hello:world \
-tag bird:toucan
This commit also has to downgrade the azblob library to version 0.13 so
that the blob tags API matches the one currently shipped in Fedora.
This is suboptimal, but it should unblock us for now.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
It always felt wrong that the method uploaded the blob under a different
name than the one specified in the blob metadata.
This commit moves the responsibility for specifying the right extension
to the callers. The azure.EnsureVHDExtension helper was added to
simplify this.
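A minimal sketch of what such a helper might look like; the real
implementation in the azure package may differ:

    import "strings"

    // EnsureVHDExtension appends ".vhd" to the name unless it is
    // already present.
    func EnsureVHDExtension(name string) string {
        if strings.HasSuffix(name, ".vhd") {
            return name
        }
        return name + ".vhd"
    }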
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Adjust the `koji-finalize` job implementation to be able to handle
results from both the `osbuild` and `osbuild-koji` jobs.
In the case of the `osbuild` job, the result is of type
`worker.OSBuildJobResult` and the important values are stored in the
Koji upload target options. For now, assume that there may be only a
single upload target result.
In the case of the `osbuild-koji` job, the result is of type
`worker.OSBuildKojiJobResult` and the important values are already part
of the structure. Add an "Old" suffix to all functions handling this
case.
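A hedged sketch of the dispatch; the handler names are hypothetical and
the real koji-finalize implementation is more involved:

    switch res := depResult.(type) {
    case *worker.OSBuildJobResult:
        // new path: read the values from the Koji upload target options,
        // assuming a single upload target result for now
        handleOSBuildResult(res)
    case *worker.OSBuildKojiJobResult:
        // old path, served by the "Old"-suffixed helpers
        handleOSBuildKojiResultOld(res)
    }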
Ensure that none of the job dependencies failed. This covers the case
when there is more than one job dependency, which will be the case
for Koji composes.
Previously, the `OSBuild` job assumed that it could have only a single
job dependency, which could only be the `ManifestJobByID`. This won't
work well for the Koji use case, because the Koji OSBuild job also
depends on the Koji-init job.
Extend the `worker.OSBuildJob` structure with a new field, which holds
the index of the `ManifestJobByIDResult` in the job's dynamic arguments
slice. This value is consulted when the `OSBuild` job has more than one
dependency.
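The new field might look roughly like this; the field name and JSON tag
are assumptions:

    type OSBuildJob struct {
        // ... existing fields ...

        // ManifestDynArgsIdx is the index of the ManifestJobByIDResult
        // in the job's dynamic arguments slice. It is only consulted
        // when the job has more than one dependency, e.g. Koji composes,
        // which also depend on the Koji-init job.
        ManifestDynArgsIdx *int `json:"manifest_dyn_args_idx,omitempty"`
    }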
Define supported job type names as constants and use them in all places
instead of string literals.
There are multiple benefits to this approach. Using constants removes
the room for typos in the string literals, enables IDE autocompletion
for job types, and makes it easier to find all references to a job type
and thus all places that handle it.
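A sketch of the constant definitions; the exact set of job types in the
worker package may differ:

    const (
        JobTypeOSBuild        = "osbuild"
        JobTypeOSBuildKoji    = "osbuild-koji"
        JobTypeDepsolve       = "depsolve"
        JobTypeKojiInit       = "koji-init"
        JobTypeKojiFinalize   = "koji-finalize"
        JobTypeManifestIDOnly = "manifest-id-only"
    )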
It is generally useful to have this information in the
`OSBuildJobResult`. This information is currently part of the
`OSBuildKojiJobResult`. Instead of moving it to the new
`KojiTargetResultOptions`, let's move it to the `OSBuildJobResult`
structure and set it for all jobs.
The `distro` package is now used for distro definitions supported by
osbuild-composer, not for introspecting the host system. Move
`GetRedHatRelease()` and `GetHostDistroName()` functions to the `common`
package.
This error fails to parse correctly on the workers as a
dnfjson.Error. The old rpmmd.DNFError was returned by pointer; the
internal/dnfjson package, however, returns the Error by value.
Call vacuum analyze after each chunk of updates, and dump vacuum stats
at the beginning and end of the db cleanup.
Nulling results can increase the size on disk, but calling vacuum
analyze will free up space within the table (not on disk) and reuse it
for new inserts and updates.
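A hedged sketch of that loop, assuming a *sql.DB connected to
PostgreSQL; the table name and the chunking helper are illustrative:

    for _, chunk := range chunks {
        if err := nullResultsInChunk(db, chunk); err != nil {
            return err
        }
        // reclaim dead tuples inside the table so the freed space can
        // be reused by later inserts and updates (disk size unchanged)
        if _, err := db.Exec("VACUUM ANALYZE jobs"); err != nil {
            return err
        }
    }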
Instead of deleting records, delete the results from the manifest and
depsolve jobs. This redacts sensitive data which the manifest can
contain, and this conserves space.
Use case
--------
If the Endpoint is not set and the Region is - upload to AWS S3
If both the Endpoint and the Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (use the configuration); see the sketch below
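A hedged sketch of that decision; the option fields and upload helpers
are illustrative names, not the exact implementation:

    switch {
    case options.Endpoint == "" && options.Region != "":
        // plain AWS S3 upload
        err = uploadToAWSS3(options)
    case options.Endpoint != "" && options.Region != "":
        // Generic S3 via the Weldr API: connection parameters come from
        // the API call itself
        err = uploadToGenericS3(options)
    default:
        // Generic S3 via the Composer API: connection parameters are
        // taken from the worker configuration
        err = uploadToGenericS3(optionsFromWorkerConfig())
    }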
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Added a CleanCache() method to the solver that deletes all the caches
if the total size grows above a certain (configurable) limit
(default: 500 MiB).
The function is called externally so that errors can be handled by the
caller (usually logged or ignored completely) and to avoid calling it
multiple times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
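A minimal sketch of such a cleanup, assuming a solver that owns a cache
directory; the types and field names are simplified and the real
CleanCache() is likely smarter:

    import (
        "os"
        "path/filepath"
    )

    type Solver struct {
        cacheDir     string
        maxCacheSize int64 // configurable; e.g. 500 MiB by default
    }

    // CleanCache deletes the whole cache directory once its total size
    // grows above the configured limit.
    func (s *Solver) CleanCache() error {
        var total int64
        err := filepath.Walk(s.cacheDir, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.IsDir() {
                total += info.Size()
            }
            return nil
        })
        if err != nil {
            return err
        }
        if total > s.maxCacheSize {
            return os.RemoveAll(s.cacheDir)
        }
        return nil
    }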
The value doesn't represent the worker name, just the top-level cache
directory for a job. It's useful for separating caches and making the
generation faster, but it's not necessary to return it from the
function.
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.