We want to start tagging page blobs, so this commit adds a small tagging
method to our azure library and exposes it in the osbuild-upload-azure helper.
Example:
go run ./cmd/osbuild-upload-azure/ \
-container azure-container \
-image ./sample.vhd \
-storage-access-key KEY \
-storage-account account \
-tag key:value \
-tag hello:world \
-tag bird:toucan
This commit also has to downgrade the azblob library version to 0.13 so the
API for blob tags is the same as the one currently shipped to Fedora.
This is suboptimal but it should unblock us for now.
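For reference, the repeated -tag flag can be collected with a custom
flag.Value; the sketch below shows the general pattern with assumed
names (tagsFlag, key:value splitting), not necessarily how
osbuild-upload-azure implements it:

    package main

    import (
        "flag"
        "fmt"
        "strings"
    )

    // tagsFlag collects repeated -tag key:value arguments into a map
    // (name and format are assumptions for this sketch).
    type tagsFlag map[string]string

    func (t tagsFlag) String() string { return fmt.Sprint(map[string]string(t)) }

    func (t tagsFlag) Set(value string) error {
        parts := strings.SplitN(value, ":", 2)
        if len(parts) != 2 {
            return fmt.Errorf("tag must be in key:value format, got %q", value)
        }
        t[parts[0]] = parts[1]
        return nil
    }

    func main() {
        tags := tagsFlag{}
        flag.Var(tags, "tag", "blob tag in key:value format, can be repeated")
        flag.Parse()
        fmt.Println("tags:", tags)
    }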
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
It always felt wrong that the method uploaded the blob under a different name
than the one specified in the blob metadata.
This commit moves the responsibility of specifying the right extension to
the callers. The azure.EnsureVHDExtension helper was added to simplify this.
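A helper like this can be as simple as appending the extension when it
is missing; a minimal sketch of the idea (the real implementation may
differ in details):

    package azure

    import "strings"

    // EnsureVHDExtension appends ".vhd" to the name if it does not
    // already end with it (sketch only).
    func EnsureVHDExtension(name string) string {
        if strings.HasSuffix(name, ".vhd") {
            return name
        }
        return name + ".vhd"
    }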
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Use the `EnsureJobChannel()` middleware in all `compose/<id>` endpoints.
Specifically in the:
- status
- metadata
- manifests
- logs
As a result, these endpoints now return `404` in case the server has JWT
enabled and the channel associated with the request does not match the
channel associated with the requested compose (job).
Extend the multi-tenancy unit test to ensure that these endpoints behave
as expected in case of match and mismatch between the request and
compose channels.
Add `EnsureJobChannel()` middleware method, intended for `compose/<id>`
endpoints. Its purpose is to ensure that the tenant channel set in
the request `echo.Context` matches the tenant channel associated with
the compose. In case of mismatch, `404` is returned.
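A minimal sketch of the idea behind the middleware, assuming echo and a
jobChannel lookup callback; the context key and names here are
assumptions, and the real method lives on the API server and uses the
worker server's JobChannel():

    package cloudapi

    import (
        "net/http"

        "github.com/labstack/echo/v4"
    )

    // ensureJobChannel compares the tenant channel from the request
    // context with the channel stored for the compose job and returns
    // 404 on mismatch (sketch only).
    func ensureJobChannel(jobChannel func(jobID string) (string, error)) echo.MiddlewareFunc {
        return func(next echo.HandlerFunc) echo.HandlerFunc {
            return func(c echo.Context) error {
                requestChannel, ok := c.Get("channel").(string)
                if !ok {
                    return echo.NewHTTPError(http.StatusNotFound)
                }
                composeChannel, err := jobChannel(c.Param("id"))
                if err != nil || composeChannel != requestChannel {
                    // 404 instead of 403 so tenants cannot probe for
                    // composes that belong to other tenants.
                    return echo.NewHTTPError(http.StatusNotFound)
                }
                return next(c)
            }
        }
    }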
Add a `JobChannel()` method to the worker server implementation for
requesting the channel associated with the job.
Extract the determination of tenant channel into a helper function.
This will simplify handler and middleware methods, which won't have
to implement the same logic by themselves.
Fix the multi-tenancy unit test to pass the appropriate context when
querying compose statuses, because the server that is being used has JWT
enabled and expects the tenant to be set in it.
Switch to using `osbuild` job type with `koji` upload target for Koji
build jobs, instead of using `osbuild-koji` job type.
Modify unit tests accordingly.
Previously, only a subset of all Koji Compose unit test cases was
run. Remove this limitation and run all defined unit tests, which were
copied from `kojiapi`.
In addition, fix unit tests and relevant cloudapi methods to make unit
tests pass.
Add `TestRouteWithReply()` to `test/helpers.go` to allow getting the
compose ID when submitting a new compose. This is needed to make some
unit tests deterministic.
Do not delete values from the `fields` slice in `dropFields()` in
`test/helpers.go`. The behavior was previously inconsistent.
If the top-level map contained the value, it was deleted from it, but
if the nested maps also contained the value, it was not deleted from
them. On the other hand, if the top-level map didn't contain the value
but nested maps did contain it, the value was deleted from all nested
maps.
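A consistent version recurses into nested maps without modifying the
fields slice itself; a sketch of that idea (the exact signature in
test/helpers.go may differ):

    package test

    // dropFields removes the given field names from m and, recursively,
    // from any nested maps, without mutating the fields slice (sketch).
    func dropFields(m map[string]interface{}, fields ...string) {
        for _, f := range fields {
            delete(m, f)
        }
        for _, v := range m {
            if nested, ok := v.(map[string]interface{}); ok {
                dropFields(nested, fields...)
            }
        }
    }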
Add new `JobDependencyChainErrors()` method for gathering a stack trace
of job errors from the job's dependencies which caused it to fail.
The `JobDependencyChainErrors()` implementation intentionally uses the
job-type specific `...Status()` methods, because each of them checks
the job's result in a slightly different way and sets result.JobError
to a specific value. For this reason, it would not be practical to
introduce a generic `JobStatus()` method and get rid of the `switch`
block, because in reality, the new method would have to implement an
equivalent `switch` block as well.
Add unit test covering the method functionality.
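Conceptually, the method walks the dependency chain and collects the
errors of the failed dependencies that caused the job to fail. A rough
sketch of that idea, with assumed types and callbacks standing in for
the job-type specific `...Status()` lookups:

    package worker

    // dependencyChainErrors collects the job's own error and,
    // recursively, the errors of its failed dependencies, ordered from
    // the job down to the root cause (illustrative sketch only).
    func dependencyChainErrors(
        id string,
        errorOf func(id string) error,
        depsOf func(id string) []string,
    ) []error {
        jerr := errorOf(id)
        if jerr == nil {
            return nil
        }
        chain := []error{jerr}
        for _, dep := range depsOf(id) {
            chain = append(chain, dependencyChainErrors(dep, errorOf, depsOf)...)
        }
        return chain
    }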
Ensure that none of the job dependencies failed. This covers the case
when there is more than one job dependency, which will be the case for
Koji composes.
Support the composes/<id>/manifests API endpoint for non-koji builds.
The endpoint will have to handle `osbuild` job results anyway once
Koji composes start using the `osbuild` job type for builds.
The endpoint previously contained a bug. If the `osbuild-koji` job had
an empty manifest attached as a static job argument (this is the default
type value), then this empty manifest was added to the endpoint
response. Since the Cloud API uses the depsolve and manifest jobs, the
actual manifest was never attached to the job as a static argument. As a
result, the endpoint was always returning an empty manifest for any Koji
compose. Fixing this also required adjusting unit tests, which were
relying on the buggy behavior.
Extend the unit test testing a successful compose to test the
manifests endpoint.
Support the composes/<id>/logs API endpoint for non-koji builds. The
endpoint will have to handle `osbuild` job results anyway once Koji
composes start using the `osbuild` job type for builds.
Extend the unit test testing a successful compose to test the logs
endpoint.
Previously, the `OSBuild` job assumed that it could have only a single
job dependency, which could only be the `ManifestJobByID` job. This
won't work well for the Koji use case, because the Koji OSBuild job
also depends on the Koji-init job.
Extend the `worker.OSBuildJob` structure with a new field, which holds
the `ManifestJobByIDResult` index in the job's dynamic arguments slice.
This value is considered when the `OSBuild` job has more than one
dependency.
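As a sketch of the idea (the field name and JSON tag here are
illustrative assumptions):

    package worker

    // OSBuildJob sketch showing only the new field.
    type OSBuildJob struct {
        // ...existing fields...

        // ManifestDynArgsIdx is the index of the ManifestJobByIDResult
        // in the job's dynamic arguments slice. It is only consulted
        // when the job has more than one dependency; with a single
        // dependency the manifest result is the only dynamic argument.
        ManifestDynArgsIdx *int `json:"manifest_dyn_args_idx,omitempty"`
    }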
Define supported job type names as constants and use them in all
places, instead of string literals.
There are multiple benefits to this approach. Using constants removes
the room for typos in the string literals. One can use IDE
autocompletion for job types. Using constants also makes it easier to
find all references to a job type and thus all places that handle a
specific job type.
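For illustration, the pattern looks like this (the exact set of
constants shown here is only a sketch):

    package worker

    // Job type names used when enqueuing and dequeuing jobs.
    const (
        JobTypeOSBuild        = "osbuild"
        JobTypeDepsolve       = "depsolve"
        JobTypeManifestIDOnly = "manifest-id-only"
        JobTypeKojiInit       = "koji-init"
        JobTypeKojiFinalize   = "koji-finalize"
    )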
Change the definition of `EnqueueOSBuildAsDependency()` function to
accept a slice of job IDs on which the OSBuild job depends. Previously,
only the manifest job ID was accepted as the only possible dependency.
This change will be needed in order to enqueue OSBuild jobs for Koji,
which depend on two jobs.
It is generally useful to have this information in the
`OSBuildJobResult`. This information is currently part of the
`OSBuildKojiJobResult`. Instead of moving it to the new
`KojiTargetResultOptions`, let's move it to the `OSBuildJobResult`
structure and set it for all jobs.
The `distro` package is now used for distro definitions supported by
osbuild-composer, not for introspecting the host system. Move the
`GetRedHatRelease()` and `GetHostDistroName()` functions to the `common`
package.
Apply an RWMutex lock to a cache directory.
A global map of cache locks is maintained, keyed by the absolute path to
the cache directory, so multiple cache instances can coexist and share
locks if they use the same cache root.
Currently, the lock only prevents multiple concurrent `shrink()`
operations when multiple cache instances share the same root.
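A sketch of the locking scheme described above, with assumed names:

    package dnfjson

    import "sync"

    // cacheLocks maps an absolute cache directory path to its lock, so
    // that multiple cache instances rooted in the same directory share
    // one lock (sketch; names are assumptions).
    var (
        cacheLocksMu sync.Mutex
        cacheLocks   = map[string]*sync.RWMutex{}
    )

    // lockForDir returns the shared RWMutex for the given cache root,
    // creating it on first use.
    func lockForDir(dir string) *sync.RWMutex {
        cacheLocksMu.Lock()
        defer cacheLocksMu.Unlock()
        if l, ok := cacheLocks[dir]; ok {
            return l
        }
        l := new(sync.RWMutex)
        cacheLocks[dir] = l
        return l
    }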
- Update timestamps for cache elements whenever a repository is used.
- Call the new `shrink()` function instead of the old `clean()`.
- Remove the old `clean()` function.
If the repoRecency and repoElements somehow become inconsistent (an ID
in repoRecency does not exist in repoElements), ignore the entry and
continue. The repoID will still be removed from the repoRecency list at
the end, since it is counted in nDeleted.
Add functions for managing the repository cache based on a maximum
desirable size for the entire dnf-json cache directory.
While none of the functions are currently used, the workflow should
be as follows:
- Update the timestamp of a repository whenever it's used in a
transaction by calling `touchRepo()` with the repository ID and the
current time.
- Update the internal cache information when desired by calling
`updateInfo()`. This should be called for example after multiple
depsolve transactions are run for a single build request.
- Shrink the cache to below the configured maxSize by calling
`shrink()`.
The most important work happens in `updateInfo()`. It collects all the
information it needs from the on-disk cache directories and organises it
in a way that makes it convenient for the `shrink()` function to run
efficiently. It stores three important pieces of information:
1. repoElements: a map that links a repository ID with all the
information about a repository's cache:
- the top-level elements (files and directories) for the cache
- size of the repository cache (total of all elements)
- most recent mtime from all the elements which, if the
`touchRepo()` call is consistently used, should reflect the most
recent time the repository was used
2. repoRecency: a list of repository IDs sorted by mtime (oldest first)
3. size: the total size of the cache (total of all repository caches)
This way, when `shrink()` is called, the paths associated with the
least-recently-used repositories can be easily deleted by iterating on
repoRecency, obtaining the repository info from the map, deleting every
path in the repoElements array, and subtracting the repository's size
from the total. The `shrink()` function stops when the new size is
below the maxSize (or when all repositories have been deleted).
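A sketch of the `shrink()` loop as described, with assumed type and
field names:

    package dnfjson

    import "os"

    // Minimal types assumed for this sketch.
    type repoInfo struct {
        Paths []string // top-level cache elements of the repository
        Size  uint64   // total size of those elements
    }

    type rpmCache struct {
        repoElements map[string]repoInfo
        repoRecency  []string // repository IDs, oldest first
        size         uint64
        maxSize      uint64
    }

    // shrink deletes the least-recently-used repository caches until
    // the total size drops below maxSize.
    func (c *rpmCache) shrink() error {
        nDeleted := 0
        for _, repoID := range c.repoRecency {
            if c.size <= c.maxSize {
                break
            }
            repo, ok := c.repoElements[repoID]
            if !ok {
                // Inconsistent state: count it so it is trimmed from
                // repoRecency below, but otherwise skip it.
                nDeleted++
                continue
            }
            for _, path := range repo.Paths {
                if err := os.RemoveAll(path); err != nil {
                    return err
                }
            }
            c.size -= repo.Size
            delete(c.repoElements, repoID)
            nDeleted++
        }
        c.repoRecency = c.repoRecency[nDeleted:]
        return nil
    }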
Move cache handling data and code to a substruct of the BaseSolver.
This is all internal to the dnfjson package.
Paves the way for cache management with a persistent state.
The `amdgpu` module causes issues on certain GPU-enabled instances
on Azure and it must not be loaded by default.
Modules are sorted alphabetically.
Signed-off-by: Major Hayden <major@redhat.com>
Co-Authored-By: Christian Kellner <christian@redhat.com>
Call vacuum analyze after each chunk of updates, and dump vacuum stats
at the beginning and end of the db cleanup.
Nulling results can increase size on disk, but calling vacuum analyze
will free up space within the table (not on disk) and reuse the space
for new inserts and updates.
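For illustration, the per-chunk call can be as simple as this, assuming
database/sql and a hypothetical jobs table name:

    package main

    import (
        "context"
        "database/sql"
    )

    // vacuumAnalyze runs VACUUM ANALYZE after a chunk of updates so the
    // space freed by nulled results can be reused for new rows. The
    // table name is an assumption for this sketch.
    func vacuumAnalyze(ctx context.Context, db *sql.DB) error {
        _, err := db.ExecContext(ctx, "VACUUM ANALYZE jobs")
        return err
    }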
Include a tenant label for all Prometheus metrics. Modify the
jobstatus function in the worker accordingly to return the channel so
it can be passed to Prometheus.
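A sketch of what a tenant-labelled metric looks like with the
Prometheus Go client (metric and label names are assumptions):

    package metrics

    import (
        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
    )

    // composeRequests counts compose requests per tenant channel; the
    // "tenant" label is filled from the channel returned by the
    // worker's job status lookup.
    var composeRequests = promauto.NewCounterVec(prometheus.CounterOpts{
        Name: "composer_compose_requests_total",
        Help: "total number of compose requests",
    }, []string{"tenant"})

    // Usage: composeRequests.WithLabelValues(channel).Inc()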
In commit 5c1530e we disabled `skx_edac` and `intel_cstate` but
after further consultation with Prarit Bhargava it was agreed that
for RHEL 9 we should indeed allow them.
Instead of deleting records, delete the results from the manifest and
depsolve jobs. This redacts sensitive data which the manifest can
contain, and this conserves space.
Use case
--------
If Endpoint is not set and Region is - upload to AWS S3
If both the Endpoint and Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (use configuration)
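The selection above boils down to something like this (names are
assumptions for the sketch):

    package main

    // s3TargetKind picks the S3 flavour and the source of connection
    // parameters based on which settings are present (sketch only).
    func s3TargetKind(endpoint, region string) string {
        switch {
        case endpoint == "" && region != "":
            return "aws-s3" // plain AWS S3 upload
        case endpoint != "" && region != "":
            return "generic-s3-weldr" // parameters from the Weldr API call
        case endpoint == "" && region == "":
            return "generic-s3-composer" // parameters from worker configuration
        default:
            return "invalid"
        }
    }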
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from the Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters except the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters be taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Added a CleanCache() method to the solver that deletes all the caches if
the total size grows above a certain (configurable) limit
(default: 500 MiB).
The function is called externally so that the caller can handle errors
(usually by logging or ignoring them completely) and to avoid calling
it multiple times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
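In its simplest form the check can look like this (a sketch of the
idea; the real CleanCache() and its size accounting may differ):

    package dnfjson

    import (
        "os"
        "path/filepath"
    )

    // cleanCacheIfTooBig wipes the cache contents if the total size of
    // everything under root exceeds maxSize.
    func cleanCacheIfTooBig(root string, maxSize uint64) error {
        var size uint64
        err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.IsDir() {
                size += uint64(info.Size())
            }
            return nil
        })
        if err != nil {
            return err
        }
        if size <= maxSize {
            return nil
        }
        // Remove the contents but keep the cache root itself.
        entries, err := os.ReadDir(root)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if err := os.RemoveAll(filepath.Join(root, e.Name())); err != nil {
                return err
            }
        }
        return nil
    }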
This test was removed because package sets in chains are no longer
visible in the map returned from ImageType.PackageSets().
Bringing back the test now to ensure that:
1. All package set names defined in the keys returned from the
PackageSets() map match the keys returned from the
PackageSetsChains() map.
2. All package sets defined in the package set chains are defined for
the image type. This is tested by the PackageSets() function itself,
which should never panic.
They were originally added as convenience functions for single-case
calls, but they're not that useful and they have a million function
arguments, which isn't pretty.
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.
Test the conversion of the new and old DepsolveJob given the custom
marshaller.
The deserialised old format is not exactly the same as it would have
been before, but it is functionally equivalent, with the added benefit
of supporting depsolve jobs where we don't want base repositories to be
used by all depsolves.
Add custom marshaller for DepsolveJob that serialises the struct into a
format compatible with both the new and old formats. The format on the
wire is a superset of both the new and old format and can be
deserialised into either while retaining all information.
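The general shape of such a compatibility marshaller, purely to
illustrate the superset idea (the real DepsolveJob fields differ):

    package worker

    import "encoding/json"

    // depsolveJobSketch stands in for DepsolveJob in this sketch.
    type depsolveJobSketch struct {
        PackageSets map[string][]string // new format: repos per package set
        Repos       []string            // old format: global repos for every set
    }

    // MarshalJSON serialises both representations so the payload can be
    // deserialised into either the old or the new format.
    func (j depsolveJobSketch) MarshalJSON() ([]byte, error) {
        type wire struct {
            Repos       []string            `json:"repos,omitempty"`
            PackageSets map[string][]string `json:"package_sets,omitempty"`
        }
        return json.Marshal(wire{Repos: j.Repos, PackageSets: j.PackageSets})
    }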
Move package set chain collation to the distro package and add
repositories to the package sets while returning the package sets from
their source, i.e., the ImageType.PackageSets() method.
This also removes the concept of "base repositories". There are no
longer repositories that are added implicitly to all package sets but
instead each package set needs to specify *all* the repositories it will
be depsolved against.
This paves the way for the requirement we have for building RHEL 7
images with a RHEL 8 build root. The build root package set has to be
depsolved against RHEL 8 repositories without any "base repos" included.
This is now possible since package sets and repositories are explicitly
associated from the start and there is no implicit global repository
set.
The change requires adding a list of PackageSet names to the core
rpmmd.RepoConfig. In the cloud API, repositories that are limited to
specific package sets already contain the correct package set names and
these are now copied to the internal RepoConfig when converting types in
genRepoConfig().
The user-specified repositories are only associated with the payload
package sets like before.
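A sketch of the RepoConfig change described above (other fields
omitted; the JSON tag and example wording are illustrative):

    package rpmmd

    type RepoConfig struct {
        Name    string `json:"name"`
        BaseURL string `json:"baseurl,omitempty"`
        // PackageSets lists the names of the package sets this
        // repository is depsolved with, e.g. only the payload package
        // sets for user-specified repositories.
        PackageSets []string `json:"package_sets,omitempty"`
    }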