Since the `jobStatus` functions return a `JobInfo`
struct that contains the `JobStatus`, it makes sense
to rename the functions for consistency.
The number of return values from the `jobStatus`
function was growing and getting out of hand. Not
all return values were used in every case, so
returning a single struct with a job's information
and status makes more sense. Each caller can then
use only the fields it needs.
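A minimal sketch of the consolidated return type; the field names are
illustrative, not the actual struct in the worker package:

    package main

    import (
        "fmt"
        "time"
    )

    // JobInfo bundles what jobStatus used to return as a growing list
    // of separate values; callers pick out only the fields they need.
    // Field names here are illustrative.
    type JobInfo struct {
        JobType   string
        Deps      []string
        JobStatus JobStatus
    }

    // JobStatus holds the lifecycle timestamps of a job.
    type JobStatus struct {
        Queued   time.Time
        Started  time.Time
        Finished time.Time
        Canceled bool
    }

    func main() {
        // Before: id, jobType, status, queued, started, ... := jobStatus(id)
        // After: one value, read what you need.
        info := JobInfo{JobType: "osbuild", JobStatus: JobStatus{Queued: time.Now()}}
        fmt.Println(info.JobType, info.JobStatus.Queued)
    }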
The main reason is that container resolution should happen in only one
place, the worker, so that there is a single central place to configure
aspects of it, like container credentials.
This is the first step to support embedding container images. Here
we add the `containers []container.Spec` argument to supply images
with resolved container specifications. For now all distros return
an error if a container is actually supplied, since none of them
currently supports embedding containers. NB: no APIs or tools
actually resolve containers yet either.
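A rough sketch of the shape of the change, with a stand-in for
`container.Spec` and an illustrative function name:

    package main

    import "fmt"

    // Spec stands in for container.Spec: a fully resolved container
    // image reference (source, digest, and so on).
    type Spec struct {
        Source string
        Digest string
    }

    // manifest sketches the new signature: callers pass resolved
    // container specs, and distros that cannot embed them yet must
    // reject the request.
    func manifest(imageType string, containers []Spec) error {
        if len(containers) > 0 {
            return fmt.Errorf("embedding containers is not supported for %q", imageType)
        }
        // ... assemble the manifest as before ...
        return nil
    }

    func main() {
        spec := Spec{Source: "registry.example.com/app", Digest: "sha256:abc"}
        fmt.Println(manifest("qcow2", []Spec{spec}))
    }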
Weldr makes assumptions about the names of the package sets. This
does not work in all cases and should be reworked, but for now do
just enough that we don't regress.
The package sets for an image can depend on the blueprint, and
by the same logic there is no reason it should not be able to
depend on the image options.
This is so far a non-functional change, but it makes a follow-up
commit simpler (though still without actually depending on
the image options to compute the package sets).
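A self-contained sketch of the widened signature, with stub types
standing in for the blueprint, the image options, and the package set:

    package main

    import "fmt"

    // Stub types; only the shape of the call matters here.
    type Blueprint struct{ Packages []string }
    type ImageOptions struct{ Size uint64 }
    type PackageSet struct{ Include []string }

    // packageSets sketches the widened signature: the computation may
    // now consult the image options as well as the blueprint.
    func packageSets(bp Blueprint, options ImageOptions) map[string]PackageSet {
        _ = options // not consulted yet; the change is non-functional so far
        return map[string]PackageSet{
            "build":    {Include: []string{"rpm", "osbuild"}},
            "packages": {Include: bp.Packages},
        }
    }

    func main() {
        bp := Blueprint{Packages: []string{"vim"}}
        fmt.Println(packageSets(bp, ImageOptions{Size: 2 * 1024 * 1024 * 1024}))
    }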
The osbuild export is specific to the upload target and different
targets may require using a different export. While osbuild-composer
still does not support multiple exports for osbuild jobs, this prepares
the ground for such support in the future.
Backward compatibility with older implementations of the composer
and workers is kept at the JSON (un)marshaling level, where the JSON
message is always a superset of the old and new ways of providing the
exports to the osbuild job.
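A minimal sketch of such a superset encoding, assuming an illustrative
legacy single-export field alongside the new list (not the actual field
names):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type osbuildJob struct {
        Exports []string
    }

    // osbuildJobJSON is the wire format: a superset of both schemes.
    type osbuildJobJSON struct {
        Export  string   `json:"export,omitempty"`  // legacy, single export
        Exports []string `json:"exports,omitempty"` // new, multiple exports
    }

    func (j osbuildJob) MarshalJSON() ([]byte, error) {
        out := osbuildJobJSON{Exports: j.Exports}
        if len(j.Exports) > 0 {
            out.Export = j.Exports[0] // keep old workers working
        }
        return json.Marshal(out)
    }

    func (j *osbuildJob) UnmarshalJSON(data []byte) error {
        var in osbuildJobJSON
        if err := json.Unmarshal(data, &in); err != nil {
            return err
        }
        j.Exports = in.Exports
        if len(j.Exports) == 0 && in.Export != "" {
            j.Exports = []string{in.Export} // accept messages from old composers
        }
        return nil
    }

    func main() {
        b, _ := json.Marshal(osbuildJob{Exports: []string{"qcow2"}})
        fmt.Println(string(b)) // {"export":"qcow2","exports":["qcow2"]}
    }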
Stop relying on the server interpreting a set `ImageName` in the
`OSBuildJob` as a signal to upload the image back to the worker server,
and add an explicit "Worker Server" upload target to the job.
The `distro` package is now used for distro definitions supported by
osbuild-composer, not for introspecting the host system. Move the
`GetRedHatRelease()` and `GetHostDistroName()` functions to the `common`
package.
Added a CleanCache() method to the solver that deletes all the caches
if their total size grows above a certain (configurable) limit
(default: 500 MiB).
The function is called externally so that errors can be handled by the
caller (usually logged or ignored completely) and to avoid calling it
multiple times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
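A sketch of such a simple all-or-nothing cleanup; the path and limit
below are illustrative:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanCache removes everything under cacheDir if the total size
    // exceeds maxSize bytes. Errors are returned so the caller can
    // decide to log or ignore them.
    func cleanCache(cacheDir string, maxSize uint64) error {
        var total uint64
        err := filepath.Walk(cacheDir, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.IsDir() {
                total += uint64(info.Size())
            }
            return nil
        })
        if err != nil {
            return err
        }
        if total <= maxSize {
            return nil
        }
        entries, err := os.ReadDir(cacheDir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if err := os.RemoveAll(filepath.Join(cacheDir, e.Name())); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        // Called once per request, not once per depsolve.
        if err := cleanCache("/var/cache/osbuild-composer/rpmmd", 500*1024*1024); err != nil {
            fmt.Println("cache cleanup failed:", err)
        }
    }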
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.
Move package set chain collation to the distro package and add
repositories to the package sets while returning the package sets from
their source, i.e., the ImageType.PackageSets() method.
This also removes the concept of "base repositories". There are no
longer repositories that are added implicitly to all package sets but
instead each package set needs to specify *all* the repositories it will
be depsolved against.
This paves the way for a requirement we have: building RHEL 7
images with a RHEL 8 build root. The build root package set has to be
depsolved against RHEL 8 repositories without any "base repos" included.
This is now possible since package sets and repositories are explicitly
associated from the start and there is no implicit global repository
set.
The change requires adding a list of PackageSet names to the core
rpmmd.RepoConfig. In the cloud API, repositories that are limited to
specific package sets already contain the correct package set names and
these are now copied to the internal RepoConfig when converting types in
genRepoConfig().
The user-specified repositories are only associated with the payload
package sets like before.
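A sketch of the explicit association; the PackageSets field name comes
from the description above, while the helper is illustrative:

    package main

    import "fmt"

    // RepoConfig sketches rpmmd.RepoConfig with the new PackageSets
    // field: the names of the package sets this repository is depsolved
    // with.
    type RepoConfig struct {
        Name        string
        BaseURL     string
        PackageSets []string
    }

    // reposFor picks the repositories explicitly associated with a
    // package set; there is no implicit "base repo" fallback any more.
    func reposFor(setName string, repos []RepoConfig) []RepoConfig {
        var out []RepoConfig
        for _, r := range repos {
            for _, ps := range r.PackageSets {
                if ps == setName {
                    out = append(out, r)
                    break
                }
            }
        }
        return out
    }

    func main() {
        repos := []RepoConfig{
            {Name: "rhel8-baseos", PackageSets: []string{"build"}},
            {Name: "rhel7-base", PackageSets: []string{"blueprint"}},
        }
        // The build root depsolves only against RHEL 8 repos, the
        // payload only against RHEL 7 repos.
        fmt.Println(reposFor("build", repos))
    }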
Attach the repository configurations that are specific to a package set
directly on the PackageSet object. This simplifies the Depsolve()
signature and avoids requiring a `nil` when no additional repositories
are required. More importantly, it makes associating repositories to
package sets explicit, no longer relying on matching array indices or
map keys.
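A self-contained sketch of the simplified shape, with illustrative
names:

    package main

    import "fmt"

    type RepoConfig struct{ Name, BaseURL string }

    // PackageSet carries its own repositories, so callers no longer
    // pass parallel repo arguments (or nil placeholders) to Depsolve.
    type PackageSet struct {
        Include      []string
        Exclude      []string
        Repositories []RepoConfig
    }

    // depsolve sketches the simplified signature: everything it needs
    // is attached to the package sets themselves.
    func depsolve(sets []PackageSet) error {
        for _, s := range sets {
            if len(s.Repositories) == 0 {
                return fmt.Errorf("package set with %d packages has no repositories", len(s.Include))
            }
        }
        // ... call dnf-json ...
        return nil
    }

    func main() {
        os := PackageSet{
            Include:      []string{"@core", "vim"},
            Repositories: []RepoConfig{{Name: "baseos"}},
        }
        fmt.Println(depsolve([]PackageSet{os}))
    }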
Remove the single Depsolve function from the dnfjson package and the
depsolve command from the dnf-json tool. The new ChainDepsolve
functions and chain-depsolve command can handle single depsolves in the
same way so there's no need to keep (and have to maintain) two versions
of very similar code.
The ChainDepsolve function (in Go) and chain-depsolve command (in
Python) have been renamed to plain Depsolve and depsolve respectively,
since they are now general purpose depsolve functions.
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver is used
to create an initialised (configured) solver via the NewWithConfig()
method, passing the platform variables (module platform ID, release
version, and arch), before running a Depsolve() or FetchMetadata().
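In code, the flow looks roughly like this (a fragment; the names are
taken from the description above, and exact signatures may differ):

    // One BaseSolver per server: it carries the cache directory and
    // loads the system repository credentials, like rpmmd.RPMMD did.
    base := dnfjson.NewBaseSolver("/var/cache/osbuild-composer/rpmmd")

    // Per request: bind the platform variables via NewWithConfig(),
    // then run the depsolve.
    solver := base.NewWithConfig("platform:el8", "8", "x86_64")
    pkgs, err := solver.Depsolve(pkgSets)
    if err != nil {
        return err
    }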
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Since we now use the actual dnfjson
code to convert the mock response to the internal structure, the
repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
The VMDK image is already produced as stream-optimized. Therefore stop
setting the `StreamOptimized` option in the `OSBuildJob` structure in
both the Weldr and Cloud APIs.
Keep handling the option in the worker for backward compatibility, in
case an older Composer server instance is used that does not produce
VMDK manifests as stream-optimized. In such a case, the worker needs
to convert the image.
Use `rpmmd.DepsolvePackageSets()` in Weldr API compose request handler,
instead of `rpmmd.Depsolve()`.
Extract common code from `API.allRepositories()` and
`API.allRepositoriesByImageType()` to a new method
`API.payloadRepositories()`.
Modify `API.allRepositoriesByImageType()` to return payload repositories
(repositories defined by user) as a separate slice to enable the use of
`rpmmd.DepsolvePackageSets()`, which requires the package-set-specific
repositories to be passed separately.
Keep using `rpmmd.Depsolve()` in Weldr where appropriate. The
implementation depsolves various simple package sets for multiple API
request handlers and it does not make sense to complicate the code by
moving to `rpmmd.DepsolvePackageSets()`.
Call `Shutdown()` on all HTTP servers. This means we finish processing
any pending requests (including depsolving) but stop listening for new
ones.
In particular, we will not answer the readiness probe, so no new traffic
will be routed to this container.
Once all pending requests have been handled, composer shuts down
gracefully and the liveness probe returns failure.
Note that for this to work correctly, no request should ever take longer
than the shutdown timeout (30s by default).
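A minimal sketch of the pattern with Go's standard library; the address
and timeout are illustrative:

    package main

    import (
        "context"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}
        go func() {
            _ = srv.ListenAndServe() // returns http.ErrServerClosed on Shutdown
        }()

        // Wait for the termination signal.
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGTERM, os.Interrupt)
        <-sig

        // Stop accepting new connections (readiness probe included) but
        // let in-flight requests, e.g. running depsolves, finish. They
        // must complete within the timeout.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        _ = srv.Shutdown(ctx)
    }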
The `API.arch` member was (mostly) used to read the name of the
architecture.
The only non-name use was for the purposes of reading RPM repositories
from the configuration, in `reporegistry.ReposByArch()`, a thin wrapper
around `reporegistry.ReposByArchName()`.
Removing the `arch` member from the API and using the new `archName`
that is set up in the API constructor lets us control the arch name that
is set without relying on a valid `distro.Arch` object being available
(which would depend on having a valid `distro.Distro` object).
Replaced all calls to `ReposByArch()` with `ReposByArchName()` which
depends on the arch and distro name strings instead of a full
`distro.Arch`.
When the host distribution is not known or supported, instead of failing
with an error, print a warning to the log and initialise the API with
the architecture name and distro name.
This enables running the weldr API on unsupported distros for
cross-distro building.
This also guards against a nil arch member when initialising the store.
Channels are a concept similar to job types. Callers must specify a
channel name when enqueueing a new job. A list of channels is also
specified when dequeueing a job. The dequeued job's channel will always
be one of the specified channels. Of course, the job types are also
respected: the dequeued job will always be of one of the specified
types.
For now, all calls to jobqueue have been changed so that all enqueue
operations use an empty channel name and all dequeue operations use a
list containing only the empty channel.
Thus, this is a non-functional change.
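The shape of the widened interface, approximated from the description
above (not the exact signatures):

    package jobqueue

    import (
        "context"

        "github.com/google/uuid"
    )

    // JobQueue sketches the channel-aware operations; the dequeued job
    // always matches one of the requested types AND one of the
    // requested channels.
    type JobQueue interface {
        Enqueue(jobType string, args interface{}, deps []uuid.UUID, channel string) (uuid.UUID, error)
        Dequeue(ctx context.Context, jobTypes, channels []string) (uuid.UUID, interface{}, error)
    }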
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
No need to pass the entire image type. We just need the default ref.
This removes the distro package dependency from the ostree package,
which we will need so that distro can use the ostree types and
functions.
Fix the cancel API to allow a waiting compose to be canceled.
This also fixes the cancel return code to be 400; the lorax-composer
behavior was a bug, and using 400 allows composer-cli to properly
display the error.
When the server is restarted the blueprint changes, which are only held
in memory, are lost. This checks for missing changes and returns an
error.
The test is also adjusted for the new error.
Related: rhbz#1922845
Replace Job() and JobStatus() with typesafe versions, and introduce JobType()
for the rare instances where we don't know the type up front.
Additionally, catch a few more error cases:
- if OSBuildResult is nil, then we failed to invoke osbuild
- make sure the same JobResult handling is done for osbuild-koji, as for osbuild
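A minimal sketch of the typesafe pattern, assuming raw results are
stored as JSON (all names illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // osbuildJobResult stands in for a concrete, typed job result.
    type osbuildJobResult struct {
        Success bool `json:"success"`
    }

    // jobResult sketches what a typesafe accessor does internally: the
    // caller supplies the concrete type and the raw JSON result is
    // unmarshaled into it. A nil raw result means osbuild was never
    // invoked, which is now reported as an error.
    func jobResult(raw json.RawMessage, result interface{}) error {
        if raw == nil {
            return fmt.Errorf("job has no result")
        }
        return json.Unmarshal(raw, result)
    }

    func main() {
        var res osbuildJobResult
        if err := jobResult(json.RawMessage(`{"success":true}`), &res); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("success:", res.Success)
    }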
Pipeline names are added to each job before adding to the queue. When a
job is finished, the names are copied to the Result object as well. This
is done for both OSBuild and Koji jobs.
The pipeline names in the result are primarily used to separate package
lists into build and payload/image packages in two cases:
1. Koji builds: for reporting the build root and image package lists to
Koji (in Koji finalize).
2. Cloud API (v1 and v2): for reporting the payload packages in the
metadata request.
The pipeline names are also used to print the system log output in the
order in which pipelines are executed. This still isn't used when
printing the OSBuild Result (osbuild2.Result.Write()) and we still rely
on sorting by pipeline name
(see https://github.com/osbuild/osbuild-composer/pull/1330).
Reading stage metadata using osbuild's v2 result format.
For RPM stages we only want the core (OS) RPMs (not the build root
RPMs). Skip the build pipeline by name, but this should be handled
better since names are arbitrary.
Using a type switch to convert metadata types instead of relying on the
type string of the stage result.
The rpmmd helper function isn't used anymore, since that would require
two conversion passes (osbuild.StageMetadata -> rpmmd.RPM ->
cloudapi.PackageMetadata).
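A sketch of the type-switch approach, with stand-ins for osbuild's v2
metadata types:

    package main

    import "fmt"

    // Stand-ins for the osbuild v2 result metadata types.
    type RPMStageMetadata struct{ Packages []string }
    type OSTreeCommitStageMetadata struct{ Checksum string }

    // collectPackages switches on the concrete metadata type instead of
    // matching the stage's type string.
    func collectPackages(md interface{}) []string {
        switch m := md.(type) {
        case *RPMStageMetadata:
            return m.Packages
        case *OSTreeCommitStageMetadata:
            return nil // not an RPM stage
        default:
            return nil
        }
    }

    func main() {
        md := &RPMStageMetadata{Packages: []string{"glibc-2.28", "bash-4.4"}}
        fmt.Println(collectPackages(md))
    }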
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
The variables are set to the git revision from which the build is
triggered and to the RPM version from the spec file, if it is built
using RPM.
This can later be used to query the exact source version of a running
osbuild-composer.
It is necessary to use both, because neither of them is available in
all possible scenarios.
Use either the git revision (preferably) or the RPM version (NEVRA)
instead of the "devel" build type, which was just a placeholder.
The problem: osbuild-composer used to have rather incomplete logic for
selecting client certificates and keys while fetching data from
repositories that use the "subscription model". In this scenario, every
repo requires the user to use a client-side TLS certificate. The problem
is that every repo can use its own CA and require a different pair of
a certificate and a key. This case wasn't handled at all in composer.
Furthermore, osbuild-composer can use remote workers which complicates
things even more.
Assumptions: The problem outlined above is hard to solve in the general
case, but Red Hat Subscription Manager places certain limitations on how
subscriptions might be used. For example, a subscription must be tied to
a host system, so there is no way to use such a repository in
osbuild-composer without it being available on the host system as well.
Also, if a user wishes to use a certain repository in osbuild-composer,
it must be available on both hosts: the composer and the worker. It will
come with a different client certificate and key pair, but otherwise its
configuration remains the same.
The solution: Expect all the subscriptions to be registered in the
/etc/yum.repos.d/redhat.repo file. Read the mapping of URLs to certificates
and keys from there and use it. Don't change the manifest format and let
osbuild guess the appropriate subscription to use.
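A minimal sketch of reading that mapping; a real parser must handle
more of the INI format than this:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // subscription maps a repo's base URL to its TLS client credentials.
    type subscription struct {
        BaseURL, SSLClientCert, SSLClientKey, SSLCACert string
    }

    // parseRepoFile scans /etc/yum.repos.d/redhat.repo and collects
    // baseurl/sslclientcert/sslclientkey/sslcacert per repo section.
    func parseRepoFile(path string) ([]subscription, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        var subs []subscription
        cur := -1
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
                subs = append(subs, subscription{})
                cur++
                continue
            }
            parts := strings.SplitN(line, "=", 2)
            if len(parts) != 2 || cur < 0 {
                continue
            }
            key, value := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
            switch key {
            case "baseurl":
                subs[cur].BaseURL = value
            case "sslclientcert":
                subs[cur].SSLClientCert = value
            case "sslclientkey":
                subs[cur].SSLClientKey = value
            case "sslcacert":
                subs[cur].SSLCACert = value
            }
        }
        return subs, scanner.Err()
    }

    func main() {
        subs, err := parseRepoFile("/etc/yum.repos.d/redhat.repo")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, s := range subs {
            fmt.Println(s.BaseURL, "->", s.SSLClientCert)
        }
    }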
Allow globbing patterns in the distro-specific image type deny list of
the Weldr API configuration. Extend unit tests to verify simple globbing
patterns.
Update NEWS entry.
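A sketch of the matching behaviour using path.Match; the config shape
is illustrative:

    package main

    import (
        "fmt"
        "path"
    )

    // denied reports whether an image type is denied for a distro,
    // matching both names against the configured glob patterns; "*"
    // matches anything.
    func denied(denylist map[string][]string, distro, imageType string) bool {
        for distroPattern, imageTypes := range denylist {
            if ok, _ := path.Match(distroPattern, distro); !ok {
                continue
            }
            for _, typePattern := range imageTypes {
                if ok, _ := path.Match(typePattern, imageType); ok {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        denylist := map[string][]string{
            "rhel-*": {"ec2*"}, // deny all EC2 variants on all RHEL versions
        }
        fmt.Println(denied(denylist, "rhel-85", "ec2-ha")) // true
    }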
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Refactor the `composeHandler()` method to send the actual error
returned by `getImageType()` as an API response.
Modify tests to handle the changed error message in API calls.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Rename the `checkImageTypeDenylist()` method to `isImageTypeAllowed()`
and return a boolean value instead of an error.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Change the Image Type denylist in Weldr API from being applied to all
distributions to being distribution-specific. A special name `*`
can be used in the configuration to match any distribution
or any image type.
Modify NEWS entry and unit tests to reflect this change.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the Weldr API to accept a list of denied image types, which
should not be exposed via the API for any supported distribution. This
functionality will be needed to avoid exposing image types which can't
be successfully built outside of the Red Hat VPN. Examples of such
images are the official RHEL EC2 images, which include RHUI client
packages not available publicly.
Image types are filtered when listing available compose types and when
creating a new compose using the Weldr API.
Extend osbuild-composer configuration to allow specifying the list of
denied Image Types for Weldr API.
Add unit tests for implemented changes.
Add NEWS entry describing the newly introduced functionality.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Some distributions do not have repositories and therefore cannot be
built. This filters the list of supported distributions by checking for
repos when starting up. All other requests use the api.distros list or
api.getDistro() function.
The name of the distro you get from distros.FromHost() may not match any of
the names in the registry's list. Use the actual name of the distro
instead of the mangled name.
Also removes api.distro, which is unused.