Add a new generic container registry client via a new `container`
package. Use this to create a command line utility as well as a
new upload target for container registries.
The code uses the github.com/containers/* project and its packages to
interact with container registries; these are also used by skopeo,
podman et al. One of the dependencies is `proglottis/gpgme`, which
uses cgo to bind libgpgme, so we have to add the corresponding
devel package to the BuildRequires as well as install it on CI.
Checks will follow later via an integration test.
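For illustration, a minimal sketch of fetching a manifest with the
containers/image library (the reference string, function name and
error handling are illustrative; this is not the actual API of the
new `container` package):

    import (
        "context"

        "github.com/containers/image/v5/docker"
        "github.com/containers/image/v5/types"
    )

    // fetchManifest returns the raw manifest and its MIME type for an
    // image reference of the form "//registry.example.org/repo:tag".
    func fetchManifest(ctx context.Context, refStr string) ([]byte, string, error) {
        ref, err := docker.ParseReference(refStr)
        if err != nil {
            return nil, "", err
        }
        src, err := ref.NewImageSource(ctx, &types.SystemContext{})
        if err != nil {
            return nil, "", err
        }
        defer src.Close()
        return src.GetManifest(ctx, nil)
    }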
Define supported job type names as constants and use them in all places,
instead of string literals.
There are multiple benefits to this approach. Using constants removes
the room for typos in the string literals. One can use autocompletion
in an IDE for job types. Using constants also makes it easier to find
all references to a specific job type and thus all places that handle
it.
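A sketch of the idea (the constant names and values below are
illustrative, not necessarily the exact ones introduced here):

    // Job type names used when enqueueing and dequeueing jobs.
    const (
        JobTypeOSBuild      = "osbuild"
        JobTypeOSBuildKoji  = "osbuild-koji"
        JobTypeKojiInit     = "koji-init"
        JobTypeKojiFinalize = "koji-finalize"
    )

Callers then reference these constants instead of repeating the string
literals.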
The `distro` package is now used for distro definitions supported by
osbuild-composer, not for introspecting the host system. Move the
`GetRedHatRelease()` and `GetHostDistroName()` functions to the
`common` package.
Use case
--------
If the Endpoint is not set and the Region is - upload to AWS S3
If both the Endpoint and the Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (use the configuration)
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
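A sketch of what the added configuration could look like (the field
and tag names here are illustrative, not necessarily the exact ones
used):

    // Generic S3 settings read from the worker configuration file.
    type genericS3Config struct {
        Endpoint            string `toml:"endpoint"`
        Region              string `toml:"region"`
        Bucket              string `toml:"bucket"`
        CABundle            string `toml:"ca_bundle"`
        SkipSSLVerification bool   `toml:"skip_ssl_verification"`
    }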
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Added CleanCache() method to the solver that deletes all the caches if
the total size grows above a certain (configurable) limit
(default: 500 MiB).
The function is called externally so that the caller can handle errors
(usually log or ignore them completely) and to avoid calling it
multiple times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
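A sketch of the intended call site, assuming CleanCache() returns an
error (the solver variable and log message are illustrative):

    // Run cache cleanup once per request, after all depsolves for the
    // request have finished; failures are only logged.
    if err := solver.CleanCache(); err != nil {
        log.Printf("error during rpm cache cleanup: %v", err)
    }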
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.
Move package set chain collation to the distro package and add
repositories to the package sets while returning the package sets from
their source, i.e., the ImageType.PackageSets() method.
This also removes the concept of "base repositories". There are no
longer repositories that are added implicitly to all package sets but
instead each package set needs to specify *all* the repositories it will
be depsolved against.
This paves the way for the requirement we have for building RHEL 7
images with a RHEL 8 build root. The build root package set has to be
depsolved against RHEL 8 repositories without any "base repos" included.
This is now possible since package sets and repositories are explicitly
associated from the start and there is no implicit global repository
set.
The change requires adding a list of PackageSet names to the core
rpmmd.RepoConfig. In the cloud API, repositories that are limited to
specific package sets already contain the correct package set names and
these are now copied to the internal RepoConfig when converting types in
genRepoConfig().
The user-specified repositories are only associated with the payload
package sets like before.
Attach the repository configurations that are specific to a package set
directly on the PackageSet object. This simplifies the Depsolve()
signature and avoids requiring a `nil` when no additional repositories
are required. More importantly, it makes associating repositories to
package sets explicit, no longer relying on matching array indices or
map keys.
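A rough sketch of the resulting shape (field names are illustrative):

    // A package set now carries the repositories it is depsolved
    // against, so Depsolve() no longer needs a separate repository
    // argument.
    type PackageSet struct {
        Include      []string
        Exclude      []string
        Repositories []RepoConfig
    }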
Remove the single Depsolve function from the dnfjson package and the
depsolve command from the dnf-json tool. The new ChainDepsolve
functions and chain-depsolve command can handle single depsolves in the
same way so there's no need to keep (and have to maintain) two versions
of very similar code.
The ChainDepsolve function (in Go) and chain-depsolve command (in
Python) have been renamed to plain Depsolve and depsolve respectively,
since they are now general purpose depsolve functions.
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver is used
to create an initialised (configured) solver with the platform variables
(module platform ID, release ver, and arch) before running a Depsolve()
or FetchMetadata() using the NewWithConfig() method.
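Usage roughly follows this pattern (names and argument order are
illustrative):

    // The BaseSolver carries the cache directory and the system
    // repository credentials; a configured solver is created per
    // distro/arch before depsolving.
    baseSolver := dnfjson.NewBaseSolver(cacheDir)
    solver := baseSolver.NewWithConfig(modulePlatformID, releaseVer, arch)
    pkgSpecs, err := solver.Depsolve(packageSets)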
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Since now we're using the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
Keeping the expected responses in a separate file and formatted makes
them easier to read, write, and update.
This commit doesn't move all the responses. It focuses on the ones that
are the hardest to work with (the ones that are thousands of characters
long).
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
API
---
Allow the user to pass the CA public certificate or skip the verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets the endpoint and optionally overrides the CA Bundle or skips the SSL certificate verification
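With the v1 AWS SDK for Go this roughly amounts to configuring a
custom HTTP client for the session (a sketch of the function body;
variable names are illustrative and `creds` is assumed to already hold
the static credentials):

    tlsCfg := &tls.Config{InsecureSkipVerify: skipSSLVerification}
    if caBundle != "" {
        pem, err := os.ReadFile(caBundle)
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(pem)
        tlsCfg.RootCAs = pool
    }
    sess, err := session.NewSession(&aws.Config{
        Credentials: creds,
        Endpoint:    aws.String(endpoint),
        HTTPClient: &http.Client{
            Transport: &http.Transport{TLSClientConfig: tlsCfg},
        },
    })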
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into http, https with certificate and https skip certificate check
Create a new base test for S3 over HTTPS for secure and insecure
Move the generic S3 test to tools to reuse for secure and insecure connections
All S3 tests now use the aws cli tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
The VMDK image is already produced as stream-optimized. Therefore stop
setting the `StreamOptimized` option in the `OSBuildJob` structure from
both the Weldr and Cloud APIs.
Keep the handling of the option in worker for backward compatibility,
in case an older instance of Composer server is used, which does not
produce VMDK manifests as stream-optimized. In such a case, the worker
needs to convert the image.
Use `rpmmd.DepsolvePackageSets()` in Weldr API compose request handler,
instead of `rpmmd.Depsolve()`.
Extract common code from `API.allRepositories()` and
`API.allRepositoriesByImageType()` to a new method
`API.payloadRepositories()`.
Modify `API.allRepositoriesByImageType()` to return payload repositories
(repositories defined by user) as a separate slice to enable the use of
`rpmmd.DepsolvePackageSets()`, which requires the package-set-specific
repositories to be passed separately.
Keep using `rpmmd.Depsolve()` in Weldr where appropriate. The
implementation depsolves various simple package sets for multiple API
request handlers and it does not make sense to complicate the code by
moving to `rpmmd.DepsolvePackageSets()`.
Return dummy values from the following methods:
- PackageSets
- PayloadPackageSets
- PackageSetsChains
Use package set names commonly used by recent distro definitions.
Package sets are based on values used by rpmmd mock implementation.
Adjust two Weldr API unit test checks for the dummy values. Without
this fix, these unit tests would start failing after the move to
`rpmmd.DepsolvePackageSets()` in the Weldr API compose handler.
Add support for importing the GCE image into GCP using Weldr API. The
credentials to be used can be specified in the upload settings and will
then be used by the worker to authenticate with GCP.
The GCP target credentials are passed to Weldr API as base64 encoded
content of the GCP credentials JSON file. The reason is that the JSON
file contains many values and its format could change in the future.
This way, the Weldr API does not rely on the credentials file content
format in any way.
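On the client side this is just standard base64 encoding of the file
content, e.g. (the path and variable names are illustrative):

    raw, err := os.ReadFile("/path/to/gcp-credentials.json")
    if err != nil {
        log.Fatal(err)
    }
    // `encoded` is the value placed into the GCP upload settings.
    encoded := base64.StdEncoding.EncodeToString(raw)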
Add a new test case for the GCP upload via Weldr and run it in CI.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
jobimpl-osbuild
---------------
Add GenericS3Creds to struct
Add method to create AWS with Endpoint for Generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint to the AWS session
Set s3ForcePathStyle to true if endpoint was set
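Setting the endpoint and forcing path-style addressing in the v1 AWS
SDK for Go roughly looks like this (a sketch, with cfg being the
*aws.Config used for the session):

    if endpoint != "" {
        // Generic S3 services are usually addressed by path rather
        // than by virtual-hosted-style bucket subdomains.
        cfg.Endpoint = aws.String(endpoint)
        cfg.S3ForcePathStyle = aws.Bool(true)
    }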
Target
------
Define a new target type for the GenericS3Target and Options
Handle unmarshaling of the target options and result for the Generic S3
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of each provider's upload settings
main
----
Add a section in the main config for the Generic S3 service for credentials
If provided, pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
--------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLabCI - Extend the libvirt test case to include the two new tests
The directory created by `T.TempDir` is automatically removed when the
test and all its subtests complete.
Reference: https://pkg.go.dev/testing#T.TempDir
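Typical usage (the test name and file contents are illustrative):

    import (
        "os"
        "path/filepath"
        "testing"
    )

    func TestSomething(t *testing.T) {
        dir := t.TempDir() // removed automatically when the test finishes
        path := filepath.Join(dir, "example.txt")
        if err := os.WriteFile(path, []byte("data"), 0600); err != nil {
            t.Fatal(err)
        }
    }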
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Call `Shutdown()` on all http servers. This means we will finish processing
any pending requests (including depsolving), but we will not listen to new
ones.
In particular, we will not answer to the readiness probe, so no new traffic
will be routed to this container.
Once all pending requests have been handled composer will shut down
gracefully and the liveness probe will return failure.
Note that in order for this to work correctly no requests should ever take longer
than the shutdown timeout (by default 30s).
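The graceful-shutdown pattern for a net/http server looks roughly like
this (the timeout value and variable names are illustrative; srv is a
*http.Server):

    // Stop accepting new connections and wait for in-flight requests,
    // but never longer than the shutdown timeout.
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    if err := srv.Shutdown(ctx); err != nil {
        log.Printf("server shutdown: %v", err)
    }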
The `API.arch` member was (mostly) used to read the name of the
architecture.
The only non-name use was for the purposes of reading RPM repositories
from the configuration, in `reporegistry.ReposByArch()`, a thin wrapper
around `reporegistry.ReposByArchName()`.
Removing the `arch` member from the API and using the new `archName`
that is set up in the API constructor lets us control the arch name that
is set without relying on a valid `distro.Arch` object being available
(which would depend on having a valid `distro.Distro` object).
Replaced all calls to `ReposByArch()` with `ReposByArchName()` which
depends on the arch and distro name strings instead of a full
`distro.Arch`.
When the host distribution is not known or supported, instead of failing
with an error, print a warning to the log and initialise the API with
the architecture name and distro name.
This enables running the weldr API on unsupported distros for
cross-distro building.
This also guards against a nil arch member when initialising the store.
Channels are a concept similar to job types. Callers must specify a channel
name when queueing a new job. A list of channels is also specified when
dequeueing a job. The dequeued job's channel will always be one of the
specified channels. Of course, the job types are also respected: the
dequeued job will also always be of one of the specified types.
For now, all calls to jobqueue have been changed so that all queue
operations use an empty channel name and all dequeue operations use a
list containing an empty channel.
Thus, this is a non-functional change.
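A purely hypothetical sketch of the shape of the change (the real
jobqueue interface is not reproduced here; names, parameters and
return values are illustrative only):

    // JobQueue is a hypothetical interface illustrating where the
    // channel arguments fit in.
    type JobQueue interface {
        // Enqueue takes the channel the new job belongs to.
        Enqueue(jobType string, args interface{}, deps []uuid.UUID, channel string) (uuid.UUID, error)
        // Dequeue takes the list of channels the caller is willing to serve.
        Dequeue(ctx context.Context, channels, jobTypes []string) (uuid.UUID, json.RawMessage, error)
    }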
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
No need to pass the entire image type. We just need the default ref.
This removes the distro package dependency from the ostree package,
which we will need so distro can use the ostree types and functions.
Adding three more combinations that weren't covered by previous tests:
- Supplying ref, parent, and URL: should result in an error
- Supplying ref and parent: OK
- Supplying parent and URL: same as first case (ref gets default value
from image type)
Added a default OSTreeRef() to the test image type to cover the cases
where the ref isn't specified but affects the validation.
Separated and commented the test cases.
Fix the cancel API to allow a waiting compose to be canceled.
This also fixes the cancel return code to be 400; the lorax-composer
behavior was a bug, and using 400 allows composer-cli to properly
display the error.
When the server is restarted the blueprint changes, which are only held
in memory, are lost. This checks for missing changes and returns an
error.
The test is also adjusted for the new error.
Related: rhbz#1922845
The blueprint name should never be empty, as an empty name can cause
other problems, e.g. with the blueprints list results. Return an error
if one is pushed to the store, either as a blueprint commit or as a
blueprint workspace.
Also adjusts the new test for the new error.
Related: rhbz#1922845
There is a problem with blueprint changes: once the server is restarted
the previous changes are all lost because they are not serialized to
disk.
This adds test fixture support so that new tests can be added before
fixing the problem. It adds store.FixtureOldChanges with blueprints
changes and empty blueprints.
Related: rhbz#1922845
Replace Job() and JobStatus() with typesafe versions, and introduce JobType()
for the rare instances where we don't know the type up front.
Additionally, catch a few more error cases:
- if OSBuildResult is nil, then we failed to invoke osbuild
- make sure the same JobResult handling is done for osbuild-koji as for osbuild
Since the workers will use structured error messages
going forward, it is necessary to maintain backward
compatibility for these errors in composer. Tests have
been added to the various APIs to ensure that each API
checks for both kinds of errors, old and new.
Pipeline names are added to each job before adding to the queue. When a
job is finished, the names are copied to the Result object as well. This
is done for both OSBuild and Koji jobs.
The pipeline names in the result are primarily used to separate package
lists into build and payload/image packages in two cases:
1. Koji builds: for reporting the build root and image package lists to
Koji (in Koji finalize).
2. Cloud API (v1 and v2): for reporting the payload packages in the
metadata request.
The pipeline names are also used to print the system log output in the
order in which pipelines are executed. This still isn't used when
printing the OSBuild Result (osbuild2.Result.Write()) and we still rely
on sorting by pipeline name
(see https://github.com/osbuild/osbuild-composer/pull/1330).
Reading stage metadata using osbuild's v2 result format.
For RPM stages we only want the core (OS) RPMs (not the build root
RPMs). Skip the build pipeline by name, but this should be handled
better since names are arbitrary.
Using a type switch to convert metadata types instead of relying on the
type string of the stage result.
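The conversion looks roughly like this (the metadata type, field and
helper names are illustrative):

    var packages []PackageMetadata
    for _, md := range stageMetadata {
        switch md := md.(type) {
        case *osbuild2.RPMStageMetadata:
            for _, pkg := range md.Packages {
                packages = append(packages, packageMetadataFromRPM(pkg))
            }
        default:
            // stages without package metadata are skipped
        }
    }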
The rpmmd helper function isn't used anymore since that requires two
conversion passes (osbuild.StageMetadata -> rpmmd.RPM ->
cloudapi.PackageMetadata).
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
The variables are set to the git revision from which the build is
triggered and to the RPM version from the spec file, if it is built
using RPM. This can later be used to query the exact source version of
a running osbuild-composer.
It is necessary to use both, because neither of them is available in
all possible scenarios.
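A sketch of how such variables are typically wired up (the package and
variable names are illustrative); the values are injected at build
time via -ldflags, or set by the RPM spec file during an RPM build:

    package common

    // Overridden at build time, e.g.:
    //   go build -ldflags "-X <module>/internal/common.GitRev=$(git rev-parse HEAD)"
    var (
        GitRev     = "undefined"
        RpmVersion = "undefined"
    )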
Use either git-rev (preferably) or RPM version (NEVRA) instead of the
"devel" build type. It was just a placeholder.