In commit 5c1530e we disabled `skx_edac` and `intel_cstate` but
after further consultation with Prarit Bhargava it was agreed that
for RHEL 9 we should indeed allow them.
Instead of deleting records, delete the results from the manifest and
depsolve jobs. This redacts sensitive data that the manifest can
contain and also conserves space.
Use case
--------
If Endpoint is not set and Region is - upload to AWS S3
If both Endpoint and Region are set - upload to Generic S3 via the Weldr API
If neither Endpoint nor Region is set - upload to Generic S3 via the Composer API (use the configuration)
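Roughly, the selection looks like this (an illustrative sketch with a made-up helper; the real code works with the upload target and options structures rather than a string result):

package upload

// selectS3Target is illustrative only; it mirrors the three cases above.
func selectS3Target(endpoint, region string) string {
    switch {
    case endpoint == "" && region != "":
        return "aws.s3" // plain AWS S3 upload
    case endpoint != "" && region != "":
        return "generic.s3 via the Weldr API"
    case endpoint == "" && region == "":
        return "generic.s3 via the Composer API (connection parameters from the configuration)"
    default:
        return "" // endpoint set without a region: not covered by the cases above
    }
}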
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Added CleanCache() method to the solver that deletes all the caches if
the total size grows above a certain (configurable) limit
(default: 500 MiB).
The function is called externally so that errors can be handled by the
caller (usually logged or ignored entirely) and to avoid calling it
multiple times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
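A minimal sketch of the idea (made-up type and field names, not the actual solver code):

package dnfjson

import (
    "os"
    "path/filepath"
)

// cacheCleaner is an illustrative stand-in for the real solver type; only
// the fields needed for the cleanup are shown.
type cacheCleaner struct {
    cacheDir     string
    maxCacheSize uint64 // configurable limit, default 500 MiB
}

// CleanCache deletes the whole cache directory when its total size grows
// above the configured limit.
func (c *cacheCleaner) CleanCache() error {
    size, err := dirSize(c.cacheDir)
    if err != nil {
        return err
    }
    if size > c.maxCacheSize {
        return os.RemoveAll(c.cacheDir)
    }
    return nil
}

// dirSize sums the sizes of all regular files under root.
func dirSize(root string) (uint64, error) {
    var total uint64
    err := filepath.Walk(root, func(_ string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if info.Mode().IsRegular() {
            total += uint64(info.Size())
        }
        return nil
    })
    return total, err
}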
The value doesn't represent the worker name, just the top-level cache
directory for a job. It's useful for separating caches and making the
generation faster, but it doesn't need to be returned from the function.
This test was removed because package sets in chains are no longer
visible in the map returned from ImageType.PackageSets().
Bringing back the test now to ensure that:
1. All package set names defined in the keys returned from the
PackageSets() map match the keys returned from the
PackageSetsChains() map.
2. All package sets defined in the package set chains are defined for
the image type. This is tested by the PackageSets() function itself,
which should never panic.
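A sketch of the check, written against plain maps so it doesn't assume the exact distro API types:

package distro_test

import "testing"

// checkChains is a sketch: every chain key and every package set named in a
// chain must also be a key of the map returned by PackageSets().
func checkChains(t *testing.T, packageSets map[string]struct{}, chains map[string][]string) {
    for chainName, chain := range chains {
        if _, ok := packageSets[chainName]; !ok {
            t.Errorf("chain %q has no matching key in PackageSets()", chainName)
        }
        for _, setName := range chain {
            if _, ok := packageSets[setName]; !ok {
                t.Errorf("package set %q in chain %q is not defined for the image type", setName, chainName)
            }
        }
    }
}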
They were originally added as convenience functions for single-case
calls, but they're not that useful and they have a million function
arguments, which isn't pretty.
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.
- Standalone executable for generating all test manifests in parallel.
- Command line flags:
- Output directory (-output)
- Number of concurrent workers (-workers)
- Collects list of image types from the distro list and reads:
- tools/test-case-generators/repos.json for repositories
- tools/test-case-generators/format-request-map.json for
customizations
- Prints progress (finished/total)
- Collects errors and failures and prints them after all jobs are
finished
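The worker-pool structure, as a rough skeleton (collectJobs and generate are placeholders for the real distro/repo handling and manifest generation, and the flag defaults here are not the real ones):

package main

import (
    "flag"
    "fmt"
    "sync"
)

func main() {
    output := flag.String("output", "", "output directory for the generated manifests")
    workers := flag.Int("workers", 8, "number of concurrent workers")
    flag.Parse()

    jobs := collectJobs()
    jobChan := make(chan string)
    results := make(chan error)

    var wg sync.WaitGroup
    for i := 0; i < *workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobChan {
                results <- generate(job, *output)
            }
        }()
    }
    go func() {
        for _, job := range jobs {
            jobChan <- job
        }
        close(jobChan)
        wg.Wait()
        close(results)
    }()

    finished := 0
    var failures []error
    for err := range results {
        finished++
        fmt.Printf("finished %d/%d\n", finished, len(jobs))
        if err != nil {
            failures = append(failures, err)
        }
    }
    // print collected errors and failures after all jobs are done
    for _, err := range failures {
        fmt.Println("FAIL:", err)
    }
}

func collectJobs() []string                { return nil } // placeholder
func generate(job, outputDir string) error { return nil } // placeholder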
Test the conversion of the new and old DepsolveJob given the custom
marshaller.
The deserialised old format is not exactly the same as it would have
been before, but it is functionally equivalent, with the added benefit
of supporting depsolve jobs where we don't want base repositories to be
used by all depsolves.
Add custom marshaller for DepsolveJob that serialises the struct into a
format compatible with both the new and old formats. The format on the
wire is a superset of both the new and old format and can be
deserialised into either while retaining all information.
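The pattern, using a made-up type (the real DepsolveJob fields differ):

package worker

import "encoding/json"

// exampleJob illustrates the idea: the wire struct is a superset of the old
// (flat) and new (transaction-based) formats, so readers expecting either
// format can unmarshal it.
type exampleJob struct {
    PackageSpecs []string
}

type exampleJobWire struct {
    // old format: flat package list
    PackageSpecs []string `json:"package-specs,omitempty"`
    // new format: packages grouped into transactions
    Transactions [][]string `json:"transactions,omitempty"`
}

func (j exampleJob) MarshalJSON() ([]byte, error) {
    return json.Marshal(exampleJobWire{
        PackageSpecs: j.PackageSpecs,
        Transactions: [][]string{j.PackageSpecs},
    })
}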
Move package set chain collation to the distro package and attach
repositories to the package sets when returning them from their source,
i.e., the ImageType.PackageSets() method.
This also removes the concept of "base repositories". There are no
longer repositories that are added implicitly to all package sets but
instead each package set needs to specify *all* the repositories it will
be depsolved against.
This paves the way for the requirement we have for building RHEL 7
images with a RHEL 8 build root. The build root package set has to be
depsolved against RHEL 8 repositories without any "base repos" included.
This is now possible since package sets and repositories are explicitly
associated from the start and there is no implicit global repository
set.
The change requires adding a list of PackageSet names to the core
rpmmd.RepoConfig. In the cloud API, repositories that are limited to
specific package sets already contain the correct package set names and
these are now copied to the internal RepoConfig when converting types in
genRepoConfig().
The user-specified repositories are only associated with the payload
package sets like before.
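Schematically, the repository side of the association looks like this (a simplified sketch, not the full rpmmd.RepoConfig):

package rpmmd

// RepoConfig sketch: a repository carries the names of the package sets it
// should be used for, making the association between repositories and
// package sets explicit.
type RepoConfig struct {
    Name        string
    BaseURL     string
    PackageSets []string // names of the package sets this repository should be depsolved with
}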
Run unit tests in GitHub workflows in a Fedora container to enable the
dnf-json tests. Run those tests on their own with the `-force-dnf` flag
to make sure they pass and are not skipped.
Install Go using dnf instead of the GH action. The action seems to
cause issues with the $PATH.
Use the registry.fedoraproject.org container for both unit tests and
pylint on dnf-json.
Requires some reordering of the steps in each workflow and the addition
of `git-core` as a dependency.
Using Fedora 35 instead of latest because of changes in the Go build
tool: the new -buildvcs flag causes issues on GitHub Actions.
On systems where `dnf` and the Python module aren't available, skip the
unit tests that call into the `dnf-json` script.
A test flag, `-force-dnf`, is added to avoid this check and run the tests
unconditionally. This is useful for cases where the sniff check might
fail for the wrong reasons or, more importantly, for cases where we want
to be sure the tests are run and consider a missing `dnf` module to be an
error state (e.g., in CI).
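A sketch of the guard (the helper name is an assumption; only the binary is checked here, while the real check also needs the Python dnf module):

package dnfjson

import (
    "flag"
    "os/exec"
    "testing"
)

// forceDNF makes a missing dnf a failure instead of a skip (useful in CI).
var forceDNF = flag.Bool("force-dnf", false, "force dnf tests, failing if dnf isn't available")

// dnfJSONOrSkip skips the test when dnf can't be found, unless -force-dnf
// was given.
func dnfJSONOrSkip(t *testing.T) {
    if *forceDNF {
        return
    }
    if _, err := exec.LookPath("dnf"); err != nil {
        t.Skip("dnf not available; use -force-dnf to run anyway")
    }
}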
Attach the repository configurations that are specific to a package set
directly on the PackageSet object. This simplifies the Depsolve()
signature and avoids requiring a `nil` when no additional repositories
are required. More importantly, it makes associating repositories to
package sets explicit, no longer relying on matching array indices or
map keys.
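Schematically (field names are close to, but not guaranteed to match, the real rpmmd.PackageSet):

package rpmmd

// PackageSet sketch: each package set carries the repositories it should be
// depsolved against (RepoConfig is the existing rpmmd type), so Depsolve()
// no longer needs a separate repositories argument.
type PackageSet struct {
    Include      []string
    Exclude      []string
    Repositories []RepoConfig // repositories specific to this package set
}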
Defined a Hash() method on rpmmd.RepoConfig that calculates a SHA-256 ID
for a repository based on its configuration. Identical configurations
should produce the same ID. The Name and ImageTypeTags of a repository
aren't taken into account, since these attributes don't affect a
repository's functional configuration.
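Schematically (the field selection below is illustrative, not exhaustive, and this is a sketch rather than the actual implementation):

package rpmmd

import (
    "crypto/sha256"
    "fmt"
)

// Hash sketch: only fields that affect the repository's functional
// configuration go into the hash; Name and ImageTypeTags are left out.
func (r RepoConfig) Hash() string {
    data := fmt.Sprintf("%s|%s|%s|%s|%v",
        r.BaseURL, r.Metalink, r.MirrorList, r.GPGKey, r.CheckGPG)
    return fmt.Sprintf("%x", sha256.Sum256([]byte(data)))
}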
This ID lets us change the way we handle repository configurations in a
few places:
- Preparing the depsolve job arguments is simpler since we have
predictable IDs for the repository configurations. We no longer need to
rely on the index of a RepoConfig in a list to identify or access it;
that reliance previously prevented us from building a single list of all
repository configurations, since they had to be placed in the list in a
certain order.
- Associating packages from the depsolve result with the repository
configuration (in depsToRPMMD) no longer relies on an ID string
converted from and back to an integer index. Repositories define
their own IDs.
- Tests are a bit messier now but the changes simplify the main code, so
it's an acceptable trade-off.
- Fixtures need to change based on the repository configuration for
the test.
- We need to calculate the ID for the repository configuration for
the temporary file server URL.
Remove the single Depsolve function from the dnfjson package and the
depsolve command from the dnf-json tool. The new ChainDepsolve
functions and chain-depsolve command can handle single depsolves in the
same way, so there's no need to keep (and have to maintain) two versions
of very similar code.
The ChainDepsolve function (in Go) and chain-depsolve command (in
Python) have been renamed to plain Depsolve and depsolve respectively,
since they are now general purpose depsolve functions.
Search for (and set) the path for dnf-json by checking a few known
locations:
- ./dnf-json: for situations when the tool is run from the source tree.
This is checked first to prioritise local changes.
- /usr/libexec/osbuild-composer/dnf-json: the default install location
of the script when osbuild-composer is installed.
- /usr/lib/osbuild-composer/dnf-json: the default install location of
the script for distributions which don't use /usr/libexec.
The function panics with an informative error message when it fails to
find dnf-json.
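A sketch of the lookup (the function name is an assumption):

package dnfjson

import (
    "fmt"
    "os"
)

// findDnfJSON returns the first known location that exists and panics with
// an informative message when dnf-json can't be found anywhere.
func findDnfJSON() string {
    locations := []string{
        "./dnf-json",
        "/usr/libexec/osbuild-composer/dnf-json",
        "/usr/lib/osbuild-composer/dnf-json",
    }
    for _, loc := range locations {
        if _, err := os.Stat(loc); err == nil {
            return loc
        }
    }
    panic(fmt.Sprintf("could not find dnf-json in any of the known locations: %v", locations))
}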
The script would test whether the test case generation script would run
normally when the osbuild-dnf-json.service was stopped.
This is no longer necessary.
The rpmrepo mock contains code to be used for testing depsolving. It
creates a file server that serves the metadata in test/data/testrepo and
can be used as a repository for depsolve tests.
The dnfjson tests perform a single depsolve with an expected response.
The chain depsolve tests perform multiple depsolves that should produce
the same expected response:
- Single transaction using the ChainDepsolve() function
- Two transactions for the same packages split in two with no extra
repositories
- Two transactions for the same packages split in two with the main
repository redefined
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver is used
to create an initialised (configured) solver with the platform variables
(module platform ID, release ver, and arch) before running a Depsolve()
or FetchMetadata() using the NewWithConfig() method.
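The two-step pattern, with minimal stand-in types (these are not the real dnfjson definitions, only an illustration of the flow):

package example

// BaseSolver and Solver here are simplified stand-ins.
type BaseSolver struct {
    cacheDir string // shared cache location and system repo credentials live here
}

type Solver struct {
    BaseSolver
    modulePlatformID string
    releaseVer       string
    arch             string
}

// NewWithConfig turns the unconfigured base into a solver that is ready to
// run a Depsolve() or FetchMetadata() for one platform.
func (b *BaseSolver) NewWithConfig(modulePlatformID, releaseVer, arch string) *Solver {
    return &Solver{
        BaseSolver:       *b,
        modulePlatformID: modulePlatformID,
        releaseVer:       releaseVer,
        arch:             arch,
    }
}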
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Since now we're using the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
The cases are directly copied (or lightly adapted) from
rpmmd_mock/fixtures.
The purpose of the mocks/dnfjson package is to create files with data
for testing the dnfjson package without the need to call the dnf-json
script. Each public function creates a file with test responses in the
same format as the dnf-json script's responses (either valid results or
errors). The dnfjson.Solver can be configured to call the new
./cmd/mock-dnf-json program with the test data file as an argument and a
valid dnf-json request for input. The mock-dnf-json checks the input
request for unknown fields before responding with the contents of the
file.
Each test case file contains two responses, one for each command
supported by dnf-json: "depsolve" and "dump". mock-dnf-json responds
with the appropriate data based on the command in the request. This is
necessary for tests that require both commands in the same call, e.g.,
tests of the weldr API's moduleInfoHandler() which fetches a package
list and then needs to depsolve a subset of those packages.
There are also cases when we want one of the two responses to be an
error. The mock-dnf-json program will return with an error code if it
can successfully unmarshal the intended response into the dnfjson.Error
type.
- Removed server class and handlers
The dnf-json Python script will no longer run as a service. In the
future, we will create a service in the Go package that will handle
receiving requests and calling the script accordingly.
- Removed CacheState class
- Added standalone functions for setting up cache and running the
depsolve
- Validate the input before reading
- Print all messages (status and error) to stderr and print only the
machine-readable results to stdout (including structured error)
The status messages on stderr are useful for troubleshooting. When
called from the service they will appear in the log/journal.
- Catch RepoError exceptions
This can occur when dnf fails to load the repository configuration.
- Support multiple depsolve jobs per request
The structure is changed to support making multiple depsolve
requests but reuse the dnf.Base object to make chained (incremental)
dependency resolution requests.
Before:
{
  "command": "depsolve",
  "arguments": {
    "package-specs": [...],
    "exclude-specs": [...],
    "repos": [{...}],
    "cachedir": "...",
    "module_platform_id": "...",
    "arch": "..."
  }
}
After:
{
  "command": "depsolve",
  "cachedir": "...",
  "module_platform_id": "...",
  "arch": "...",
  "arguments": {
    "repos": [{...}],
    "transactions": [
      {
        "package-specs": [...],
        "exclude-specs": [...],
        "repo-ids": [...]
      }
    ]
  }
}
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>