All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver's
NewWithConfig() method is used to create an initialised (configured)
Solver with the platform variables (module platform ID, release
version, and arch) before running a Depsolve() or FetchMetadata().
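A minimal sketch of the new flow (the constructor, method arguments,
and variable names below are illustrative, not exact signatures):

    // Unconfigured BaseSolver attached to the API/server configuration;
    // it carries the cache directory and the system repo credentials.
    baseSolver := dnfjson.NewBaseSolver(cacheDir)

    // Configure a Solver with the platform variables before use.
    solver := baseSolver.NewWithConfig(modulePlatformID, releaseVer, arch)

    // Run the depsolve (or FetchMetadata()) on the configured Solver.
    packages, err := solver.Depsolve(packageSets)
    if err != nil {
        return err
    }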
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
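As a rough sketch, the handler now does something like this directly
(the variable and field names here are hypothetical):

    // Previously: rpmmd.FillDependencies(), which only wrapped a single
    // Depsolve() call, filled the Dependencies field of the module info.
    deps, err := solver.Depsolve(packageSets)
    if err != nil {
        // return a weldr API error response
    }
    info.Dependencies = deps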
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Now that we're using the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
The cases are directly copied (or lightly adapted) from
rpmmd_mock/fixtures.
The purpose of the mocks/dnfjson package is to create files with data
for testing the dnfjson package without the need to call the dnf-json
script. Each public function creates a file with test responses in the
same format as the dnf-json script's responses (either valid results or
errors). The dnfjson.Solver can be configured to call the new
./cmd/mock-dnf-json program with the test data file as an argument and a
valid dnf-json request for input. The mock-dnf-json checks the input
request for unknown fields before responding with the contents of the
file.
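For example, a test can point the Solver at the mock like this (the
dnfjson_mock.Base helper name and the path-setting method are
assumptions; only mock-dnf-json and the response-file argument are
part of this change):

    // Write a canned, valid dnf-json response set to a temp file.
    respFile := dnfjson_mock.Base(t.TempDir())

    // Call the compiled mock instead of the real dnf-json script,
    // passing the response file as its argument.
    bs := dnfjson.NewBaseSolver(t.TempDir())
    bs.SetDNFJSONPath(mockDNFJSONPath, respFile)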
Each test case file contains two responses, one for each command
supported by dnf-json: "depsolve" and "dump". mock-dnf-json responds
with the appropriate data based on the command in the request. This is
necessary for tests that require both commands in the same call, e.g.,
tests of the weldr API's moduleInfoHandler() which fetches a package
list and then needs to depsolve a subset of those packages.
There are also cases when we want one of the two responses to be an
error. The mock-dnf-json program will return with an error code if it
can successfully unmarshal the intended response into the dnfjson.Error
type.
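In outline, the mock does something like the following (a simplified
sketch, not the actual implementation; the import path, the request
struct, and the dnfjson.Error fields are assumed):

    package main

    import (
        "encoding/json"
        "os"

        "github.com/osbuild/osbuild-composer/internal/dnfjson"
    )

    func main() {
        // Decode the request from stdin, rejecting unknown fields.
        dec := json.NewDecoder(os.Stdin)
        dec.DisallowUnknownFields()
        var req struct {
            Command          string          `json:"command"`
            CacheDir         string          `json:"cachedir"`
            ModulePlatformID string          `json:"module_platform_id"`
            Arch             string          `json:"arch"`
            Arguments        json.RawMessage `json:"arguments"`
        }
        if err := dec.Decode(&req); err != nil {
            os.Exit(1)
        }

        // The test data file holds one response per supported command.
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            os.Exit(1)
        }
        var responses map[string]json.RawMessage // keys: "depsolve", "dump"
        if err := json.Unmarshal(data, &responses); err != nil {
            os.Exit(1)
        }
        resp := responses[req.Command]

        // If the selected response parses as a dnfjson.Error, reply with
        // it but exit with a non-zero code.
        var dnfErr dnfjson.Error
        code := 0
        if json.Unmarshal(resp, &dnfErr) == nil && dnfErr.Kind != "" {
            code = 1
        }
        os.Stdout.Write(resp)
        os.Exit(code)
    }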
- Removed server class and handlers
The dnf-json Python script will no longer run as a service. In the
future, we will create a service in the Go package that will handle
receiving requests and calling the script accordingly.
- Removed CacheState class
- Added standalone functions for setting up cache and running the
depsolve
- Validate the input before reading
- Print all messages (status and error) to stderr and print only the
machine-readable results (including structured errors) to stdout
The status messages on stderr are useful for troubleshooting. When
called from the service they will appear in the log/journal.
- Catch RepoError exceptions
This can occur when dnf fails to load the repository configuration.
- Support multiple depsolve jobs per request
The structure is changed to support making multiple depsolve
requests while reusing the dnf.Base object, enabling chained
(incremental) dependency resolution requests.
Before:

    {
      "command": "depsolve",
      "arguments": {
        "package-specs": [...],
        "exclude-specs": [...],
        "repos": [{...}],
        "cachedir": "...",
        "module_platform_id": "...",
        "arch": "..."
      }
    }

After:

    {
      "command": "depsolve",
      "cachedir": "...",
      "module_platform_id": "...",
      "arch": "...",
      "arguments": {
        "repos": [{...}],
        "transactions": [
          {
            "package-specs": [...],
            "exclude-specs": [...],
            "repo-ids": [...]
          }
        ]
      }
    }
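For reference, the new request shape as hypothetical Go structs (the
names are illustrative, not the actual dnfjson types; the repository
config fields are elided):

    type repoConfig map[string]interface{} // repository settings, fields elided

    type depsolveRequest struct {
        Command          string    `json:"command"`
        CacheDir         string    `json:"cachedir"`
        ModulePlatformID string    `json:"module_platform_id"`
        Arch             string    `json:"arch"`
        Arguments        arguments `json:"arguments"`
    }

    type arguments struct {
        Repos        []repoConfig  `json:"repos"`
        Transactions []transaction `json:"transactions"`
    }

    // Each transaction is resolved on top of the previous ones, reusing
    // the same dnf.Base object for chained (incremental) depsolves.
    type transaction struct {
        PackageSpecs []string `json:"package-specs"`
        ExcludeSpecs []string `json:"exclude-specs"`
        RepoIDs      []string `json:"repo-ids"`
    }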
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
This package is meant to serve as the interface between osbuild-composer
and the (new, upcoming) dnf-json. It defines structures and functions
for calling the dnf-json commands ("depsolve" and "dump"). The package
uses the rpmmd types to interface with osbuild-composer and converts
them to the necessary representations (for dnf-json) internally. New
types aren't made public unless necessary.
A lot of the functions and types are copied or adapted from the rpmmd
package and those will eventually be removed. The rpmmd package will
remain to manage RPM package representations and conversion functions.
The FetchMetadata() function sorts the packages it will return, as does
the original implementation in rpmmd, but now the sort key is the NVR.
This is to make package order stable when multiple packages have the
same name (multiple versions of the same package). This way, the
'builds' arrays of the resulting package infos will also have a stable
order.
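Roughly (a sketch; pkgs stands in for the fetched package list and the
field names may differ):

    // Sort by Name-Version-Release so packages that share a name have a
    // stable, ascending order in the resulting 'builds' arrays.
    sort.Slice(pkgs, func(i, j int) bool {
        nvrI := fmt.Sprintf("%s-%s-%s", pkgs[i].Name, pkgs[i].Version, pkgs[i].Release)
        nvrJ := fmt.Sprintf("%s-%s-%s", pkgs[j].Name, pkgs[j].Version, pkgs[j].Release)
        return nvrI < nvrJ
    })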
The request and result structures differ from the current implementation
of dnf-json. The change is meant to simplify handling multiple
depsolves with the same dnf.Base object and the new dnf-json tool will
be made to handle this request structure.
The dnf-json command is configurable and supports command line arguments
if necessary.
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
Keeping the expected responses in a separate file and formatted makes
them easier to read, write, and update.
This commit doesn't move all the responses. It focuses on the ones that
are the hardest to work with (the ones that are thousands of characters
long).
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
This value is set in the worker config. In future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
Those images are forced to be 64GiB in size but mostly consist of zeros.
This makes them hard to handle, e.g. uploading to brew takes forever.
vhdPipelines is converted to a function returning the pipelinesFunc.
It has a single argument `compress` that adds the compression pipeline
bits when `true` and returns exactly the old pipeline when `false`.
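A rough sketch of the shape of the change (the pipeline type and the
pipeline names below are placeholders, not the real manifest code):

    type pipeline struct{ name string }
    type pipelinesFunc func() []pipeline

    func vhdPipelines(compress bool) pipelinesFunc {
        return func() []pipeline {
            // Exactly the old pipelines...
            pipelines := []pipeline{{"build"}, {"os"}, {"vpc"}}
            if compress {
                // ...plus the compression bits when requested.
                pipelines = append(pipelines, pipeline{"compress"})
            }
            return pipelines
        }
    }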
Linter output:
Specify anti-affinity in your pod specification to ensure that the
orchestrator attempts to schedule replicas on different nodes. Using
podAntiAffinity, specify a labelSelector that matches pods for the
deployment, and set the topologyKey to kubernetes.io/hostname.
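For illustration, the requested anti-affinity expressed with the
Kubernetes Go API types (k8s.io/api/core/v1 as corev1 and
k8s.io/apimachinery/pkg/apis/meta/v1 as metav1); the `app` label value
is an assumption:

    affinity := &corev1.Affinity{
        PodAntiAffinity: &corev1.PodAntiAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                // Match the other pods of the same deployment...
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "osbuild-composer-worker"},
                },
                // ...and spread replicas across nodes.
                TopologyKey: "kubernetes.io/hostname",
            }},
        },
    }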
API
---
Allow the user to pass the CA public certificate or skip the certificate verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets the endpoint and optionally overrides the CA Bundle or skips the SSL certificate verification
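A sketch of what newAwsFromCredsWithEndpoint can look like under the
hood (aws-sdk-go; the variable names, the path-style setting, and the
error handling are simplified assumptions):

    // Trust an extra CA bundle, or skip certificate verification.
    tlsConfig := &tls.Config{InsecureSkipVerify: skipSSLVerification}
    if caBundle != "" {
        pem, err := os.ReadFile(caBundle)
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pem) {
            return nil, fmt.Errorf("failed to parse CA bundle %q", caBundle)
        }
        tlsConfig.RootCAs = pool
    }

    // Point the SDK at the generic S3 endpoint with a custom transport.
    sess, err := session.NewSession(&aws.Config{
        Region:           aws.String(region),
        Endpoint:         aws.String(endpoint),
        S3ForcePathStyle: aws.Bool(true),
        Credentials:      credentials.NewStaticCredentials(accessKeyID, secretAccessKey, ""),
        HTTPClient: &http.Client{
            Transport: &http.Transport{TLSClientConfig: tlsConfig},
        },
    })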
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into HTTP, HTTPS with a certificate, and HTTPS with the certificate check skipped
Create a new base test for S3 over HTTPS for secure and insecure
Move the generic S3 test to tools to reuse for secure and insecure connections
All S3 tests now use the aws cli tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
When a test script fails in CI, it's often difficult to pinpoint the
exact line in the log where the script failed and the cleanup() function
(trapped on EXIT) begins.
Adding a prominent line (with greenprint where available) at the start
of the cleanup function will make reading logs of failed jobs a lot
easier.
Something odd is happening with the package check and it keeps failing
mysteriously even though the package is clearly in the list.
Changing the verification method to extract `passwd` and `packages` from
the image info file into separate files and grepping those seems to
work.
Reasons for this change:
- Mixed versions of composer and worker aren't a realistic use-case for
the weldr API (on prem) but we do run mixed versions in hosted IB, so
this test is closer to real world scenarios.
- The cloud API runs depsolve jobs in the worker, whereas the weldr API
runs them in composer. By testing the cloud API we also test the
backwards compatibility of the depsolve job.
The change requires osbuild-worker v51 or newer to be able to handle
depsolve and manifest jobs on the worker as well as depsolve chains.
Add support for building images for the Azure marketplace: add a
new image type "azure-rhui" that can be used to build images
tailored to the Azure marketplace.
This code is based on the corresponding image type in 8.6.
NB: does not have systemd-resolved (following RHEL 9 defaults)
We can really only have one. The one that was used for the generation
of the manifests is kept and the other one is removed (although it has
newer repositories).
It seems the rhel-91 qcow2 customize images are out of sync because
commit 2beb707 removed the core group from `format-request-map.json`
and some of these manifests were generated between that commit and the
one that added it back, 1ff36bce9.
In these graphs p99 isn't very important. If 1% of jobs are slow that's
fine. The p50 and p95 slices are the important ones, so reorder and
recolor the duration graphs to reflect this.
The interval dictates the granularity of the graphs. As the interval
decreases, spikes and dips become more pronounced. 28 days as an
interval doesn't actually show much, so reduce it to 6h by default,
which is a happy medium.
The scheduled cloud cleaner is skipping the default storage account for
a resource group, as these images should get removed. There can be a
situation where these images are not removed and are forgotten there.
Remove this skip condition so scc also checks this storage account.
The `api.sh` test case uses a random GCE zone from a random GCE region
whose name starts with the `GCP_REGION` CI environment variable. Since
the used region name is not known to the `cloud-cleaner`, it has to
iterate over all potential GCE regions and their zones. We can not
simply filter a list of instances by the VM instance name, because any
`instances` API call requires a zone name to be provided.
Add a new internal `cloud/gcp` package method to list existing GCE
regions based on a provided filter.
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with 'us' will be used.
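A sketch of the selection (using google.golang.org/api/compute/v1; the
filter syntax, the quota check, and the helper shape are assumptions,
not the exact new `cloud/gcp` method):

    // List GCE regions whose name matches the GCP_REGION prefix.
    svc, err := compute.NewService(ctx)
    if err != nil {
        return "", err
    }
    regions, err := svc.Regions.List(project).
        Filter(fmt.Sprintf("name = %q", prefix+"*")).Do()
    if err != nil {
        return "", err
    }

    // Keep only regions with spare IN_USE_ADDRESSES quota and pick one
    // at random for the test VM.
    var candidates []string
    for _, region := range regions.Items {
        for _, q := range region.Quotas {
            if q.Metric == "IN_USE_ADDRESSES" && q.Usage < q.Limit {
                candidates = append(candidates, region.Name)
                break
            }
        }
    }
    return candidates[rand.Intn(len(candidates))], nil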