Attach the repository configurations that are specific to a package set
directly to the PackageSet object. This simplifies the Depsolve()
signature and avoids having to pass `nil` when no additional
repositories are needed. More importantly, it makes associating
repositories with package sets explicit, no longer relying on matching
array indices or map keys.
Defined a Hash() method on rpmmd.RepoConfig that calculates a SHA-256 ID
for a repository based on its configuration. Identical configurations
should produce the same ID. The Name and ImageTypeTags of a repository
aren't taken into account, since these attributes don't affect a
repository's functional configuration.
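As an illustrative sketch of both changes (the field names and the exact
set of hashed attributes below are assumptions, not the real rpmmd
definitions):

    package rpmmd

    import (
        "crypto/sha256"
        "fmt"
        "strconv"
    )

    // RepoConfig describes a repository (illustrative subset of fields).
    type RepoConfig struct {
        Name          string   // not hashed: doesn't affect functional config
        ImageTypeTags []string // not hashed: doesn't affect functional config
        BaseURL       string
        GPGKey        string
        CheckGPG      bool
    }

    // Hash returns a SHA-256 based ID computed only from the fields that
    // affect the repository's functional configuration.
    func (r RepoConfig) Hash() string {
        data := r.BaseURL + r.GPGKey + strconv.FormatBool(r.CheckGPG)
        return fmt.Sprintf("%x", sha256.Sum256([]byte(data)))
    }

    // PackageSet carries the repositories that are specific to it, so
    // Depsolve() no longer needs a separate (possibly nil) repositories
    // argument.
    type PackageSet struct {
        Include      []string
        Exclude      []string
        Repositories []RepoConfig
    }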
This ID lets us change the way we handle repository configurations in a
few places:
- Preparing the depsolve job arguments is simpler since we have
predictable IDs for the repository configurations. We no longer need to
rely on the index of a RepoConfig in a list to identify or access it;
relying on indices previously prevented us from building a single list
of all repository configurations, because they had to be placed in the
list in a specific order.
- Associating packages from the depsolve result with the repository
configuration (in depsToRPMMD) no longer relies on an ID string
converted from and back to an integer index. Repositories define
their own IDs.
- Tests are a bit messier now but the changes simplify the main code, so
  it's an acceptable trade-off:
  - Fixtures need to change based on the repository configuration for
    the test.
  - We need to calculate the ID for the repository configuration for
    the temporary file server URL.
Remove the single Depsolve function from the dnfjson package and the
depsolve command from the dnf-json tool. The new ChainDepsolve
functions and chain-depsolve command can handle single depsolves in the
same way so there's no need to keep (and have to maintain) two versions
of very similar code.
The ChainDepsolve function (in Go) and chain-depsolve command (in
Python) have been renamed to plain Depsolve and depsolve respectively,
since they are now general purpose depsolve functions.
The rpmrepo mock contains code to be used for testing depsolving. It
creates a file server that serves the metadata in test/data/testrepo and
can be used as a repository for depsolve tests.
The dnfjson tests perform a single depsolve with an expected response.
The chain depsolve tests perform multiple depsolves that should produce
the same expected response:
- Single transaction using the ChainDepsolve() function
- Two transactions for the same packages split in two with no extra
repositories
- Two transactions for the same packages split in two with the main
repository redefined
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver's
NewWithConfig() method is used to create an initialised (configured)
solver with the platform variables (module platform ID, release version,
and arch) before running a Depsolve() or FetchMetadata().
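Roughly, the flow looks like this (a sketch only: the argument order of
NewWithConfig() and the return types are assumptions based on the
description above; the import paths are the repo-internal packages):

    package example

    import (
        "log"

        "github.com/osbuild/osbuild-composer/internal/dnfjson"
        "github.com/osbuild/osbuild-composer/internal/rpmmd"
    )

    // depsolvePackages sketches the intended flow: derive a configured
    // solver from the shared BaseSolver, then depsolve the package sets.
    func depsolvePackages(base *dnfjson.BaseSolver, sets []rpmmd.PackageSet) error {
        // NewWithConfig takes the platform variables; the argument order
        // shown here is an assumption.
        solver := base.NewWithConfig("platform:f35", "35", "x86_64")

        pkgs, err := solver.Depsolve(sets)
        if err != nil {
            return err
        }
        log.Printf("depsolved %d packages", len(pkgs))
        return nil
    }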
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function.
FillDependencies() was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Now that we're using the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
The cases are directly copied (or lightly adapted) from
rpmmd_mock/fixtures.
The purpose of the mocks/dnfjson package is to create files with data
for testing the dnfjson package without the need to call the dnf-json
script. Each public function creates a file with test responses in the
same format as the dnf-json script's responses (either valid results or
errors). The dnfjson.Solver can be configured to call the new
./cmd/mock-dnf-json program with the test data file as an argument and a
valid dnf-json request for input. The mock-dnf-json checks the input
request for unknown fields before responding with the contents of the
file.
Each test case file contains two responses, one for each command
supported by dnf-json: "depsolve" and "dump". mock-dnf-json responds
with the appropriate data based on the command in the request. This is
necessary for tests that require both commands in the same call, e.g.,
tests of the weldr API's modulesInfoHandler(), which fetches a package
list and then needs to depsolve a subset of those packages.
There are also cases when we want one of the two responses to be an
error. The mock-dnf-json program will return with an error code if it
can successfully unmarshal the intended response into the dnfjson.Error
type.
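A minimal sketch of the mock's dispatch logic described above (the
request and response field names are illustrative; the real mock also
rejects requests with unknown fields and exits with an error code for
dnfjson.Error-shaped responses):

    package main

    import (
        "encoding/json"
        "os"
    )

    // request is an illustrative subset of a dnf-json request.
    type request struct {
        Command string `json:"command"` // "depsolve" or "dump"
    }

    // canned holds one response per supported command, as stored in the
    // test data file generated by the mocks/dnfjson helpers.
    type canned struct {
        Depsolve json.RawMessage `json:"depsolve"`
        Dump     json.RawMessage `json:"dump"`
    }

    func main() {
        data, err := os.ReadFile(os.Args[1]) // path to the test data file
        if err != nil {
            os.Exit(1)
        }
        var responses canned
        if err := json.Unmarshal(data, &responses); err != nil {
            os.Exit(1)
        }

        var req request
        if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
            os.Exit(1)
        }

        // Respond with the canned data matching the requested command.
        switch req.Command {
        case "depsolve":
            os.Stdout.Write(responses.Depsolve)
        case "dump":
            os.Stdout.Write(responses.Dump)
        default:
            os.Exit(1)
        }
    }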
This package is meant to serve as the interface between osbuild-composer
and the (new, upcoming) dnf-json. It defines structures and functions
for calling the dnf-json commands ("depsolve" and "dump"). The package
uses the rpmmd types to interface with osbuild-composer and converts
them to the necessary representations (for dnf-json) internally. New
types aren't made public unless necessary.
A lot of the functions and types are copied or adapted from the rpmmd
package and those will eventually be removed. The rpmmd package will
remain to manage RPM package representations and conversion functions.
The FetchMetadata() function sorts the packages it will return, as does
the original implementation in rpmmd, but now the sort key is the NVR.
This is to make package order stable when multiple packages have the
same name (multiple versions of the same package). This way, the
'builds' arrays of the resulting package infos will also have a stable
order.
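For illustration, such a sort might look like this (the package type and
the plain lexical NVR comparison are stand-ins; real RPM version
comparison is more involved):

    package main

    import (
        "fmt"
        "sort"
    )

    // pkg is a stand-in for the package type returned by dnf-json calls.
    type pkg struct {
        Name    string
        Version string
        Release string
    }

    func main() {
        pkgs := []pkg{
            {"kernel", "5.15.2", "1.fc35"},
            {"kernel", "5.14.10", "1.fc35"},
            {"bash", "5.1.8", "2.fc35"},
        }
        // Sort by NVR so that packages sharing a name still end up in a
        // stable, predictable order.
        sort.Slice(pkgs, func(i, j int) bool {
            nvrI := pkgs[i].Name + "-" + pkgs[i].Version + "-" + pkgs[i].Release
            nvrJ := pkgs[j].Name + "-" + pkgs[j].Version + "-" + pkgs[j].Release
            return nvrI < nvrJ
        })
        fmt.Println(pkgs)
    }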
The request and result structures differ from the current implementation
of dnf-json. The change is meant to simplify handling multiple
depsolves with the same dnf.Base object and the new dnf-json tool will
be made to handle this request structure.
The dnf-json command is configurable and supports command line arguments
if necessary.
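To give an idea of the shape (all field and JSON key names below are
assumptions for illustration, not the final wire format):

    package example

    // depsolveRequest sketches a request that carries several transactions
    // so that one dnf.Base object can serve multiple chained depsolves.
    type depsolveRequest struct {
        Command          string    `json:"command"` // "depsolve" or "dump"
        ModulePlatformID string    `json:"module_platform_id"`
        Arch             string    `json:"arch"`
        CacheDir         string    `json:"cachedir"`
        Arguments        arguments `json:"arguments"`
    }

    type arguments struct {
        Repos        []repo        `json:"repos"`
        Transactions []transaction `json:"transactions"`
    }

    type repo struct {
        ID      string `json:"id"` // the SHA-256 based RepoConfig ID
        BaseURL string `json:"baseurl"`
    }

    // transaction is one depsolve step; later transactions build on the
    // results of earlier ones, which is what enables chained depsolves.
    type transaction struct {
        PackageSpecs []string `json:"package-specs"`
        ExcludeSpecs []string `json:"exclude-specs"`
        RepoIDs      []string `json:"repo-ids"`
    }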
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
Keeping the expected responses in a separate file and formatted makes
them easier to read, write, and update.
This commit doesn't move all the responses. It focuses on the ones that
are the hardest to work with (the ones that are thousands of characters
long).
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
This value is set in the worker config. In the future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
Those images are forced to be 64GiB in size but mostly consist of zeros.
This makes them hard to handle; e.g., uploading to brew takes forever.
vhdPipelines is converted to a function returning the pipelinesFunc. It
takes a single argument, `compress`, which adds the compression pipeline
bits when `true` and returns exactly the old pipeline when `false`.
API
---
Allow the user to pass the CA public certificate or skip the verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets
the endpoint and optionally overrides the CA Bundle or skips the SSL
certificate verification
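A sketch of what the Generic S3 session setup could look like with
aws-sdk-go (the helper below is illustrative, not the actual awscloud
code; a custom CA bundle could alternatively be supplied via
session.Options.CustomCABundle):

    package s3example

    import (
        "crypto/tls"
        "net/http"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    // newGenericS3Session targets a custom S3 endpoint and optionally skips
    // TLS certificate verification.
    func newGenericS3Session(key, secret, endpoint, region string, skipSSLVerification bool) (*session.Session, error) {
        cfg := aws.NewConfig().
            WithCredentials(credentials.NewStaticCredentials(key, secret, "")).
            WithEndpoint(endpoint).
            WithRegion(region).
            WithS3ForcePathStyle(true) // most generic S3 servers need path-style addressing

        if skipSSLVerification {
            cfg = cfg.WithHTTPClient(&http.Client{
                Transport: &http.Transport{
                    TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
                },
            })
        }
        return session.NewSession(cfg)
    }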
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into http, https with certificate and https skip certificate check
Create a new base test for S3 over HTTPS for secure and insecure
Move the generic S3 test to tools to reuse for secure and insecure connections
All S3 tests now use the aws cli tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
Add support for building images for the Azure marketplace: add a
new image type "azure-rhui" that can be used to build images
tailored to the Azure marketplace.
This code is based on the corresponding image type in 8.6.
NB: does not have systemd-resolved (following RHEL 9 defaults)
The `api.sh` test case uses a random GCE zone from a random GCE region
whose name starts with the value of the `GCP_REGION` CI environment
variable. Since the used region name is not known to the `cloud-cleaner`,
it has to iterate over all potential GCE regions and their zones. We
cannot simply filter a list of instances by the VM instance name, because
any `instances` API call requires a zone name to be provided.
Add a new internal `cloud/gcp` package method to list existing GCE
regions based on a provided filter.
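Listing regions with a filter looks roughly like the following (the
method name in the internal cloud/gcp package is paraphrased; the client
calls are from google.golang.org/api/compute/v1):

    package gcpexample

    import (
        "context"

        "google.golang.org/api/compute/v1"
    )

    // listRegions returns the names of all GCE regions in the project that
    // match the given filter expression.
    func listRegions(ctx context.Context, project, filter string) ([]string, error) {
        service, err := compute.NewService(ctx)
        if err != nil {
            return nil, err
        }
        var names []string
        err = service.Regions.List(project).Filter(filter).Pages(ctx, func(page *compute.RegionList) error {
            for _, r := range page.Items {
                names = append(names, r.Name)
            }
            return nil
        })
        return names, err
    }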
The `org.osbuild.udev.rules` stage creates custom udev rules files.
This is a full implementation of the stage and includes information
about valid operators and keys.
A small test suite to test the basic functionality and validation is
included.
Validate incoming requests with openapi3. Remove unsupported
uuid format from the openapi spec. Similarly, change url to uri as
uri is a supported format and url is not.
Co-authored-by: Ondřej Budai <obudai@redhat.com>
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
oneOf means that the body is valid against exactly ONE schema. There's an
issue with AWS EC2 upload options though: It requires region and
share_with_accounts fields. Such a request is also a valid AWS S3 upload
(S3 only requires region). This means that AWS EC2 upload options will
always be valid against two schemas, which violates the oneOf rule.
Let's switch to anyOf and explain this in the openAPI spec.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
It was never required, never used. I honestly think that this was a copy-paste
error; I don't see any reason why a user would have an object reference.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We are interested in the time from when a job could be dequeued until it
actually is, but a job with dependencies that are not yet finished cannot
be dequeued.
Change the logic to measure the time since the last dependency was
dequeued rather than when the job was queued.
The purpose of this metric is to have an alert fire in case we have too
few workers processing jobs.
Building a `tar` image for `s390x` on RHEL-84 ends with a panic:
"s390x image must have a partition table, this is a programming error"
A tar image should not need a partition table, so this error does not
make sense.
I think that we can spare the users of cloudapi from writing "rhsm": "false"
into the requests, so I decided to make this property optional and default
to false.
This is nice because it matches the behaviour of Weldr repositories and
sources so we can also use test/data/repositories without any changes after
openapi validation is enabled.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Added code in fedora/pipelines.go to add the subformat field to the
manifests.
Added manifests for f34 and f35 for x86_64 only (the image type is not
available on aarch64).
* Removed the specific function that packaged the fedora cloud package
  group to avoid a collision between the fedora-identity-cloud and
  fedora-identity-basic packages. With the introduction of
  PackageSetChains() it is no longer necessary to filter the packages.