The search response from mocks/dnfjson is a map of responses indexed by the
comma-separated list of packages and globs being requested. Add support
for this.
The search command is more complicated than depsolve and dump. It needs
to return results based on the requested package names and globs.
Add a number of mock responses for the new search command, including
search results, all packages, and error responses that are triggered by
using special package names: nonexistingpkg, badpackage1, baddepsolve.
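A minimal sketch of how the keyed mock responses could be organized (the names, types and values here are illustrative, not the real fixtures):

    package mocks

    import "strings"

    // searchResult is a toy stand-in for the real mock response type.
    type searchResult struct {
        Packages []string // matching package names
        Err      string   // non-empty to simulate an error response
    }

    // Responses are keyed by the comma-separated list of requested
    // package names and globs; the special names trigger error replies.
    var searchResponses = map[string]searchResult{
        "tmux":           {Packages: []string{"tmux"}},
        "tmux,vim*":      {Packages: []string{"tmux", "vim-minimal"}},
        "nonexistingpkg": {},
        "badpackage1":    {Err: "mock package error"},
        "baddepsolve":    {Err: "mock depsolve error"},
    }

    // lookup returns the canned response for a set of request specs.
    func lookup(specs []string) (searchResult, bool) {
        r, ok := searchResponses[strings.Join(specs, ",")]
        return r, ok
    }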
Instead of fetching all available packages from dnf-json and then
searching the results, this uses SearchMetadata when a package name or
glob is passed to the API. It only uses FetchMetadata when fetching
the full list of packages.
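A sketch of the selection logic, with assumed, simplified signatures for the two metadata calls:

    package search

    // Package is a toy stand-in for the real package metadata type.
    type Package struct {
        Name    string
        Version string
    }

    // Solver is an assumed minimal interface over the dnf-json calls.
    type Solver interface {
        FetchMetadata() ([]Package, error)                // full package list
        SearchMetadata(specs []string) ([]Package, error) // names/globs only
    }

    // fetchPackageList uses the targeted search when package names or
    // globs were requested and falls back to the full fetch otherwise.
    func fetchPackageList(s Solver, specs []string) ([]Package, error) {
        if len(specs) > 0 {
            return s.SearchMetadata(specs)
        }
        return s.FetchMetadata()
    }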
This also fixes a bug where the error response to a projects/info
request used the id of 'ModulesError'. It now uses 'ProjectsError'.
Use the DNF query API
(https://dnf.readthedocs.io/en/latest/api_queries.html) to quickly
return results matching a glob pattern. Multiple package glob results
are combined into a single response.
This adds a search dict to the arguments. 'packages' is a list of package
names or globs to search for.
An optional 'latest' boolean will return only the latest NEVRA instead
of all matching builds in the metadata.
e.g.:

    "search": {
        "latest": false,
        "packages": ["tmux", "vim*", "*ssh*"]
    },
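On the Go side, the new arguments could be modeled like this (a sketch; the field and type names are assumptions):

    // searchArgs mirrors the JSON above.
    type searchArgs struct {
        Latest   bool     `json:"latest"`
        Packages []string `json:"packages"`
    }

    // arguments is the request sent to dnf-json, trimmed down to the
    // new field for illustration.
    type arguments struct {
        Search *searchArgs `json:"search,omitempty"`
    }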
Since the oscap remediation stage in osbuild runs the oscap tool in a
`chroot`, it is necessary to install the `openscap-scanner` package
into the image itself rather than into the build root.
The module is not present in the official RHEL-9.1 ISO image and causes
boot issues when used with newer content. HTTP boot is not affected by
this change and works as expected.
This is no longer needed because the nightly CI jobs now use the same
JSON definitions as the regular CI jobs, just with a different baseurl.
See the previous commit.
Having the GPG check enabled for Google repos in `gce*` images makes
DNF try to import the relevant keys when upgrading, downgrading or
installing any packages from the repo. However, because Google still
uses SHA-1 for the GPG keys used to sign their RPMs, importing them
makes any transaction that includes such an RPM fail.
Disabling the GPG check ensures that DNF won't attempt to import the
Google GPG keys.
Related to https://issuetracker.google.com/issues/223626963
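For illustration, the resulting repo definition would look roughly like this (an illustrative `.repo` snippet; the exact definitions in the images differ):

    [google-compute-engine]
    name=Google Compute Engine
    baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el9-x86_64-stable
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0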
The repo is not needed any more, because the Google Cloud SDK is not
installed in the images by default. If anyone wants to install the SDK,
they can add the appropriate repo definition.
The Google SDK ships pre-compiled binaries. It is undesirable to install
it by default in `gce` and `gce-rhui` in its current shape. Not
installing it also does not affect the RHEL integration as the guest
OS in GCP in any way.
Extract the application of the filesystem customizations into a utility
method on `PartitionTable`. So that it can be used for both the first
and the second pass, it takes a `create` argument that controls whether
new partitions are created or returned.
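A sketch of the extracted helper using toy stand-in types (the real names and types differ):

    package disk

    // FilesystemCustomization is a toy stand-in for the blueprint type.
    type FilesystemCustomization struct {
        Mountpoint string
        MinSize    uint64
    }

    // PartitionTable is reduced to a mountpoint->size map for illustration.
    type PartitionTable struct {
        sizes map[string]uint64
    }

    // applyCustomization enlarges existing mountpoints. Missing
    // mountpoints are either created (create=true) or returned for a
    // later pass (create=false).
    func (pt *PartitionTable) applyCustomization(mounts []FilesystemCustomization, create bool) []FilesystemCustomization {
        var missing []FilesystemCustomization
        for _, m := range mounts {
            if cur, ok := pt.sizes[m.Mountpoint]; ok {
                if m.MinSize > cur {
                    pt.sizes[m.Mountpoint] = m.MinSize // enlarge
                }
            } else if create {
                pt.sizes[m.Mountpoint] = m.MinSize // create a new partition
            } else {
                missing = append(missing, m) // collect for the second pass
            }
        }
        return missing
    }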
Since LVM support was added to all distros, our disk-related code is
adaptive, i.e. we set the correct BLS and grub2 prefix if a `boot`
partition is present in the layout after all customizations have
happened, which includes LVMification.
One thing that was not yet fully working was layouts that do not yet
have a `/boot` partition but allow LVMification. In that case, if
`/boot` was the first (or only) customization, `NewPartitionTable`
would LVMify the partition, which in turn would create the `/boot`
partition; but after `newPT.ensureLVM()` the call to
`newPT.createFilesystem` with `/boot` would try to create another
`/boot` mountpoint.
In order to deal with this situation correctly we now use a two-phase
approach: 1) enlarge existing mountpoints and collect new ones;
2) if there are new ones and LVMification is allowed, switch to the
LVM layout, then do a second pass and create new or enlarge existing
partitions, handling `/boot` in the process.
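Continuing the toy types from the sketch above, the two-phase flow could look like:

    // ensureLVM is a stand-in: the real code converts the layout to
    // LVM, which may itself create the /boot partition.
    func (pt *PartitionTable) ensureLVM() {
        if _, ok := pt.sizes["/boot"]; !ok {
            pt.sizes["/boot"] = 512 * 1024 * 1024
        }
    }

    // applyCustomizations drives the two passes described above.
    func (pt *PartitionTable) applyCustomizations(mounts []FilesystemCustomization, lvmify bool) {
        // Pass 1: enlarge existing mountpoints, collect the new ones.
        missing := pt.applyCustomization(mounts, false)
        if len(missing) > 0 && lvmify {
            pt.ensureLVM()
        }
        // Pass 2: create the remaining mountpoints, or enlarge them if
        // ensureLVM already created them (e.g. /boot).
        pt.applyCustomization(missing, true)
    }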
Replace the simple allow list of paths with more sophisticated path
policies. This enables us to, e.g., allow one path but not any of its
sub-paths. This will be useful for `/boot`, where we want to allow its
customization but not any sub-path, because that might actually break
booting.
Build a new path policy struct, based on the new path trie struct.
It is designed to store policies for paths. A Check method can then
be used to look up the policy for a given path based on the defined
policies.
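A sketch of the intended usage (the constructor and method names are assumptions mirroring the description above):

    policies := NewPathPolicies(map[string]PathPolicy{
        "/":     {Deny: true},  // deny everything by default
        "/etc":  {},            // allow /etc and all of its sub-paths
        "/boot": {Exact: true}, // allow /boot itself, but no sub-paths
    })

    // allowed: "/etc/fstab", "/boot"; rejected: "/boot/efi", "/srv"
    err := policies.Check("/boot/efi")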
Add a simple implementation of a path trie structure that can be
used to look up associated data for any given path. The constructor
builds the trie from a dict of paths to associated data. Later
modification is currently not supported. Add tests for its creation
and lookup.
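A self-contained sketch of such a trie (simplified; the real implementation differs):

    package pathtrie

    import "strings"

    // Trie maps slash-separated paths to arbitrary payloads.
    type Trie struct {
        payload  interface{}
        children map[string]*Trie
    }

    // split turns "/boot/efi" into ["boot", "efi"]; "/" becomes nil.
    func split(path string) []string {
        path = strings.Trim(path, "/")
        if path == "" {
            return nil
        }
        return strings.Split(path, "/")
    }

    // NewTrie builds the trie from a map of paths to payloads.
    // The result is meant to be read-only afterwards.
    func NewTrie(entries map[string]interface{}) *Trie {
        root := &Trie{children: map[string]*Trie{}}
        for path, payload := range entries {
            node := root
            for _, elem := range split(path) {
                child, ok := node.children[elem]
                if !ok {
                    child = &Trie{children: map[string]*Trie{}}
                    node.children[elem] = child
                }
                node = child
            }
            node.payload = payload
        }
        return root
    }

    // Lookup returns the payload of the longest matching prefix of
    // path, e.g. the payload stored for "/boot" given "/boot/efi".
    func (t *Trie) Lookup(path string) interface{} {
        node, best := t, t.payload
        for _, elem := range split(path) {
            child, ok := node.children[elem]
            if !ok {
                break
            }
            if child.payload != nil {
                best = child.payload
            }
            node = child
        }
        return best
    }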
Fedora uses 'tmpfs' for /tmp, sized based on the amount of RAM. That
is not enough on medium OpenStack machines. Changing to /var/tmp,
which is backed by a drive, resolves this.
This test used to spawn two VMs at the same time, which requires more
memory than the OpenStack CI medium runner can provide. We want to be
using only medium runners, so this change is necessary to allow that.
There are several reasons a CI job can fail: mostly infrastructure
issues, OpenStack issues and other random issues which are not test
failures. Restarting once in case of failure should reduce the amount
of time people spend investigating these test-unrelated failures. Also
add `interruptible: true` to `init` to make it actually work for the
rest of the jobs.
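The relevant GitLab CI keywords, roughly (an illustrative snippet, not the exact job definitions):

    init:
      interruptible: true # let newer pipelines cancel this job

    .tests:
      retry: 1            # retry once on infra/OpenStack flakes
      interruptible: true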
The execution time seems to be 10% longer at worst, and this will allow
us to safely increase the number of concurrent OpenStack jobs by 75%
(from 40 to 70).
Extend the implementation of mock openid server to take the `grant_type`
into consideration for the `/token` endpoint.
In addition to the previously supported `refresh_token`, the
implementation now also supports `client_credentials`.
This is necessary to make it possible to use the mock server in
the `koji-osbuild` CI, because the builder plugin uses
`client_credentials` to get an access token.
The implementation behaves in the following way:
- For `refresh_token` grant type, it takes the `refresh_token` value
from the request and adds it to the `rh-org-id` field in the custom
claim, which is part of the returned token.
- For `client_credentials` grant type, it takes the `client_secret`
value from the request and adds it to the `rh-org-id` field in the
custom claim, which is part of the returned token.
Requests without a supported `grant_type` are rejected.
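A minimal sketch of this behaviour (the real handler differs and returns a properly signed token):

    package mockoidc

    import (
        "encoding/base64"
        "encoding/json"
        "net/http"
    )

    // tokenHandler implements the /token behaviour described above.
    func tokenHandler(w http.ResponseWriter, r *http.Request) {
        if err := r.ParseForm(); err != nil {
            http.Error(w, "bad request", http.StatusBadRequest)
            return
        }

        // The value for the custom claim depends on the grant type.
        var orgID string
        switch r.FormValue("grant_type") {
        case "refresh_token":
            orgID = r.FormValue("refresh_token")
        case "client_credentials":
            orgID = r.FormValue("client_secret")
        default:
            http.Error(w, "unsupported grant_type", http.StatusBadRequest)
            return
        }

        // Encode the custom claim into a fake, unsigned JWT-like token.
        claims, _ := json.Marshal(map[string]string{"rh-org-id": orgID})
        token := "mock." + base64.RawURLEncoding.EncodeToString(claims) + ".sig"

        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]string{
            "access_token": token,
            "token_type":   "Bearer",
        })
    }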
Modify affected test cases to specify `grant_type` when fetching a new
access token.
Add integration tests for oscap customizations.
They test only the most basic case of oscap remediation.
Mountpoints and additional packages are not added, since these vary
between distros and OpenSCAP profiles, i.e. additional blueprint
customizations would need to be specified for each oscap profile to
ensure the best results.
Add basic validation to ensure that the oscap customizations are valid
and the required fields have been provided. The validation also ensures
that the manifest generation errors out if the oscap customization has
been enabled for an older or unsupported distro.
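A sketch of what the validation amounts to (the type and function names are assumptions):

    package blueprint

    import "fmt"

    // OpenSCAPCustomization is a toy stand-in for the real type.
    type OpenSCAPCustomization struct {
        ProfileID  string
        DataStream string
    }

    // oscapSupported is a stand-in for the real distro capability check.
    func oscapSupported(distro string) bool {
        return distro != "rhel-8.4" // purely illustrative
    }

    // validateOSCAP checks required fields and distro support.
    func validateOSCAP(c *OpenSCAPCustomization, distro string) error {
        if c == nil {
            return nil // nothing to validate
        }
        if c.ProfileID == "" {
            return fmt.Errorf("oscap customization requires a profile id")
        }
        if !oscapSupported(distro) {
            return fmt.Errorf("oscap customizations are not supported for %q", distro)
        }
        return nil
    }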
Add a package with constants for the valid oscap profiles. Add a
function to validate the available profiles against an allow map of
supported profiles. The check handles both exact matches and shorthand
versions of the oscap profiles.
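A sketch of the allow check (the constants shown are examples of real SSG profile IDs; the function and map shapes are assumptions):

    package oscap

    import "strings"

    // Examples of full profile IDs; the real list is longer.
    const (
        Cis    = "xccdf_org.ssgproject.content_profile_cis"
        Ospp   = "xccdf_org.ssgproject.content_profile_ospp"
        PciDss = "xccdf_org.ssgproject.content_profile_pci-dss"
    )

    const profilePrefix = "xccdf_org.ssgproject.content_profile_"

    // IsProfileAllowed accepts either a full profile ID or its
    // shorthand (the part after the common prefix) and checks it
    // against the allow map of full IDs.
    func IsProfileAllowed(profile string, allowed map[string]bool) bool {
        if allowed[profile] {
            return true // exact match
        }
        // shorthand match: "cis" matches "...content_profile_cis"
        return allowed[profilePrefix+strings.TrimPrefix(profile, profilePrefix)]
    }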
To check that embedding containers via the cloud API works, embed a
known test container from our gitlab CI and check that it is
indeed embedded in the image by pulling the commit and poking
into the container storage.