We would benefit from having support for 9.1 downstream, so let's add
it in the form of an alias. This is the bare minimum for proper 9.1
support.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
In the internal deployment, we want to talk to composer over an
HTTP/HTTPS proxy. This adds a new composer.proxy field to the worker
config that causes the worker to connect to composer and the OAuth
server using the specified proxy.
NB: The proxy is not supported when connecting to composer via unix
sockets.
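For example, in the worker config (the section layout is illustrative;
only the new field comes from this change):

    [composer]
    proxy = "http://proxy.example.com:3128"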
For testing this, I added a small HTTP proxy implementation. Please
don't use it in production; it's just good enough for tests.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Just so we don't need to care about all the server-side setup in
individual test cases and can simply reuse it.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Firstly, let's use t.TempDir(); it's less code.
Secondly, let's remove all the code that touches distributions; we can
just use random values. Neither the worker server nor the client
actually inspects any of the values, so they can be completely random.
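For illustration, the t.TempDir() pattern (the test itself is
hypothetical):

    func TestWorkerSetup(t *testing.T) {
        // created per test and removed automatically when the test ends;
        // replaces a manual temp dir plus a deferred cleanup
        tmpdir := t.TempDir()
        _ = tmpdir
    }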
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This test actually verifies that the client code for OAuth works. As
this was the only client-testing code in the file, I think it deserves
its own file.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Whenever we create a new mountpoint due to a user customization,
ensure the layout uses LVM, i.e. convert plain layouts to it, if
needed. This does not apply to rpm-ostree based systems.
Add "lvm2" to the build pipeline and thus generate new manifests
and image infos.
Adjust the existing tests that assumed we cannot create more than 4
partitions on MBR layouts, since that is no longer true.
This is a port from rhel86, commit 63aa155
The change in osPipeline() is required now to fix the Prefix for the
bootloader specification when LVM is used. The unspecified Prefix, which
was previously used for all cases, defaults to "/boot". When the layout
is converted to LVM, a boot partition is created and the BLS Prefix
should be set to "".
In the case where we don't have a partition table, the BLS stage is not
needed, but it was done unconditionally before, so keep the default
image definitions unchanged.
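A minimal sketch of the Prefix selection described above (the helper
name is hypothetical):

    // default BLS Prefix, previously used for all cases
    prefix := "/boot"
    if hasBootPartition(pt) { // hypothetical check for a separate boot partition
        // e.g. after conversion to LVM: BLS entries live on the boot
        // partition itself, so the Prefix must be empty
        prefix = ""
    }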
Co-Authored-By: Achilleas Koutsou <achilleas@koutsou.net>
/ and /usr have minimum sizes defined (1 GiB and 2 GiB respectively).
When /usr is not defined, the minimum size of /usr gets added to the
minimum size for /.
This new test runs through a few scenarios and checks whether the sizes
fit.
This tests that the clampFSSize() function ensures all user-defined
mountpoints are at least 1 GiB.
Added a blueprint with < 1 GiB minsizes to test this.
All blueprints are now tested in TestCreatePartitionTable().
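A sketch of the clamping behavior being tested (the real signature may
differ):

    const GiB = 1 << 30

    // clampFSSize ensures a user-defined mountpoint size is at least 1 GiB
    func clampFSSize(mountpoint string, size uint64) uint64 {
        if size < GiB {
            return GiB
        }
        return size
    }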
Currently, we only specify a minimum size for
- `/` (1 GiB), and
- `/usr` (2 GiB).
This ensures that
- a separate `/usr` partition is at least 2 GiB,
- `/` is always at least 1 GiB,
- if `/usr` is not a separate partition, `/` is at least 3 GiB.
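Sketched as data (the constant and variable names are illustrative):

    const GiB = 1 << 30

    // minimum sizes per mountpoint, as listed above
    var requiredMinSizes = map[string]uint64{
        "/":    1 * GiB,
        "/usr": 2 * GiB,
    }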
We could (or should), in the future, make it possible for image types to
override this mapping as part of their default config, for example, if
an image type by default requires a larger `/usr`.
Makes the test values more readable (without needing comments).
Some values in the default partition table were fixed, e.g., cases where
we had `Size: 1024000, // 500 MB`.
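For illustration (the constant name is an assumption), such a value now
reads:

    Size: 500 * MiB,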
When the partition table did not have a boot partition, we created it
but then _unconditionally_ returned, which meant that we did not create
the LVM skeleton and wrap the root partition. Properly handle this case
and re-initialize `rootPath`, since we change the underlying
`[]Partition` slice in the `PartitionTable` object. Add an extra
blueprint with only one customization which exposes this bug.
Co-Authored-By: Achilleas Koutsou <achilleas@koutsou.net>
The blueprint package set is now depsolved together with the OS package
set in a chain. The result is stored in the package specs sets under the
OS package set name.
In reality, the code was already able to handle `nil` package specs
being passed to pipelines; however, some parts were looking for the
kernel version in the blueprint package specs, which would have been a
bug.
Regenerated affected image test cases.
Use `rpmmd.DepsolvePackageSets()` in the Weldr API compose request
handler, instead of `rpmmd.Depsolve()`.
Extract common code from `API.allRepositories()` and
`API.allRepositoriesByImageType()` to a new method
`API.payloadRepositories()`.
Modify `API.allRepositoriesByImageType()` to return payload repositories
(repositories defined by the user) as a separate slice to enable the use
of `rpmmd.DepsolvePackageSets()`, which requires the package-set-specific
repositories to be passed separately.
Keep using `rpmmd.Depsolve()` in Weldr where appropriate. The
implementation depsolves various simple package sets for multiple API
request handlers and it does not make sense to complicate the code by
moving to `rpmmd.DepsolvePackageSets()`.
Return dummy values from the following methods:
- PackageSets
- PayloadPackageSets
- PackageSetsChains
Use package set names commonly used by recent distro definitions.
Package sets are based on values used by the rpmmd mock implementation.
Adjust two Weldr API unit test checks for the dummy values. Without
this fix, these unit tests would start failing after the move to
`rpmmd.DepsolvePackageSets()` in the Weldr API compose handler.
Add a convenience method `DepsolvePackageSets()` to the `RPMMD`
interface. The method is expected to depsolve all provided package sets
in a chain or separately, based on the provided arguments, and return
depsolved PackageSpecs sets.
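A rough sketch of the method shape this implies (argument names and
types are assumptions, not the actual signature):

    DepsolvePackageSets(
        chains map[string][]string,               // named chains of package set names
        packageSets map[string]PackageSet,        // all package sets by name
        repos []RepoConfig,                       // system repositories
        packageSetsRepos map[string][]RepoConfig, // package-set-specific repositories
    ) (map[string][]PackageSpec, error)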
The intention is to have a single implementation of how package sets
are depsolved and then use it from all places in composer (API and tools
implementations).
Adjust necessary mock implementations and add a unit test testing the
new interface method implementation.
Introduce a new method `PackageSetsChains()` to the `ImageType`
interface, which returns named lists of package sets that should be
depsolved together in a chain.
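Sketched (the exact signature is an assumption):

    // package set names to be depsolved together, keyed by chain name
    PackageSetsChains() map[string][]string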
Extend all distro implementations with the new method.
Add a unit test ensuring that if an image type defines package set name
chains, all of the listed package set names are present in the package
set map returned by the image type.
The method is currently not used anywhere. This is a preparation for
switching from the current way of depsolving to chain depsolving.
Add a new `rpmmdImpl` method `chainDepsolve`, which is able to
depsolve multiple chained package sets as separate DNF transactions
layered on top of each other.
This new method allows depsolving the `blueprint` package set on top of
the base image package set (usually called `packages`).
Introduce a helper function `chainPackageSets` for constructing
arguments to the `chainDepsolve` method from the following inputs
(sketched after the list):
- slice of package set names to chain as transactions
- map of package sets
- slice of system repositories used by all package sets
- map of package-set-specific repositories
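A sketch of the helper's shape implied by these inputs (names, types and
error handling are assumptions):

    func chainPackageSets(
        chain []string,                                 // package set names, in transaction order
        packageSets map[string]rpmmd.PackageSet,        // all package sets by name
        systemRepos []rpmmd.RepoConfig,                 // repositories shared by all sets
        packageSetsRepos map[string][]rpmmd.RepoConfig, // per-set repositories
    ) ([]rpmmd.PackageSet, [][]rpmmd.RepoConfig, error) {
        sets := make([]rpmmd.PackageSet, 0, len(chain))
        repos := make([][]rpmmd.RepoConfig, 0, len(chain))
        for _, name := range chain {
            set, ok := packageSets[name]
            if !ok {
                return nil, nil, fmt.Errorf("package set %q is not defined", name)
            }
            sets = append(sets, set)
            // each transaction sees the system repositories plus the
            // repositories specific to its package set
            r := append([]rpmmd.RepoConfig{}, systemRepos...)
            r = append(r, packageSetsRepos[name]...)
            repos = append(repos, r)
        }
        return sets, repos, nil
    }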
Extend `dnf-json` with a new command `chain-depsolve`, allowing
depsolving of multiple transactions in a row, layered on top of each
other.
Add unit tests where appropriate.
New function that ensures a partition can hold the sum of all the
required sizes of specific directories on that partition. The function
sums the required directory sizes grouped by mountpoint and then resizes
the entity path of that Mountable.
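A minimal sketch of the grouping step (helper and variable names are
hypothetical):

    // sum the required sizes per mountpoint; mountpointFor is a
    // hypothetical lookup from a directory to the mountpoint holding it
    required := map[string]uint64{}
    for dir, size := range requiredDirSizes {
        required[mountpointFor(dir)] += size
    }
    // the entity path of each Mountable is then resized so the partition
    // can hold at least required[mountpoint]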
The generated GCP name had an invalid `.tar.gz` extension. This
extension still needs to be supplied for the object name, however.
The integration tests supply the image name rather than relying on the
generated one, which is why this slipped through.
Add support for importing the GCE image into GCP using the Weldr API.
The credentials to be used can be specified in the upload settings and
will then be used by the worker to authenticate with GCP.
The GCP target credentials are passed to Weldr API as base64 encoded
content of the GCP credentials JSON file. The reason is that the JSON
file contains many values and its format could change in the future.
This way, the Weldr API does not rely on the credentials file content
format in any way.
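For illustration, producing the expected value in Go (the file path is
an example):

    package main

    import (
        "encoding/base64"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // read the GCP credentials JSON and base64-encode its content,
        // as expected by the Weldr upload settings described above
        b, err := os.ReadFile("gcp-credentials.json")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(base64.StdEncoding.EncodeToString(b))
    }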
Add a new test case for the GCP upload via Weldr and run it in CI.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Refactor the handling of GCP credentials in the worker to be equivalent
to what is done for AWS. The main idea is that the code decides which
credentials to use when processing each job. This change will allow
preferring credentials passed via upload `TargetOptions` with the job
over the credentials configured in the worker's configuration or the
default way of authenticating implemented by the Google library.
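A sketch of the per-job resolution order (field names are hypothetical):

    // 1. credentials passed with the job via upload TargetOptions
    creds := targetOptions.Credentials
    if creds == nil {
        // 2. credentials from the worker's configuration
        creds = workerConfig.GCPCredentials
    }
    // 3. if still nil, fall back to the default authentication
    //    implemented by the Google library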
Move the loading of GCP credentials into a `NewFromFile()` function in
the internal `gcp` library, accepting a path to the credentials file.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Delete all internal `cloud/gcp` API related to importing virtual images
to GCP using Cloud Build API. This API is no longer needed.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Introduce a new `ComputeImageInsert()` method for importing images into
GCP. It uses the `compute.Images.Insert()` API [1], which has many
advantages over the currently used way of importing images using the
CloudBuild API. The advantages are mainly that the image is imported as
is, and no additional cache files or VMs are created as part of the
import process. Therefore, there is no need for additional cleanup of
cache files after importing the image.
In addition, the import itself is approximately 30% faster for RHEL
images when using the `Insert()` call.
Nevertheless, the `Insert()` call accepts only a gzip-ed tarball with a
RAW disk, unlike the `Import()` call, which accepts basically any
virtual disk format.
[1] https://cloud.google.com/compute/docs/reference/rest/v1/images/insert
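A minimal sketch of the underlying call using
google.golang.org/api/compute/v1 (variable names and the object URL are
illustrative):

    // the image must point at a gzip-ed tarball with a RAW disk,
    // uploaded to Cloud Storage beforehand
    image := &compute.Image{
        Name: imageName,
        RawDisk: &compute.ImageRawDisk{
            Source: "https://storage.googleapis.com/bucket/image.tar.gz",
        },
    }
    op, err := computeService.Images.Insert(project, image).Do()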
Signed-off-by: Tomas Hozza <thozza@redhat.com>