This value is set in the worker config. In the future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
Those images are forced to be 64GiB in size but mostly consist of zeros.
This makes them hard to handle, e.g. uploading to brew takes forever.
vhdPipelines is converted into a function that returns a pipelinesFunc.
It takes a single argument, `compress`, which adds the compression
pipeline bits when `true` and returns exactly the old pipeline when
`false`.
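A minimal, self-contained Go sketch of the shape described above; the pipeline names and simplified types are placeholders, not the real osbuild-composer signatures:

```go
// Simplified sketch: vhdPipelines becomes a factory returning the
// pipeline-building function, and the compress flag decides whether the
// compression pipeline is appended after the plain VPC/VHD pipeline.
package main

import "fmt"

type pipeline struct{ name string }

type pipelinesFunc func(filename string) []pipeline

func vhdPipelines(compress bool) pipelinesFunc {
	return func(filename string) []pipeline {
		// The original VPC/VHD pipeline stays exactly as before.
		pipelines := []pipeline{{"build"}, {"os"}, {"image"}, {"vpc"}}
		if compress {
			// Only with compress=true do we add the compression bits.
			pipelines = append(pipelines, pipeline{"xz: " + filename})
		}
		return pipelines
	}
}

func main() {
	for _, p := range vhdPipelines(true)("disk.vhd.xz") {
		fmt.Println(p.name)
	}
}
```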
API
---
Allow the user to pass the CA public certificate or to skip the certificate verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method, newAwsFromCredsWithEndpoint, for Generic S3, which sets the endpoint and optionally overrides the CA bundle or skips the SSL certificate verification
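A hedged Go sketch of roughly what newAwsFromCredsWithEndpoint could look like using the AWS SDK for Go v1; the exact signature, return type, and internals in composer are assumptions here:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newAwsFromCredsWithEndpoint (sketch): point the SDK at a custom
// Generic S3 endpoint and either trust an extra CA bundle or skip TLS
// verification entirely.
func newAwsFromCredsWithEndpoint(creds *credentials.Credentials, region, endpoint, caBundle string, skipSSLVerification bool) (*session.Session, error) {
	transport := &http.Transport{TLSClientConfig: &tls.Config{}}

	if caBundle != "" {
		pem, err := os.ReadFile(caBundle)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(pem)
		transport.TLSClientConfig.RootCAs = pool
	}
	if skipSSLVerification {
		transport.TLSClientConfig.InsecureSkipVerify = true
	}

	return session.NewSession(&aws.Config{
		Credentials:      creds,
		Region:           aws.String(region),
		Endpoint:         aws.String(endpoint),
		S3ForcePathStyle: aws.Bool(true), // generic S3 servers usually need path-style addressing
		HTTPClient:       &http.Client{Transport: transport},
	})
}

func main() {
	creds := credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", "")
	sess, err := newAwsFromCredsWithEndpoint(creds, "us-east-1", "https://s3.example.com", "", true)
	fmt.Println(sess != nil, err)
}
```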
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into HTTP, HTTPS with a certificate, and HTTPS skipping the certificate check
Create a new base test for S3 over HTTPS for secure and insecure connections
Move the generic S3 test to tools to reuse for secure and insecure connections
All S3 tests now use the aws cli tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
When a test script fails in CI, it's often difficult to pinpoint the
exact line in the log where the script failed and the cleanup() function
(trapped on EXIT) begins.
Adding a prominent line (with greenprint where available) at the start
of the cleanup function will make reading logs of failed jobs a lot
easier.
Something odd is happening with the package check and it keeps failing
mysteriously even though the package is clearly in the list.
Changing the verification method to extract `passwd` and `packages` from
the image info file into separate files and grepping those seems to
work.
Reasons for this change:
- Mixed versions of composer and worker aren't a realistic use-case for
the weldr API (on prem) but we do run mixed versions in hosted IB, so
this test is closer to real world scenarios.
- The cloud API runs depsolve jobs in the worker, whereas the weldr API
runs them in composer. By testing the cloud API we also test the
backwards compatibility of the depsolve job.
The change requires osbuild-worker v51 or newer to be able to handle
depsolve and manifest jobs on the worker as well as depsolve chains.
Add support for building images for the Azure marketplace: add a
new image type "azure-rhui" that can be used to build images
tailored to the Azure marketplace.
This code is based on the corresponding image type in 8.6.
NB: does not have systemd-resolved (following RHEL 9 defaults)
It seems the rhel-91 qcow2 customize images are out of sync because commit
2beb707 removed the core group from `format-request-map.json` and some of
these manifests were generated between that commit and the one that added
it back, 1ff36bce9.
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with 'us' will be used.
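A minimal Go sketch of the prefix-filter-and-pick logic described above; the function name and region list are illustrative, and the actual test scripts query GCE for regions with free quota:

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// pickRegion filters the available regions by the GCP_REGION prefix and
// returns a random match. An exact region name such as "us-west1" matches
// only itself, while a multi-region prefix such as "us" can yield any
// region starting with "us".
func pickRegion(available []string, prefix string) (string, error) {
	var candidates []string
	for _, r := range available {
		if strings.HasPrefix(r, prefix) {
			candidates = append(candidates, r)
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no region matches prefix %q", prefix)
	}
	return candidates[rand.Intn(len(candidates))], nil
}

func main() {
	regions := []string{"us-west1", "us-central1", "europe-west1"}
	r, _ := pickRegion(regions, "us")
	fmt.Println("using region:", r)
}
```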
Added code in fedora/pipelines.go to add the subformat field in the
manifests
Added manifests for f34 and f35 for x86_64 only (image type not
available in aarch64)
* IoT image types now correctly point to the fedora-identity-iot package
* QCOW2, VMDK and OCI types use Fedora Cloud as identity package
* Changed the default target for AMI from graphical.target to multi-user.target. This matches the behaviour of the RHEL types, which all target multi-user.target.
* Re-added the image-info field for some manifests, which was missing due to issues regenerating the manifests.
Modify pipelines in all distro definitions to produce stream-optimized VMDK
image.
Regenerate all VMDK test cases.
Bump worker dependency on osbuild to the version supporting VMDK
subformat in both QEMU assembler and stage
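A hedged sketch of what the subformat change boils down to; the struct and field names here are stand-ins for the real QEMU stage/assembler options schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmdkFormat is an illustrative stand-in for the QEMU image format
// options; the real osbuild stage nests these under the stage's
// "format" object.
type vmdkFormat struct {
	Type      string `json:"type"`      // "vmdk"
	Subformat string `json:"subformat"` // e.g. "streamOptimized"
}

func main() {
	// The change amounts to emitting a subformat next to the format
	// type so qemu-img produces a stream-optimized VMDK.
	f := vmdkFormat{Type: "vmdk", Subformat: "streamOptimized"}
	out, _ := json.MarshalIndent(f, "", "  ")
	fmt.Println(string(out))
}
```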
This is the bare minimum for our downstream test suite to pass (otherwise
it would fail on non-existent 8.7 CDN repositories).
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We would benefit from having support for 9.1 downstream, so let's add it in
the form of an alias. This is the bare minimum for proper 9.1 support.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Whenever we create a new mountpoint due to a user customization,
ensure the layout uses LVM, i.e. convert plain layouts to it, if
needed. This does not apply to rpm-ostree based systems.
Add "lvm2" to the build pipeline and thus generate new manifests
and image infos.
Adjust the existing tests that assumed we cannot create more than
4 partitions on MBR layouts, since that is no longer true.
This is a port from rhel86, commit 63aa155
The change in osPipeline() is now required to fix the Prefix for the
bootloader specification when LVM is used. The unspecified Prefix, which
was previously used for all cases, defaults to "/boot". When the layout
is converted to LVM, a boot partition is created and the BLS Prefix
should be set to "".
In the case where we don't have a partition table, the BLS stage is not
needed, but it was done unconditionally before, so keep the default
image definitions unchanged.
Co-Authored-By: Achilleas Koutsou <achilleas@koutsou.net>
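A simplified Go sketch of the conversion rule described above; the types and helper names are placeholders rather than the real disk package API:

```go
package main

import "fmt"

type partitionTable struct {
	Type       string // "plain" or "lvm"
	Partitions []string
}

// ensureLVM converts a plain layout to LVM whenever the blueprint adds
// custom mountpoints; rpm-ostree based images are left untouched.
// (When the conversion happens, the real code also creates a boot
// partition and sets the BLS Prefix to "" instead of the default "/boot".)
func ensureLVM(pt partitionTable, customMountpoints []string, rpmOstree bool) partitionTable {
	if rpmOstree || len(customMountpoints) == 0 {
		return pt
	}
	if pt.Type != "lvm" {
		pt.Type = "lvm"
		// On MBR this also lifts the old 4-primary-partition limit,
		// since new mountpoints become logical volumes rather than
		// extra partitions.
	}
	pt.Partitions = append(pt.Partitions, customMountpoints...)
	return pt
}

func main() {
	pt := ensureLVM(partitionTable{Type: "plain"}, []string{"/var/lib/data"}, false)
	fmt.Printf("%+v\n", pt)
}
```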
Added a filesystem customization to the qcow2 test case to test that the
filesystem is converted to an LVM layout.
Set overrides for distros that don't support fs customizations.
The purpose of the test is to check that the dnf-json socket can be
started automatically when running the test case generator while the
service or socket isn't enabled/started.
dnf-json will still be used to depsolve the packages and create the
manifest even if the image is not built.
Blueprint package set is now depsolved together with the OS package set
in a chain. The result is stored in the package specs sets under the OS
package set name.
In reality, the code was able to handle `nil` package specs being
passed to pipelines; however, some parts were looking for the kernel
version in the blueprint package specs, which would be a bug.
Regenerated affected image test cases.
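A hedged sketch of the chained depsolve idea; the types here are simplified stand-ins for the real package set and chain structures:

```go
package main

import "fmt"

// packageSet is a simplified stand-in for rpmmd.PackageSet.
type packageSet struct {
	Include []string
	Exclude []string
}

// chainWithBlueprint appends the blueprint packages as another link of
// the OS package set chain, so both are depsolved together and the
// result is stored under the OS package set name.
func chainWithBlueprint(osChain []packageSet, blueprintPackages []string) []packageSet {
	if len(blueprintPackages) == 0 {
		return osChain
	}
	return append(osChain, packageSet{Include: blueprintPackages})
}

func main() {
	chains := map[string][]packageSet{
		"os": {{Include: []string{"kernel", "dracut"}}},
	}
	chains["os"] = chainWithBlueprint(chains["os"], []string{"vim-enhanced"})
	// The depsolved result for the whole chain ends up under "os",
	// so pipelines can still find the kernel version there.
	fmt.Printf("%+v\n", chains["os"])
}
```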
Follow-up to 60db6ad06f
The SHA-1 key is no longer supported in RHEL 9.0. This isn't a problem
for RHEL 8.x in general, but it prevents cross building RHEL 8.x images
on RHEL 9.0, since the host (RHEL 9.0) rpm and openssl cannot import the
older keys and we fail to bootstrap the build root for the new image if
the source repositories use SHA-1 keys.
Related rhbz#2058497 (Comment 18).
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
- 2 space indent
- lists on multiple lines
- newlines at EOF
This was accomplished by simply running each file through `jq` with no
arguments.
It is also equivalent to Python's `json.dump(..., indent=2)` plus the
added newline.
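For reference, a Go sketch that reproduces the same normalization; the repository itself used `jq`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"b":[1,2],"a":"x"}`)

	// Re-indent with a two-space indent, like jq's default output and
	// Python's json.dump(..., indent=2).
	var v interface{}
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	out, _ := json.MarshalIndent(v, "", "  ")
	fmt.Println(string(out)) // Println adds the trailing newline at EOF
}
```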