Pipelines that don't require packages didn't need to implement the
serializeStart() method, but now we need to set the resolved ostree
commit spec when a pipeline requires it.
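As a rough sketch (the real pipeline interface and ostree types in the
codebase differ), a pipeline that carries ostree content could take the
resolved commit in serializeStart(); CommitSpec and ostreePipeline below
are invented for illustration:

    package main

    import "fmt"

    // Hypothetical resolved ostree commit spec; the real type in the
    // codebase looks different.
    type CommitSpec struct {
        Ref      string
        URL      string
        Checksum string
    }

    // A pipeline that embeds ostree content keeps the resolved commit for
    // serialization; pipelines without payload content keep a no-op default.
    type ostreePipeline struct {
        commit *CommitSpec
    }

    func (p *ostreePipeline) serializeStart(commits []CommitSpec) {
        if len(commits) > 0 {
            p.commit = &commits[0]
        }
    }

    func main() {
        p := &ostreePipeline{}
        p.serializeStart([]CommitSpec{{
            Ref:      "fedora/38/x86_64/iot",
            URL:      "https://example.com/repo",
            Checksum: "sha256-of-the-commit",
        }})
        fmt.Println(p.commit.Checksum)
    }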
1. Run RHEL for Edge CI on osbuild/rhel-edge-ci repo
2. Use the released RHEL 8.8 and 9.2 boot ISOs
3. Extend VM memory to 3072 MiB in ostree.sh to fix the error
"Overriding memory to 3072 MiB needed for centos-stream9 network install."
4. Install and start firewalld, and configure the VM network as a trusted zone
osbuild can return a JSON object with details about manifest validation
errors. This adds support for saving those and printing them when the
Write function is called, e.g. when using composer-cli compose log UUID.
Includes tests for the new behavior.
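A minimal sketch of the idea in Go; the ValidationError/ValidationResult
shape is an assumption, not the actual osbuild schema, and Write is
simplified:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "os"
    )

    // Hypothetical shape of the validation error object; the real osbuild
    // output schema may differ.
    type ValidationError struct {
        Message string   `json:"message"`
        Path    []string `json:"path"`
    }

    type ValidationResult struct {
        Valid  bool              `json:"valid"`
        Errors []ValidationError `json:"errors"`
    }

    // Write prints the saved validation errors, e.g. when
    // composer-cli compose log UUID fetches the compose log.
    func Write(w io.Writer, res *ValidationResult) {
        for _, e := range res.Errors {
            fmt.Fprintf(w, "manifest validation error at %v: %s\n", e.Path, e.Message)
        }
    }

    func main() {
        raw := []byte(`{"valid": false, "errors": [{"message": "unknown stage", "path": ["pipelines", "0"]}]}`)
        var res ValidationResult
        if err := json.Unmarshal(raw, &res); err != nil {
            panic(err)
        }
        Write(os.Stdout, &res)
    }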
Creates the 'edge-ami' image type based on edgeRawImage, which generates
a raw image (x86_64, aarch64) ready to upload to AWS EC2.
This 'edge-ami' image type has Ignition support.
Signed-off-by: Irene Diez <idiez@redhat.com>
cloud-init and bash should be everywhere, so there's no point in specifying
them as a customization. In fact, it might mask errors if we ever stop
installing bash or enabling cloud-init.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Our package set was quite outdated. Let's sync it with the Fedora Cloud
kickstart:
https://pagure.io/fedora-kickstarts/blob/c3b160775a3b159f949ba931dcb68e520a460e12/f/fedora-cloud-base.ks
I manually compared built images and our image contains kernel, kernel-modules
and linux-firmware on top of the official package set:
- removing kernel and kernel-modules is tricky, because we always assume that
the default kernel is called kernel.
- removing linux-firmware is a big hack, since it breaks the RPM dependencies
inside the image.
Thus, let's ignore these for now; it's definitely better than before.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The Fedora Cloud SIG considers the qcow2 (cloud base) image a good fit for
OpenStack as well. Let's make our openstack image just an alias of qcow2.
Once again, this removes some duplicated code.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Fedora doesn't build a separate AMI image. The ordinary Cloud Base image
is what's used for AWS.
Here's the code that's responsible for uploading EC2 images; it searches
for Cloud raw.xz images:
fcbface137/fedimg/consumers.py (L114)
And here's the pungi configuration showing that qcow2 and raw.xz are actually
built from the same kickstart:
https://pagure.io/pungi-fedora/blob/e080c0702f38c033025889e5fcc8d9fee5bb2026/f/fedora-cloud.conf#_142
Thus, we can just do the same thing: Let's base the AMI on the qcow2 to save
some bytes.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
F38 is already GA and the latest snapshots reflect that (specifically,
they no longer contain the word 'branched' in the URL).
Modify the F38 repo definitions.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Keep the F36 GA repos on their singleton snapshot.
Pin the F36 updates repos to the 20230515 snapshot, which is the last
snapshot of these repos that we took.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Before, this was done in the PackageSets() function.
The reason for this is that having an ostree ref affects package
selection (for example, it adds rpm-ostree). At the package-selection
phase, it doesn't matter what the ostree ref is: it is only used to
determine whether a pipeline is for an ostree-based image type, and it
doesn't affect non-ostree-based image types because the image functions
ignore it.
This is only needed in the cloudapi now because other places have
switched to using the new order of operations, where the manifest is
generated after the ostree commit is resolved, so it's always added when
needed.
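A toy sketch of the effect described above; the function and package
names are made up:

    package main

    import "fmt"

    // Toy sketch of how an ostree ref influences package selection: only the
    // presence of the ref matters, not its value. Names are illustrative, not
    // the real distro code.
    func osPackageSet(ostreeRef string) []string {
        pkgs := []string{"kernel", "dracut"}
        if ostreeRef != "" {
            // ostree-based image types pull in rpm-ostree
            pkgs = append(pkgs, "rpm-ostree")
        }
        return pkgs
    }

    func main() {
        fmt.Println(osPackageSet(""))                     // non-ostree image type
        fmt.Println(osPackageSet("fedora/38/x86_64/iot")) // ostree image type
    }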
To add the container specs to Serialize(), we need to map them to the
payload (OS) pipeline. We assume the first name in the image type's
PayloadPipelines() list is the OS pipeline, which is true of all image
types right now but might not necessarily hold in the future.
This is a temporary workaround. Eventually, the mapping will be set by
the image type itself when we use the container source specs attached to
the Manifest object.
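A small sketch of the workaround with invented types (imageType,
ContainerSpec, containerSpecMap); the real Serialize() plumbing is more
involved:

    package main

    import "fmt"

    // Hypothetical resolved container spec; the real type differs.
    type ContainerSpec struct {
        Source string
        Digest string
    }

    type imageType struct {
        payloadPipelines []string
    }

    func (t imageType) PayloadPipelines() []string { return t.payloadPipelines }

    // The workaround: assume the first payload pipeline is the OS pipeline and
    // key the resolved container specs on its name for serialization.
    func containerSpecMap(t imageType, containers []ContainerSpec) map[string][]ContainerSpec {
        osPipeline := t.PayloadPipelines()[0]
        return map[string][]ContainerSpec{osPipeline: containers}
    }

    func main() {
        t := imageType{payloadPipelines: []string{"os", "image"}}
        specs := []ContainerSpec{{Source: "quay.io/example/app", Digest: "sha256:abc"}}
        fmt.Println(containerSpecMap(t, specs))
    }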
This doesn't yet use the new process for generating the manifest exactly.
This commit only replaces the call to PackageSets() with a call to
Manifest() to get the Content.PackageSets. This is essentially the same
as before, when we were initialising the manifest object twice.
The Manifest() function does a tiny bit more work than PackageSets(),
but the extra work is minimal, and we gain the benefit of a single code
path that, although it's run twice, is always run in the same way.
Use the new workflow for generating the manifest before resolving
containers.
The resolver function is adjusted to handle a map of container slices,
but in all current use cases the map should only ever have one key, for
the payload (os) pipeline.
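A simplified sketch of such a resolver; the ContainerSource/ContainerSpec
types and the faked digest are placeholders for the real registry
resolution:

    package main

    import "fmt"

    type ContainerSource struct{ Source string }
    type ContainerSpec struct{ Source, Digest string }

    // Sketch of a resolver that takes a map of container source slices keyed
    // by pipeline name; today the map holds a single "os" key, but the
    // signature leaves room for more.
    func resolveContainers(sources map[string][]ContainerSource) map[string][]ContainerSpec {
        resolved := make(map[string][]ContainerSpec, len(sources))
        for plName, srcs := range sources {
            specs := make([]ContainerSpec, 0, len(srcs))
            for _, s := range srcs {
                // A real resolver would contact the registry; here we fake a digest.
                specs = append(specs, ContainerSpec{Source: s.Source, Digest: "sha256:deadbeef"})
            }
            resolved[plName] = specs
        }
        return resolved
    }

    func main() {
        out := resolveContainers(map[string][]ContainerSource{
            "os": {{Source: "quay.io/example/app"}},
        })
        fmt.Println(out)
    }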
Demonstrate the new workflow for resolving containers.
1. First call Manifest().
2. Get container SourceSpecs from manifest struct.
3. Resolve them.
4. Serialize() with resolved container specs.
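The same four steps as a compilable toy example; every type and method
body here is a stand-in for the real manifest code:

    package main

    import "fmt"

    type ContainerSource struct{ Source string }
    type ContainerSpec struct{ Source, Digest string }

    type Manifest struct {
        containerSources map[string][]ContainerSource
    }

    func (m Manifest) GetContainerSourceSpecs() map[string][]ContainerSource {
        return m.containerSources
    }

    func (m Manifest) Serialize(containers map[string][]ContainerSpec) string {
        return fmt.Sprintf("manifest serialized with %d container spec group(s)", len(containers))
    }

    func main() {
        // 1. build the manifest (stands in for ImageType.Manifest())
        m := Manifest{containerSources: map[string][]ContainerSource{
            "os": {{Source: "quay.io/example/app"}},
        }}
        // 2. get the container source specs from the manifest struct
        sources := m.GetContainerSourceSpecs()
        // 3. resolve them (faked here with a static digest)
        resolved := map[string][]ContainerSpec{}
        for name, srcs := range sources {
            for _, s := range srcs {
                resolved[name] = append(resolved[name], ContainerSpec{Source: s.Source, Digest: "sha256:fake"})
            }
        }
        // 4. serialize with the resolved container specs
        fmt.Println(m.Serialize(resolved))
    }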
The changes in the test manifests only affect the information about the
container sources (which was a slice but is now a map); the actual
manifest object isn't affected.
The TestDistro_Manifest test in distro_test_common is adapted
accordingly as well.
When creating a Manifest object, collect container SourceSpecs instead
of resolved Specs.
This is the same way we handle packages: The blueprint option is
converted to source specs and attached to the Manifest object during
creation. Later, the SourceSpecs will be resolved to full container
Specs and used during serialization.
Much like the GetPackageSetChains() manifest method, these two new
methods collect the container and ostree source specifications from the
pipelines that support them. Currently, only one pipeline per manifest
contains references to containers or ostree commits, but we collect them
in a map, keyed by the pipeline name, both for consistency with the
package sets and for any potential future changes that may require
differentiating which pipeline a content source belongs to.
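A rough sketch of the pattern; the pipeline and source types are toys,
and the method bodies only illustrate the per-pipeline keying:

    package main

    import "fmt"

    type ContainerSource struct{ Source string }
    type OSTreeSource struct{ Ref, URL string }

    // Toy pipeline: only some pipelines carry container or ostree content,
    // mirroring the fact that only one pipeline per manifest does today.
    type pipeline struct {
        name       string
        containers []ContainerSource
        commits    []OSTreeSource
    }

    type Manifest struct{ pipelines []pipeline }

    func (m Manifest) GetContainerSourceSpecs() map[string][]ContainerSource {
        specs := map[string][]ContainerSource{}
        for _, p := range m.pipelines {
            if len(p.containers) > 0 {
                specs[p.name] = p.containers
            }
        }
        return specs
    }

    func (m Manifest) GetOSTreeSourceSpecs() map[string][]OSTreeSource {
        specs := map[string][]OSTreeSource{}
        for _, p := range m.pipelines {
            if len(p.commits) > 0 {
                specs[p.name] = p.commits
            }
        }
        return specs
    }

    func main() {
        m := Manifest{pipelines: []pipeline{
            {name: "build"},
            {name: "os", containers: []ContainerSource{{Source: "quay.io/example/app"}}},
        }}
        fmt.Println(m.GetContainerSourceSpecs())
        fmt.Println(m.GetOSTreeSourceSpecs())
    }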
The ImageType.PackageSets() function is going away and instead we will
rely on the ImageType.Manifest() function to both prepare the manifest
and return the package sets. The Manifest() function should never be
called without an ostree ref for ostree type images.
Use the new manifest generation procedure in the distro tests.
Use assert instead of require in TestImageTypePipelineNames to continue
running the rest of the subtests after a failure.
Some initialisations (image options and blueprint customizations) had to
be adjusted to work with the ImageType.Manifest() function.
Some helper functions in distro_test_common are no longer necessary and
have been removed.
The TestPipelineRepositories and TestImageTypePipelineNames tests must
be (partially) skipped for image types which specify a workload
(currently only azure-eap7-rhui), because such image types ignore payload
repositories.
In getPackageSetChain(), the workload repositories did not include the
ExtraBaseRepos.
In serialize(), when creating the rpm stage options (which collect
repository GPG keys), only the base repos were used, which is why we
previously had to merge repositories. Instead of merging repositories
in the calling function in distro, we should keep them separated so that
we can easily distinguish which repositories are only meant for the
blueprint or workload when we need to.
The merging of payload repositories into the os pipeline had the
unwanted side effect of using the payload repos for the first depsolve
in the os chain when instead they should only be used for the second
(the depsolve for the blueprint or workload packages). This wasn't an
issue before because we didn't do the merging in the PackageSets()
function, but now we rely on the Manifest() function for that
functionality instead.
Before the merging of the two functions, the PackageSets() function did
not merge repositories and the repository-to-package-set mapping was
maintained correctly, but the merging was needed in the Manifest()
function so that rpm stage options were generated for all repositories.
With this change, we are removing the merging so that the mapping is
maintained, and will fix the rpm stage option generation in the pipeline
generator.
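A toy illustration of the repo-to-package-set mapping we want for the os
chain; the repo names and the two-link chain are simplified assumptions:

    package main

    import "fmt"

    type Repo struct{ Name string }

    type PackageSet struct {
        Include []string
        Repos   []Repo
    }

    // Toy two-link depsolve chain for the os pipeline: the base set only sees
    // the base repos, while the blueprint/workload set additionally sees the
    // payload repos. Keeping the repo lists separate (instead of merging them
    // up front) is what preserves this mapping.
    func osChain(base, payload []Repo, basePkgs, bpPkgs []string) []PackageSet {
        blueprintRepos := append(append([]Repo{}, base...), payload...)
        return []PackageSet{
            {Include: basePkgs, Repos: base},
            {Include: bpPkgs, Repos: blueprintRepos},
        }
    }

    func main() {
        chain := osChain(
            []Repo{{Name: "baseos"}, {Name: "appstream"}},
            []Repo{{Name: "custom-payload"}},
            []string{"kernel"},
            []string{"nginx"},
        )
        for i, ps := range chain {
            fmt.Printf("depsolve %d: packages=%v repos=%v\n", i+1, ps.Include, ps.Repos)
        }
    }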
Use the new manifest generation procedure in the Weldr API.
Update the test distro to include the same packages from the PackageSets()
method in Manifest.Content.PackageSets.
Use the new manifest generation procedure in the cmd line tools. The
new procedure doesn't rely on ImageType.PackageSets() to compute the
packages for the depsolving. Instead, it calls Manifest() and depsolves
the packages attached to the returned object
(manifest.Content.PackageSets).
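Roughly what the tools do now, sketched with simplified stand-in types
rather than the real manifest and depsolver:

    package main

    import "fmt"

    type PackageSet struct{ Include []string }

    type Content struct {
        PackageSets map[string][]PackageSet
    }

    type Manifest struct{ Content Content }

    // Stand-in for the per-chain depsolve the command line tools perform.
    func depsolve(chain []PackageSet) []string {
        var resolved []string
        for _, ps := range chain {
            resolved = append(resolved, ps.Include...)
        }
        return resolved
    }

    func main() {
        // The manifest returned by ImageType.Manifest() carries the package
        // sets to depsolve; no separate PackageSets() call is needed.
        manifest := Manifest{Content: Content{PackageSets: map[string][]PackageSet{
            "os":    {{Include: []string{"kernel", "dracut"}}},
            "build": {{Include: []string{"rpm", "dnf"}}},
        }}}
        for name, chain := range manifest.Content.PackageSets {
            fmt.Printf("%s: %v\n", name, depsolve(chain))
        }
    }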
Copy the functionality of the ImageType.PackageSets() methods into
ImageType.Manifest() for each distro.
The Manifest() method now collects all package sets and repositories
from the blueprint and image type and after generating the Manifest
instance, calls the GetPackageSetChains() method to attach the computed
package sets to the Manifest before returning it.
The package sets passed into the call are now renamed to staticPackageSets
to differentiate them from the dynamic (computed) package sets that are
affected by the manifest generation.
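A simplified sketch of that flow; staticPackageSets and
GetPackageSetChains follow the naming above, everything else is invented
for illustration:

    package main

    import "fmt"

    type PackageSet struct{ Include []string }

    type Manifest struct {
        chains map[string][]PackageSet
    }

    // Stand-in for the method that returns the dynamic (computed) package set
    // chains gathered while instantiating the manifest.
    func (m Manifest) GetPackageSetChains() map[string][]PackageSet {
        return m.chains
    }

    type Content struct {
        PackageSets map[string][]PackageSet
    }

    // Sketch of an ImageType.Manifest() that starts from the static package
    // sets, appends the blueprint packages, and attaches the computed chains
    // to the returned content.
    func imageTypeManifest(staticPackageSets map[string][]PackageSet, bpPackages []string) Content {
        m := Manifest{chains: map[string][]PackageSet{}}
        for name, sets := range staticPackageSets {
            m.chains[name] = append(m.chains[name], sets...)
        }
        m.chains["os"] = append(m.chains["os"], PackageSet{Include: bpPackages})
        return Content{PackageSets: m.GetPackageSetChains()}
    }

    func main() {
        content := imageTypeManifest(
            map[string][]PackageSet{"os": {{Include: []string{"kernel"}}}},
            []string{"vim-enhanced"},
        )
        fmt.Println(content.PackageSets["os"])
    }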
Pass the entire Blueprint to Manifest() instead of just the
Customizations. The goal is to combine the functionality of the
ImageType.PackageSets() and ImageType.Manifest() methods into one call.