The image already used a base partition table with the necessary layout
to support hybrid boot mode, so the change was just a matter of
modifying the associated platform.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The Fedora distro definition contained an empty `s390x` architecture
with no image types added to it. Let's remove it from the distro
definition, since it adds no value in its current form.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Explicitly specify the AMI boot mode in the AWS upload target in the
Cloud API compose handler. The value is determined from the image
type's boot mode.
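For illustration, the mapping could look roughly like the sketch below;
the handler-side helper, the boot mode strings and the use of a plain
string option are assumptions, not the actual code:

```go
package handlers // sketch; all names here are illustrative stand-ins

// awsBootMode maps an image type's boot mode string to the value placed
// in the AWS upload target options; nil leaves the option unset.
func awsBootMode(imageBootMode string) *string {
	var v string
	switch imageBootMode {
	case "legacy":
		v = "legacy-bios"
	case "uefi":
		v = "uefi"
	case "hybrid":
		v = "uefi-preferred"
	default:
		return nil // unknown or unset: keep the default EC2 behavior
	}
	return &v
}
```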
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Explicitly specify the AMI boot mode in the AWS upload target in the
Weldr API compose handler. The value is determined from the image
type's boot mode.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add an optional `BootMode` field to the AWS target options.
This allows signaling to the worker the intended boot mode to use when
registering the AMI in AWS. If not specified, the default behavior is
preserved: the boot mode is determined by the default boot mode of the
instance provisioned from the AMI.
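A rough sketch of the extended options; the field and type names and the
JSON tags are illustrative, not the exact ones used in the codebase:

```go
package target // sketch; not the actual package layout

// BootMode mirrors the EC2 boot mode values; whether the real option
// uses a plain string or an SDK enum is an assumption here.
type BootMode string

const (
	BootModeLegacy        BootMode = "legacy-bios"
	BootModeUEFI          BootMode = "uefi"
	BootModeUEFIPreferred BootMode = "uefi-preferred"
)

// AWSTargetOptions sketch: only the new field matters here, the rest
// are placeholders.
type AWSTargetOptions struct {
	Region string `json:"region"`
	Bucket string `json:"bucket"`
	Key    string `json:"key"`

	// BootMode is optional; when nil, the AMI is registered without an
	// explicit boot mode and EC2 falls back to its default behavior.
	BootMode *BootMode `json:"boot_mode,omitempty"`
}
```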
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
When the AMI is being registered from a snapshot, the caller can
optionally specify the boot mode of the AMI. If no boot mode is
specified, the default behavior is to use the boot mode of the
instance that is launched from the AMI.
The default behavior (no boot mode specified) is preserved after this
change.
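For illustration, this is roughly how an optional boot mode can be
passed to `RegisterImage` with aws-sdk-go-v2; the project's own AWS
wrapper, SDK version and device names may differ:

```go
package main // sketch only

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// registerImage registers an AMI from a snapshot. bootMode may be
// empty, in which case the field stays unset and EC2 uses the boot
// mode of the instance launched from the AMI.
func registerImage(ctx context.Context, client *ec2.Client, name, snapshotID string, bootMode types.BootModeValues) (*ec2.RegisterImageOutput, error) {
	input := &ec2.RegisterImageInput{
		Name:               aws.String(name),
		RootDeviceName:     aws.String("/dev/xvda"),
		VirtualizationType: aws.String("hvm"),
		EnaSupport:         aws.Bool(true),
		BlockDeviceMappings: []types.BlockDeviceMapping{{
			DeviceName: aws.String("/dev/xvda"),
			Ebs:        &types.EbsBlockDevice{SnapshotId: aws.String(snapshotID)},
		}},
	}
	if bootMode != "" {
		input.BootMode = bootMode
	}
	return client.RegisterImage(ctx, input)
}
```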
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
In preparation for being able to determine the image boot mode when
importing it into the target environment (e.g. AWS), expose the
information at the `ImageType` level.
The image boot mode is determined from the platform associated with
the image type.
The new method is not yet used by any code, but will eventually be
used by the osbuild-composer server to set the proper value in the
upload target options for the worker. The worker will then be able to
import the image into the cloud environment in the proper way.
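A minimal sketch of how the boot mode can be derived from a platform,
assuming the usual `GetBIOSPlatform()` / `GetUEFIVendor()` accessors;
the constant and function names are illustrative:

```go
package platform // sketch; constant and method names are illustrative

type BootMode int

const (
	BOOT_NONE BootMode = iota
	BOOT_LEGACY
	BOOT_UEFI
	BOOT_HYBRID
)

// bootModePlatform is the subset of the platform interface needed here.
type bootModePlatform interface {
	GetBIOSPlatform() string
	GetUEFIVendor() string
}

// bootModeFromPlatform derives the boot mode of an image type from its
// associated platform.
func bootModeFromPlatform(p bootModePlatform) BootMode {
	bios := p.GetBIOSPlatform() != ""
	uefi := p.GetUEFIVendor() != ""
	switch {
	case bios && uefi:
		return BOOT_HYBRID
	case uefi:
		return BOOT_UEFI
	case bios:
		return BOOT_LEGACY
	default:
		return BOOT_NONE
	}
}
```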
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This has been the intention since the beginning (based on images built
by Google). Clean up the code and mark the platform associated with
GCE image types as UEFI-only.
The only missing part is the default partition table used by the GCE
image, which is shared with other image types and still contains the
BIOS boot partition. I added a TODO comment to preserve this
information, but kept things as they are for now to avoid introducing
a new set of GCE-specific base partition tables.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Do not extend the image base package set with the list of packages
needed for booting the OS returned by `bootPackageSet()`, based on the
specific image type, architecture and its boot type. This duplicated
functionality already handled by the platform associated with the
image: all the necessary packages are provided by the platform's
`GetPackages()` method and added to the base package list.
This reflects changes which were done in Fedora when it was ported to
the "new" image definitions, but were not ported to RHEL.
RHEL-8 GCE image type note:
After a previous change, the image boot type is now determined by the
associated platform and, as a result, the GCE image type is marked as
supporting hybrid boot, although it was meant to be UEFI-only. As a
result, the package list returned by `bootPackageSet()` and previously
appended would contain grub2 BIOS-related packages. This is still the
case after this change, because the platform's `GetPackages()` method
returns the same list of packages in this case. However, the platform
used by the RHEL-8 GCE image type has its `GetPackages()` overridden
by a different implementation that does not contain grub2 BIOS-related
packages. For some reason, this override is not present in RHEL-9.
As a result, the grub2 BIOS-related packages disappeared from the
RHEL-8 GCE image package set, while there was no change in RHEL-9.
Keep the GCE image as is for now and make it UEFI-only in a follow-up.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Remove `legacy` from the `architecture` struct, since this information
is already contained in the platform associated with the image.
This reflects changes which were done in Fedora when it was ported to
the "new" image definitions, but were not ported to RHEL.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Remove the bootType from imageType and architecture structures and
determine the image boot type based on its associated platform.
This reflects changes which were done in Fedora when it was ported to
the "new" image definitions, but were not ported to RHEL.
GCE image type note:
This change has a side effect on the GCE image type. It was meant to
be UEFI-only, but the previous mixture of the bootType set in the
imageType and the platform used for it made it an odd,
almost-but-not-quite hybrid boot combination. For now, the grub2
BIOS-related packages are added to the image content as a result.
Eventually, the platform used for the image should be changed to not
support BIOS, and the image should not have a BIOS boot partition at
all.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The s390x platform definition would previously always return `false`
when calling its `GetZiplSupport()` method. This was obviously not
correct.
The method is meant to serve a similar purpose as `GetBIOSPlatform()`
and `GetUEFIVendor()` on BIOS / UEFI enabled platforms.
Change the S390X platform struct to contain a `Zipl` member instead of
`BIOS`, which is technically more correct. Make sure that the value
set in the `Zipl` struct member is returned by `GetZiplSupport()`.
Ensure that `FirmwarePackages` from `BasePlatform` are added to the list
of packages returned by `GetPackages()`.
Adjust distro definitions using the `S390X` platform.
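Sketch of the adjusted struct, with simplified surrounding types; the
base package names listed are examples only:

```go
package platform // sketch; surrounding types are simplified

type BasePlatform struct {
	FirmwarePackages []string
}

type S390X struct {
	BasePlatform
	Zipl bool
}

// GetZiplSupport now reflects the value set on the platform instead of
// always returning false.
func (p *S390X) GetZiplSupport() bool {
	return p.Zipl
}

// GetPackages includes the firmware packages from BasePlatform; the
// base package list here is illustrative.
func (p *S390X) GetPackages() []string {
	packages := []string{"dracut-config-generic"}
	return append(packages, p.FirmwarePackages...)
}
```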
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
We sometimes see the following error in the logs:
Fault(1000): upload path exists: /mnt/koji/work/osbuild-cg/osbuild-composer-koji-082e1c88/Fedora-IoT-38.raw.xz.
I think this happens when we retry the upload call of the first chunk due to
random network issues. The solution is to always upload in the overwriting
mode, which ignores the already existing file.
See https://pagure.io/koji/blob/175ecb5e8f3d45a1d244b227eb889321e5dd0a29/f/kojihub/kojihub.py#_15522
This is safe because:
1) We use UUIDs in the filename, which means that there should never be a real
conflict.
2) The overwriting mode is actually the default mode in koji, see
https://pagure.io/koji/blob/175ecb5e8f3d45a1d244b227eb889321e5dd0a29/f/koji/__init__.py#_3342
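As a sketch of the idea only (the real upload client and its signature
differ), the chunked upload simply always asks the hub to overwrite:

```go
package koji // sketch; the real upload client and signatures differ

// uploader stands in for the XMLRPC/HTTP upload client.
type uploader interface {
	UploadChunk(data []byte, uploadPath, filename string, offset uint64, overwrite bool) error
}

// uploadChunk always requests overwrite semantics, so a retried first
// chunk does not trip over the "upload path exists" fault.
func uploadChunk(u uploader, data []byte, uploadPath, filename string, offset uint64) error {
	return u.UploadChunk(data, uploadPath, filename, offset, true)
}
```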
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
CGImport quite often fails with the following error:
Fault(1000): File size 735051776 for Fedora-IoT-38.raw.xz (expected 738785372)
doesn't match. Corrupted upload?
When I inspect the file manually, everything seems fine, though.
I believe that this is because of NFS inconsistency when multiple
DNS-balanced kojihubs are used in the setup (which is what Fedora
uses). The added loop implements a retrying mechanism for the CGImport
call to try again whenever we see this issue.
Note that this isn't caught by the other HTTP retrying mechanisms,
because a failed XMLRPC call returns code 200.
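A minimal sketch of such a retry loop; the error-matching heuristic,
the number of attempts and the delay are illustrative choices, not the
actual implementation:

```go
package koji // sketch only

import (
	"fmt"
	"strings"
	"time"
)

// retryCGImport retries the given CGImport call when the hub reports
// the size-mismatch fault; unrelated errors are returned immediately.
func retryCGImport(call func() error) error {
	var err error
	for attempt := 0; attempt < 10; attempt++ {
		if err = call(); err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "Corrupted upload?") {
			return err // unrelated failure, don't retry
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("CGImport did not succeed after retries: %w", err)
}
```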
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Remove firewalld from the base package set for Azure and add it to all
the image-specific package sets except EAP (and explicitly exclude it).
Remove firewalld from the base image config for Azure and add it to all
the image-specific configs.
Test manifests updated.
Manifest changes for non-EAP image types are only the enabled-services
reordering: firewalld is last because it is appended to the base config.
See COMPOSER-1859
Sorted lists of strings make it easier to add and remove elements
without needing to think about the order, and they make diffs easier
to read.
The sorting was done using the 'sort' coreutils command with LC_ALL=C.
Validate custom repository filenames in order to avoid unexpected
`5xx` errors when building an image.
Before this change, the filename was only validated at the yum repo
stage, which caused unexpected errors.
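For illustration, the validation can be as simple as a regular
expression checked when the request is handled; the exact rule that
composer enforces may differ from this sketch:

```go
package blueprint // sketch only

import (
	"fmt"
	"regexp"
)

// repoFilenameRegex is an assumed rule: a simple file name with no
// path separators, ending in ".repo".
var repoFilenameRegex = regexp.MustCompile(`^[\w.-]{1,250}\.repo$`)

func validateCustomRepoFilename(filename string) error {
	if !repoFilenameRegex.MatchString(filename) {
		return fmt.Errorf("invalid custom repository filename %q", filename)
	}
	return nil
}
```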
Replace the dnf-json `Hash()` function with a hash calculated using
the `rpmmd.RepoConfig.Hash()` function. The `repoHash` field is
populated when converting a `rpmmd.RepoConfig` to a
`dnfjson.repoConfig` object. The `dnfjson.repoConfig.Hash()` function
then returns the `repoHash` field instead of re-calculating the hash.
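Sketch of the caching idea, using a stand-in interface instead of the
real `rpmmd.RepoConfig` type; names are simplified:

```go
package dnfjson // sketch only

// hasher stands in for rpmmd.RepoConfig, which provides Hash().
type hasher interface {
	Hash() string
}

// repoConfig caches the hash computed from the rpmmd repo config at
// conversion time, so Hash() does not recompute it.
type repoConfig struct {
	repoHash string
}

func newRepoConfig(rc hasher) repoConfig {
	return repoConfig{repoHash: rc.Hash()}
}

func (r repoConfig) Hash() string {
	return r.repoHash
}
```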
Create an osbuild yum repository from
`rpmmd.RepoConfig`. Additionally, remove
pointers from the `YumRepository` struct,
since this will add values for fields that
weren't explicitly set by the user in the
repo customizations.
Create some utility functions that will be used for implementing
custom repo configuration files. This commit adds these functions
(a rough sketch follows the list):
- a helper to get the filename of a custom repo, or `<repo-id>.repo`
  if the filename is empty
- a function to convert the custom repos to a map of `RepoConfig`.
  This function also creates an `fsnode.File` for each inline GPG
  key set in the customizations and swaps the inline key for the
  file path. The function returns the map of `RepoConfig` and a
  list of `fsnode.File` objects containing the inline GPG keys.
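The sketch below uses simplified stand-in types for `RepoConfig` and
`fsnode.File`; the signatures, the inline-key detection heuristic and
the key file location are assumptions, not the actual implementation:

```go
package osbuild // sketch with simplified stand-in types

import (
	"fmt"
	"strings"
)

type RepoConfig struct {
	Id       string
	Filename string
	GPGKeys  []string
}

type file struct {
	Path string
	Data []byte
}

// repoFilename returns the configured filename, falling back to
// "<repo-id>.repo" when it is empty.
func repoFilename(repo RepoConfig) string {
	if repo.Filename != "" {
		return repo.Filename
	}
	return fmt.Sprintf("%s.repo", repo.Id)
}

// customRepoMap groups the repos by their target filename and swaps
// inline GPG keys for file paths, returning the created key files
// alongside the map.
func customRepoMap(repos []RepoConfig) (map[string][]RepoConfig, []file) {
	repoMap := make(map[string][]RepoConfig, len(repos))
	var keyFiles []file
	for _, repo := range repos {
		for i, key := range repo.GPGKeys {
			if !strings.HasPrefix(key, "-----BEGIN PGP PUBLIC KEY BLOCK-----") {
				continue // a URL or file path, keep as-is
			}
			// assumed location for the extracted key file
			path := fmt.Sprintf("/etc/pki/rpm-gpg/RPM-GPG-KEY-%s-%d", repo.Id, i)
			keyFiles = append(keyFiles, file{Path: path, Data: []byte(key)})
			repo.GPGKeys[i] = path
		}
		filename := repoFilename(repo)
		repoMap[filename] = append(repoMap[filename], repo)
	}
	return repoMap, keyFiles
}
```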
Convert some of the fields in the `RepoConfig` struct
to pointers. Since `RepoConfig` will be used to convert
custom repositories to an array of `osbuild.YumRepository`,
we need to ensure that fields that are not set explicitly
are not saved to the `/etc/yum.repos.d` repository files.
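To illustrate why pointers matter here (the field names below are just
examples, not the full struct): with pointer fields and `omitempty`,
options the user never set are left out of the generated repo file
entirely, while explicit `false` values are still preserved.

```go
package rpmmd // sketch; only a few example fields are shown

type RepoConfig struct {
	Id           string   `json:"id,omitempty"`
	GPGKeys      []string `json:"gpgkeys,omitempty"`
	CheckGPG     *bool    `json:"check_gpg,omitempty"`
	CheckRepoGPG *bool    `json:"check_repo_gpg,omitempty"`
}
```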
Update the internal RepoConfig object to accept a slice of baseurls
rather than a single baseurl field. This change was needed to align
RepoConfig with the dnf spec [1].
Additionally, this change adds custom JSON marshal and unmarshal
functions to ensure backwards compatibility with older workers.
Add JSON tags to the internal rpmmd config, since it is serialized in
dnfjson.
Add unit tests to check that the serialization is okay.
[1] See dnf.config
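One plausible shape of the backwards-compatible (un)marshalling is
sketched below; the legacy singular `baseurl` field and the fallback
rules are assumptions and may not match the actual implementation:

```go
package rpmmd // sketch only

import "encoding/json"

type RepoConfig struct {
	Id       string   `json:"id,omitempty"`
	BaseURLs []string `json:"baseurls,omitempty"`
}

// repoConfigCompat carries both the new plural field and the legacy
// singular one so that older workers can still parse the payload.
type repoConfigCompat struct {
	Id       string   `json:"id,omitempty"`
	BaseURLs []string `json:"baseurls,omitempty"`
	BaseURL  string   `json:"baseurl,omitempty"`
}

func (r RepoConfig) MarshalJSON() ([]byte, error) {
	compat := repoConfigCompat{Id: r.Id, BaseURLs: r.BaseURLs}
	if len(r.BaseURLs) > 0 {
		compat.BaseURL = r.BaseURLs[0]
	}
	return json.Marshal(compat)
}

func (r *RepoConfig) UnmarshalJSON(data []byte) error {
	var compat repoConfigCompat
	if err := json.Unmarshal(data, &compat); err != nil {
		return err
	}
	r.Id = compat.Id
	r.BaseURLs = compat.BaseURLs
	if len(r.BaseURLs) == 0 && compat.BaseURL != "" {
		r.BaseURLs = []string{compat.BaseURL}
	}
	return nil
}
```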
Set the LocalName for the spec using a separate argument in the
`NewSpec()` constructor instead of reusing the `source` arg.
The name is already available in the calling scope in the client's
`Resolve()` method.
If the LocalName is an empty string, default to the remote (source)
reference. This is a change from the previous behaviour, which used
only the base `source.Name()`. The full source corresponds to the
user-provided source value, which includes any specified tag or digest.
The `name` argument which is used in the `Resolve()` function should
always correspond to the user-provided container name.
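Sketch of the described fallback, using simplified stand-in types in
place of the real container spec and constructor:

```go
package container // sketch with simplified types

type Spec struct {
	Source    string // user-provided reference, including tag or digest
	LocalName string // name the container is stored under in the image
}

// newSpec takes the local name as its own argument instead of reusing
// the source; an empty local name falls back to the full source
// reference.
func newSpec(source, localName string) Spec {
	if localName == "" {
		localName = source
	}
	return Spec{Source: source, LocalName: localName}
}
```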
This reverts commit 2b1facb44d.
The GPG key is now present in the RHUI client RPM, so there is no
longer a reason to avoid importing it during the image build.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
We had this weird condition in the code that prevented composer from
creating groups with the same name as a user. This unfortunately means
that you are not able to create a user with a primary group with a
certain GID that has the same name as the user. There's the gid field
in the user customization, but it requires that the group already
exists.
In order to allow that, we need to remove the condition. From now on,
it's possible to create groups with the same name as a user, which can
be used to create primary groups with a custom gid.
Note that the lorax compatibility behaviour was actually wrong. When
lorax was given a custom gid for a user, it didn't require the gid to
exist. When it didn't exist, the group was just created. Thus, we
still don't have full backward compatibility, but at least we now have
support for this.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This platform copies more files into `/boot`, which are necessary to
be able to boot IoT on some single-board computers.
We also immediately set this on the `Aarch64_IoT` platform, which
needs u-boot to be placed in `/boot`.
This closes #3312.
Add a helper function that collects all the manifest list digests from
a list of container specs and returns a `FilesInput` to be used with
the stage.
Use the function in the OS pipeline when adding containers. The
manifests input to the stage constructor will be empty if there are no
manifest lists in the container specs.
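A rough sketch of such a helper, using stand-in types for the
container spec and the files input; the real types and field names
differ:

```go
package manifest // sketch; stand-in types

type containerSpec struct {
	Digest     string
	ListDigest string // empty when not resolved via a manifest list
}

type filesInput struct {
	Digests []string
}

// manifestListInput collects the manifest list digests from the specs
// and returns nil when there are none, so the stage gets no
// manifest-lists input.
func manifestListInput(specs []containerSpec) *filesInput {
	var digests []string
	for _, spec := range specs {
		if spec.ListDigest != "" {
			digests = append(digests, spec.ListDigest)
		}
	}
	if len(digests) == 0 {
		return nil
	}
	return &filesInput{Digests: digests}
}
```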
While resolving a manifest list digest, store the list digest to return
with the resolvedIds.
This is done for both types of manifest list:
application/vnd.docker.distribution.manifest.list.v2+json
and
application/vnd.oci.image.index.v1+json
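Illustrated as a simple media-type check (literal strings are used
here instead of the library constants; the real code may match on the
library's own types):

```go
package container // sketch only

// isManifestList reports whether the resolved media type is one of the
// two manifest list flavours mentioned above.
func isManifestList(mediaType string) bool {
	switch mediaType {
	case "application/vnd.docker.distribution.manifest.list.v2+json",
		"application/vnd.oci.image.index.v1+json":
		return true
	}
	return false
}
```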
Add the ListDigest to the container Spec struct and all its copies so we
can store list digests when they are available and pass them on to the
appropriate osbuild stages, sources, and inputs.
Copy the value whenever a spec is moved to a different representation.
The skopeo stage in osbuild supports a second, optional set of inputs
called `manifest-lists`. This is an array of files, i.e. an
`org.osbuild.files` type input.
To support this, we need a new type for the skopeo stage inputs that
can encompass both input types: images and manifest-lists.
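A plausible shape for the combined inputs type, with placeholder input
types; the exact names and JSON tags in the codebase may differ:

```go
package osbuild // sketch with placeholder types

type ContainersInput struct {
	Type       string            `json:"type"`
	Origin     string            `json:"origin"`
	References map[string]string `json:"references"`
}

type FilesInput struct {
	Type       string   `json:"type"`
	Origin     string   `json:"origin"`
	References []string `json:"references"`
}

// SkopeoStageInputs holds both the container images input and the
// optional manifest-lists files input.
type SkopeoStageInputs struct {
	Images        ContainersInput `json:"images"`
	ManifestLists *FilesInput     `json:"manifest-lists,omitempty"`
}
```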