Moved the function that searches for the boot partition index to the
PartitionTable struct as a method. The method returns -1 if no boot
partition is found, and it is now the caller's responsibility to handle
that case.
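A minimal sketch of the idea, with simplified types and a hypothetical
method name rather than the actual osbuild-composer definitions:

    package main

    import "fmt"

    type Partition struct {
    	Bootable bool
    }

    type PartitionTable struct {
    	Partitions []Partition
    }

    // BootPartitionIndex returns the index of the first bootable partition,
    // or -1 if the table contains none; callers must handle the -1 case.
    func (pt *PartitionTable) BootPartitionIndex() int {
    	for idx, p := range pt.Partitions {
    		if p.Bootable {
    			return idx
    		}
    	}
    	return -1
    }

    func main() {
    	pt := PartitionTable{Partitions: []Partition{{Bootable: false}, {Bootable: true}}}
    	if idx := pt.BootPartitionIndex(); idx == -1 {
    		fmt.Println("no boot partition")
    	} else {
    		fmt.Println("boot partition at index", idx)
    	}
    }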
To prevent packages specified in a blueprint from conflicting with
exclude lists, we depsolve blueprint packages separately and pass them
into the Manifest generator under the new "blueprint" package set key.
This approach has the added benefit that not only the explicitly named
blueprint packages but also their dependencies are exempt from the
exclude lists.
The OS pipeline, which installs the packages for the base system,
merges the two package sets before running the RPM stage. The signature
of the function is changed to explicitly require blueprint packages to
be specified (though `nil` or an empty slice is valid).
The kernel selection test is adapted to merge the package sets before
counting kernel packages.
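A rough sketch of the merge step, using hypothetical names rather than
the real package-set and pipeline code:

    package main

    import "fmt"

    // mergePackageSpecs combines the depsolved base and blueprint package sets
    // before they are handed to the RPM stage; blueprint may be nil or empty.
    func mergePackageSpecs(base, blueprint []string) []string {
    	merged := make([]string, 0, len(base)+len(blueprint))
    	merged = append(merged, base...)
    	merged = append(merged, blueprint...)
    	return merged
    }

    func main() {
    	base := []string{"glibc", "systemd", "kernel"}
    	var blueprint []string // nil is valid: no blueprint packages
    	fmt.Println(mergePackageSpecs(base, blueprint))
    	fmt.Println(mergePackageSpecs(base, []string{"vim-enhanced"}))
    }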
Adaptation of changes in
https://github.com/osbuild/osbuild-composer/pull/1349
- Cleaned up distro-specific edge package sets.
- Added edge package set merging in the PackageSets() function (see the
sketch after this list).
- Edge image type definitions are no longer arch-specific, just like the
other image types.
- Moved all package set definitions to the package_sets.go file.
- Image package sets are no longer appended to arch and distro sets;
this happens automatically in the PackageSets() method.
- Fixed architecture-specific package sets (boot and build, where
available).
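A schematic sketch of the merging in PackageSets(), with hypothetical
simplified types; the real implementation differs:

    package main

    import "fmt"

    type PackageSet struct {
    	Include []string
    	Exclude []string
    }

    // Append merges another package set into this one and returns the result.
    func (ps PackageSet) Append(other PackageSet) PackageSet {
    	ps.Include = append(ps.Include, other.Include...)
    	ps.Exclude = append(ps.Exclude, other.Exclude...)
    	return ps
    }

    type imageType struct {
    	packageSets map[string]PackageSet // per-image sets, e.g. "packages", "build-packages"
    }

    // PackageSets merges the distro- and arch-level sets into the image type's
    // own sets, so the individual definitions no longer need to do it.
    func (t imageType) PackageSets(distroSets, archSets map[string]PackageSet) map[string]PackageSet {
    	merged := map[string]PackageSet{}
    	for name, set := range t.packageSets {
    		merged[name] = set.Append(distroSets[name]).Append(archSets[name])
    	}
    	return merged
    }

    func main() {
    	it := imageType{packageSets: map[string]PackageSet{
    		"packages": {Include: []string{"kernel"}},
    	}}
    	distro := map[string]PackageSet{"packages": {Include: []string{"glibc"}}}
    	arch := map[string]PackageSet{"packages": {Include: []string{"grub2-efi-x64"}}}
    	fmt.Println(it.PackageSets(distro, arch))
    }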
Live image pipeline: Creates the live image in a file through a loopback
device.
Stages:
- truncate: create the file to hold the image
- sfdisk: partition the device
- mkfs.fat, mkfs.xfs: create the filesystems
- copy: copy the tree from the previous pipeline (the OS pipeline) into
the directories where the partitions are mounted
- grub2.inst: install the bootloader
QEMU pipeline: Convert the live image from the previous pipeline to a
qcow2 image.
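Schematically, the two pipelines chain together roughly like this
(hypothetical helper types, stage options omitted):

    package main

    import "fmt"

    type Stage struct{ Type string }

    type Pipeline struct {
    	Name   string
    	Stages []Stage
    }

    func main() {
    	image := Pipeline{
    		Name: "image",
    		Stages: []Stage{
    			{Type: "org.osbuild.truncate"},   // create the file that holds the image
    			{Type: "org.osbuild.sfdisk"},     // partition it via a loopback device
    			{Type: "org.osbuild.mkfs.fat"},   // ESP filesystem
    			{Type: "org.osbuild.mkfs.xfs"},   // root filesystem
    			{Type: "org.osbuild.copy"},       // copy the OS tree onto the mounted partitions
    			{Type: "org.osbuild.grub2.inst"}, // install the bootloader
    		},
    	}
    	qcow2 := Pipeline{
    		Name:   "qcow2",
    		Stages: []Stage{{Type: "org.osbuild.qemu"}}, // convert the "image" pipeline's output to qcow2
    	}
    	for _, p := range []Pipeline{image, qcow2} {
    		fmt.Println(p.Name, p.Stages)
    	}
    }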
coreStages() was meant to produce the stages that are common to all
image types. Since there are too many exceptions and differences
between traditional and edge image types, keep the two pipelines
separate instead.
Moved SELinux stages to the end of each pipeline.
Renamed the argument to have a clearer (and correct) meaning and added
a function docstring describing its purpose.
The argument is set to 'true' in the build pipeline.
Previously a bad error code was returned; fixes #1477.
Testing:
I have two test cases to test the solution. The first is a request
that makes depsolve crash by replacing the dnf-json script with an
almost empty one that only throws an exception. The second one fails
because it requests a non-existent package. The former ends with a 500
error and the latter with a 400.
----8<-----
HTTP/1.1 500 Internal Server Error
Failed to depsolve base packages for ami/x86_64/centos-8: failed to
depsolve base packages for ami/x86_64/centos-8: unexpected end of JSON
input
----8<-----
HTTP/1.1 400 Bad Request
Content-Length: 226

Failed to depsolve base packages for ami/x86_64/centos-8: DNF error
occured: MarkingErrors: Error occurred when marking packages for
installation: Problems in request:
missing packages: jesuisunpaquetquinexistepas_idonotexist
Some osbuild stages added as part of PR#1525 declare unexported types
which are complete copies of types with a custom MarshalJSON method, in
order to prevent recursion when marshalling to JSON.
Modify the relevant osbuild stages to use a type alias instead of
declaring a complete type copy.
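A minimal, self-contained illustration of the pattern; the options type
here is made up, not one of the actual stages touched by the PR:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type ExampleStageOptions struct {
    	GPGKeys []string `json:"gpgkeys,omitempty"`
    }

    // MarshalJSON needs the default encoding of the same fields. Declaring a new
    // type based on the options (the "alias" in the PR's wording; a defined type,
    // not a true `type A = B` alias) drops the MarshalJSON method, so encoding it
    // does not recurse back into this function.
    func (o ExampleStageOptions) MarshalJSON() ([]byte, error) {
    	type exampleStageOptionsAlias ExampleStageOptions
    	return json.Marshal(exampleStageOptionsAlias(o))
    }

    func main() {
    	b, err := json.Marshal(ExampleStageOptions{GPGKeys: []string{"key-id"}})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(b))
    }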
https://github.com/osbuild/osbuild-composer/pull/1525
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `api.sh` test currently always defaults to the "<REGION>-a" zone
when creating an instance from the built image. The resources in a zone
may become exhausted and the solution is to use a different zone.
Currently, even a CI job retry won't mitigate such an error during a CI
run.
Modify `api.sh` to pick a random GCP zone for the given region when
creating a compute instance. Use only GCP zones which are "UP".
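A sketch of the selection logic, using hardcoded example data instead
of querying GCP for the zone list:

    package main

    import (
    	"fmt"
    	"math/rand"
    )

    type zone struct {
    	Name   string
    	Status string
    }

    // pickRandomUpZone returns a random zone from the region that is "UP",
    // or an empty string if none is available.
    func pickRandomUpZone(zones []zone) string {
    	up := []string{}
    	for _, z := range zones {
    		if z.Status == "UP" {
    			up = append(up, z.Name)
    		}
    	}
    	if len(up) == 0 {
    		return ""
    	}
    	return up[rand.Intn(len(up))]
    }

    func main() {
    	zones := []zone{
    		{"us-east1-b", "UP"},
    		{"us-east1-c", "DOWN"},
    		{"us-east1-d", "UP"},
    	}
    	fmt.Println(pickRandomUpZone(zones))
    }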
The `cloud-cleaner` relied on the behavior of `api.sh` to always choose
the "<REGION>-a" zone. Guessing the chosen zone in `cloud-cleaner` is
not viable, but thankfully the instance name is by default unique for
the whole GCP project. Modify `cloud-cleaner` to iterate over all
available zones in the used region and try to delete the specific
instance in each of them.
Export the `ComputeZonesInRegion` method from the `internal/cloud/gcp`
package and use it in `cloud-cleaner` to get the list of available
zones in a region.
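A sketch of the cleanup loop, with a hypothetical client interface
standing in for the real internal/cloud/gcp wrapper:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // gcpClient is a stand-in for the real GCP wrapper; only the two calls used
    // by the cleanup loop are modelled here.
    type gcpClient interface {
    	ComputeZonesInRegion(region string) ([]string, error)
    	DeleteInstance(zone, name string) error
    }

    // deleteInstanceInRegion tries every zone of the region, since the zone the
    // instance was created in is unknown but its name is unique in the project.
    func deleteInstanceInRegion(c gcpClient, region, name string) error {
    	zones, err := c.ComputeZonesInRegion(region)
    	if err != nil {
    		return err
    	}
    	for _, zone := range zones {
    		if err := c.DeleteInstance(zone, name); err == nil {
    			return nil
    		}
    	}
    	return errors.New("instance not found in any zone of " + region)
    }

    // fakeClient lets the sketch run without real GCP credentials.
    type fakeClient struct{}

    func (fakeClient) ComputeZonesInRegion(region string) ([]string, error) {
    	return []string{region + "-a", region + "-b", region + "-c"}, nil
    }

    func (fakeClient) DeleteInstance(zone, name string) error {
    	if zone == "us-east1-b" {
    		return nil // pretend the instance lived here
    	}
    	return errors.New("not found")
    }

    func main() {
    	fmt.Println(deleteInstanceInRegion(fakeClient{}, "us-east1", "ci-instance-1234"))
    }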
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Some distributions do not have repositories and therefore cannot be
built. This filters the list of supported distributions by checking for
repos when starting up. All other requests use the api.distros list or
api.getDistro() function.
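A sketch of the startup filtering, with hypothetical names standing in
for the real registry and repository lookup:

    package main

    import "fmt"

    // filterSupportedDistros keeps only the distros for which repositories are
    // configured, so requests for the others can be rejected up front.
    func filterSupportedDistros(distros []string, repos map[string][]string) []string {
    	supported := []string{}
    	for _, d := range distros {
    		if len(repos[d]) > 0 {
    			supported = append(supported, d)
    		}
    	}
    	return supported
    }

    func main() {
    	distros := []string{"fedora-34", "rhel-85", "centos-9"}
    	repos := map[string][]string{
    		"fedora-34": {"fedora", "updates"},
    		"rhel-85":   {"baseos", "appstream"},
    		// centos-9 has no repositories configured, so it is filtered out
    	}
    	fmt.Println(filterSupportedDistros(distros, repos))
    }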
The name of the distro returned by distros.FromHost() may not match any
of the names in the registry's list. Use the actual name of the distro
instead of the mangled name.
Also remove api.distro, which is unused.