User home directories don't survive the rpm-ostree stage. They are
converted to systemd-tmpfiles entries during rpm-ostree post-processing,
but their contents are not preserved, so any keys we add to the
authorized_keys file will be gone.
This stage sets up a first-boot service that writes the user's public
key to the file in the home directory during the first system boot.
In the Anaconda pipeline, the kickstart stage should fetch the commit
we're embedding. It was mistakenly trying to fetch from the URL used to
build the image instead.
New image type that generates a Boot ISO. The ISO contains a RHEL Edge
commit and an installer. On boot, it sets up a new RHEL Edge system
with the commit.
The RHEL Edge commit (ostree commit) is downloaded during build from a
URL that should be supplied with the compose request. The commit's hash
and URL need to be added to the Sources list in the Manifest.
Unlike other types, the new image type defines its own "build" package
set that is added to the distro and arch build package lists.
We need to add the URL to the manifest as an ostree source repo so that
osbuild can pull the commit to embed it in the boot ISO for the new
rhel-edge-installer image type.
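For illustration, the resulting source entry could be assembled roughly
like the sketch below, using hypothetical Go types (the real
osbuild-composer source structs differ):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative types only; the real source structs differ.
    type ostreeRemote struct {
        URL string `json:"url"`
    }

    type ostreeItem struct {
        Remote ostreeRemote `json:"remote"`
    }

    type ostreeSource struct {
        Items map[string]ostreeItem `json:"items"`
    }

    func main() {
        // Commit checksum and repo URL come from the compose request.
        commit := "sha256-checksum-of-the-commit" // hypothetical value
        url := "https://example.com/repo"

        sources := map[string]interface{}{
            "org.osbuild.ostree": ostreeSource{
                Items: map[string]ostreeItem{
                    commit: {Remote: ostreeRemote{URL: url}},
                },
            },
        }

        b, _ := json.MarshalIndent(sources, "", "  ")
        fmt.Println(string(b))
    }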
- org.osbuild.anaconda
Configures Anaconda. For now, only enabling kickstart modules is
supported.
- org.osbuild.buildstamp
Creates a buildstamp file, which is required by Anaconda.
- org.osbuild.kickstart
Creates a kickstart file.
- org.osbuild.lorax-script
Uses lorax template helpers to execute a template.
- org.osbuild.bootiso
Prepares a bootable file system tree suitable for writing onto an ISO
file system.
- org.osbuild.discinfo
Creates a .discinfo file, used by the Anaconda installer.
- org.osbuild.xorrisofs
Uses the `xorrisofs` command line utility to create an ISO.
- org.osbuild.implantisomd5
Uses the `implantisomd5` command to implant MD5 checksums into an
ISO.
The org.osbuild.oci-archive stage now supports an arbitrary number of
layers on top of the Base layer. The keys for these layers follow the
pattern "layer.N" (N = 1, 2, 3, ...).
We use a custom marshaller and unmarshaller for the
OCIArchiveStageInputs to handle this. The unmarshaller also validates
that the layer keys match the pattern in the schema.
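A minimal sketch of such a validating unmarshaller, using a reduced
stand-in type and assuming the base input is keyed "base":

    package main

    import (
        "encoding/json"
        "fmt"
        "regexp"
    )

    // Reduced stand-in: map input keys to raw JSON; the real type
    // holds full input objects.
    type archiveInputs map[string]json.RawMessage

    var layerKey = regexp.MustCompile(`^(base|layer\.[1-9][0-9]*)$`)

    func (a *archiveInputs) UnmarshalJSON(data []byte) error {
        var raw map[string]json.RawMessage
        if err := json.Unmarshal(data, &raw); err != nil {
            return err
        }
        // Validate every key against the schema pattern.
        for key := range raw {
            if !layerKey.MatchString(key) {
                return fmt.Errorf("invalid input key: %q", key)
            }
        }
        *a = raw
        return nil
    }

    func main() {
        var inputs archiveInputs
        err := json.Unmarshal([]byte(`{"base":{},"layer.1":{},"layer.2":{}}`), &inputs)
        fmt.Println(err, len(inputs)) // <nil> 3
    }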
Recently added stages org.osbuild.sysconfig and
org.osbuild.kernel-cmdline were missing from the Manifest unmarshal
method, causing it to fail when trying to unmarshal manifests that
contained them.
When shelling out in a CLI test, the error returned from the Start()
call only reports the exit code, which is not very informative.
Capturing and printing stderr is a lot more useful.
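For example (a sketch, not the actual test helper):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        var stderr bytes.Buffer
        cmd := exec.Command("osbuild", "--help") // hypothetical invocation
        // Capture stderr so a failure reports more than just the exit code.
        cmd.Stderr = &stderr

        if err := cmd.Run(); err != nil {
            fmt.Printf("command failed: %v\nstderr:\n%s\n", err, stderr.String())
        }
    }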
osbuild requires the export flag; otherwise it won't produce an artifact.
For the older manifest format (v1), the export value is always
"assembler". For v2 manifests, it is the name of the last pipeline.
If an unknown version number is read, the script now fails. This should
help catch manifest changes that may affect test case generation in the
future.
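An illustrative sketch of the version-dependent selection (not the
script's actual code):

    package main

    import "fmt"

    // exportName picks the osbuild export for a manifest; names are
    // illustrative.
    func exportName(version string, pipelines []string) (string, error) {
        switch version {
        case "1":
            // v1 manifests always export the assembler.
            return "assembler", nil
        case "2":
            // v2 manifests export the last pipeline.
            if len(pipelines) == 0 {
                return "", fmt.Errorf("v2 manifest has no pipelines")
            }
            return pipelines[len(pipelines)-1], nil
        default:
            // Fail loudly so future manifest changes are caught early.
            return "", fmt.Errorf("unknown manifest version: %q", version)
        }
    }

    func main() {
        name, err := exportName("2", []string{"build", "ostree-commit"})
        fmt.Println(name, err) // "ostree-commit <nil>"
    }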
Log output from osbuild has a very different format when using the
new schema. The osbuild1.Result object now supports unmarshalling the
new format and adapting it to the old format.
The most important field to set is the Success field to signal whether
the build succeeded.
Secondarily, it also copies over the output from each stage in order to
provide build job log output through the weldr API.
Since the new format contains multiple pipelines with multiple stages
each, the stages are flattened to fit the old format. A unique name for
each stage is created by combining the pipeline name with the stage's
index in the pipeline and its type.
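A sketch of the flattening with simplified stand-in types (the real
result types differ, and the exact name format here is illustrative):

    package main

    import "fmt"

    // Simplified stand-ins for the real v2 result types.
    type stageV2 struct {
        Type   string
        Output string
    }

    type pipelineV2 struct {
        Name   string
        Stages []stageV2
    }

    // Simplified stand-in for the old v1 stage result.
    type stageV1 struct {
        Name   string
        Output string
    }

    func flatten(pipelines []pipelineV2) []stageV1 {
        var flat []stageV1
        for _, p := range pipelines {
            for idx, s := range p.Stages {
                flat = append(flat, stageV1{
                    // Unique name, e.g. "build:0-org.osbuild.rpm".
                    Name:   fmt.Sprintf("%s:%d-%s", p.Name, idx, s.Type),
                    Output: s.Output,
                })
            }
        }
        return flat
    }

    func main() {
        flat := flatten([]pipelineV2{
            {Name: "build", Stages: []stageV2{{Type: "org.osbuild.rpm"}}},
            {Name: "os", Stages: []stageV2{{Type: "org.osbuild.rpm"}}},
        })
        for _, s := range flat {
            fmt.Println(s.Name)
        }
    }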
osbuild now supports the `--export` flag (which can be passed multiple
times) to request the export of one or more artefacts. Omitting it
causes the build job to export nothing.
The Koji API doesn't support the new image types (yet) so it simply uses
the "assembler" name, which is the final stage of the old (v1)
Manifests.
imageTypeS2 implements the distro.ImageType interface but generates a
Manifest matching the new osbuild v2 schema.
Two new image types are added to the rhel84 distro (x86_64 and aarch64)
for generating OCI containers containing an Edge (ostree) commit; when
run, they start a web server to serve the commit.
The image type uses the new PackageSets map to define packages (and
excludes) for the image. The old methods (Packages() and
BuildPackages()) are implemented for compatibility with the old
workflow.
The image also defines an extra package set for the container that will
serve the commit: "httpd" (and its dependencies).
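A rough sketch of what the package-set map can look like (the set names
and struct shape here are assumptions, not the actual rpmmd types):

    package example

    // Stand-in for the real rpmmd.PackageSet struct.
    type packageSet struct {
        Include []string
        Exclude []string
    }

    var packageSets = map[string]packageSet{
        // Payload of the Edge commit itself.
        "packages": {
            Include: []string{"redhat-release", "glibc"},
            Exclude: []string{"rng-tools"},
        },
        // Packages needed in the build root.
        "build": {Include: []string{"rpm-ostree"}},
        // Extra set for the container that serves the commit.
        "container": {Include: []string{"httpd"}},
    }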
The distro.ImageType interface has a new method: Exports()
It should return a list of names or IDs of artefacts that should be
exported from osbuild when the job is complete.
For the old image types, this is simply set to "assembler".
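Sketched below with an assumed signature based on the description above:

    package example

    type ImageType interface {
        // ...existing methods...

        // Exports returns the names or IDs of the artefacts that
        // osbuild should export when the job is complete.
        Exports() []string
    }

    type v1ImageType struct{}

    // Old image types always export the final v1 stage.
    func (t *v1ImageType) Exports() []string {
        return []string{"assembler"}
    }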
Adding new types and adapting copies of all the old types to match the
new Manifest schema:
New types:
- Stages
- org.osbuild.ostree.init
- org.osbuild.ostree.pull
- org.osbuild.ostree.preptree (replaces org.osbuild.rpm-ostree)
- org.osbuild.curl
- Converted from assemblers
The concept of a Build and Assembler stage is gone now. Instead they
are regular Stages like any other.
- org.osbuild.oci-archive
- org.osbuild.ostree.commit
- Sources
- org.osbuild.curl
- org.osbuild.ostree
- Inputs
- org.osbuild.files
- org.osbuild.ostree
Types with changes:
- Stages
- org.osbuild.rpm:
- New input structure for defining packages
- New options
Basically copies:
- The rest simply rename the `Name` field to `Type`
Decoding types with interface fields:
Types that contain interfaces with multiple implementations implement
their own UnmarshalJSON method. In these cases, we use a JSON decoder
with the `DisallowUnknownFields` option to catch errors during the
deserialization while trying to determine which implementation matches
the data.
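A minimal sketch of the probing pattern (the input types here are
hypothetical stand-ins):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    // Hypothetical pair of implementations behind one interface field.
    type filesInput struct {
        References map[string]interface{} `json:"references"`
    }

    type ostreeInput struct {
        Commits []string `json:"commits"`
    }

    // tryUnmarshal strictly decodes data into v, failing on unknown
    // fields, which lets us probe which concrete type the data matches.
    func tryUnmarshal(data []byte, v interface{}) error {
        dec := json.NewDecoder(bytes.NewReader(data))
        dec.DisallowUnknownFields()
        return dec.Decode(v)
    }

    func main() {
        data := []byte(`{"commits":["abc"]}`)
        var f filesInput
        if err := tryUnmarshal(data, &f); err == nil {
            fmt.Println("files input")
            return
        }
        var o ostreeInput
        if err := tryUnmarshal(data, &o); err == nil {
            fmt.Println("ostree input")
        }
    }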
Copied tests for copied types are adapted accordingly.
By default, the checkout action checks out the merge commit. This is different
from what Schutzbot currently does - it runs the test on the PR HEAD commit.
Let's change the GitHub workflows' behaviour to match the one Schutzbot
uses.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Fix a bug in the worker job implementation and the GCP CLI upload tool
which caused the code to report the wrong error instance when the image
import failed for some reason.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
By default, a when block is evaluated after an agent is started. I discovered
this randomly: I opened a pipeline and saw that it was stuck on the "Prepare EL8
internal 🤔" stage even though the pipeline shouldn't have even run it.
This commit fixes it by adding "beforeAgent true" to all when blocks. This
changes the behaviour to the more sane "if when is true, allocate an agent".
See https://www.jenkins.io/doc/book/pipeline/syntax/#evaluating-when-before-entering-agent-in-a-stage
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add StorageListObjectsByMetadata() to the internal GCP library. The
method allows one to search a specific Storage bucket for objects based
on the provided metadata.
Extend cloud-cleaner to search for any Storage objects related to the
image import, using custom metadata set on the object. Delete all found
objects.
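A sketch of how such a lookup can work with the cloud.google.com/go/storage
client; the helper name, signature, and metadata key below are
assumptions, not the actual internal API:

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/storage"
        "google.golang.org/api/iterator"
    )

    // listObjectsByMetadata returns names of objects whose metadata
    // contains all the wanted key/value pairs.
    func listObjectsByMetadata(ctx context.Context, client *storage.Client, bucket string, want map[string]string) ([]string, error) {
        var names []string
        it := client.Bucket(bucket).Objects(ctx, nil)
        for {
            attrs, err := it.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                return nil, err
            }
            match := true
            for k, v := range want {
                if attrs.Metadata[k] != v {
                    match = false
                    break
                }
            }
            if match {
                names = append(names, attrs.Name)
            }
        }
        return names, nil
    }

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        names, err := listObjectsByMetadata(ctx, client, "my-bucket",
            map[string]string{"image-name": "my-image"})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(names)
    }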
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend StorageObjectUpload() to allow setting custom metadata on the
uploaded object.
Modify the worker's osbuild job implementation and GCP CLI upload tool
to set the chosen image name as custom metadata on the uploaded object.
This will make it possible to connect Storage objects to specific
images.
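A minimal sketch of attaching metadata on upload with the
cloud.google.com/go/storage client (bucket, object, file name, and
metadata key are hypothetical):

    package main

    import (
        "context"
        "io"
        "log"
        "os"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        f, err := os.Open("image.tar.gz") // hypothetical image archive
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        w := client.Bucket("my-bucket").Object("image.tar.gz").NewWriter(ctx)
        // Attach the chosen image name as custom metadata so the object
        // can later be connected to the image.
        w.Metadata = map[string]string{"image-name": "my-image"}

        if _, err := io.Copy(w, f); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }
    }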
Add a News entry about the image name being added as metadata to the
uploaded GCP Storage object as part of the worker job.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the internal GCP library to allow deleting a Compute Engine image
and instance. In addition, provide a function to load the service
account credentials file content from the environment.
Change names used for GCP image and instance in `api.sh` integration
test to make them predictable. This is important, so that cloud-cleaner
can identify potentially left over resources and clean them up. Use the
same approach for generating a predictable, but run-specific, test ID as
in GenerateCIArtifactName() from internal/test/helpers.go. Use SHA-224
to generate a hash from the string, because the string can contain
characters not allowed by GCP in resource names (specifically "_", e.g.
in "x86_64"). SHA-224 was picked because it generates short enough
output and is future proof for use in RHEL (unlike MD5 or SHA-1).
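For illustration, a minimal sketch of deriving such a name (the ID
format and name prefix are assumptions):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Hash a run-specific ID that may contain characters GCP rejects
        // in resource names (e.g. "_" in "x86_64").
        testID := "osbuild-composer-ci-1234-x86_64"
        digest := sha256.Sum224([]byte(testID))
        imageName := fmt.Sprintf("image-%x", digest)
        fmt.Println(imageName) // "image-" + 56 hex characters
    }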
Refactor cloud-cleaner to clean up GCP resources and also to run cleanup
for each cloud in a separate goroutine.
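A minimal sketch of the per-cloud fan-out (the cleanup functions are
stand-ins for the real per-provider routines):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        cleanups := map[string]func(){
            "aws":   func() { fmt.Println("cleaning up AWS") },
            "azure": func() { fmt.Println("cleaning up Azure") },
            "gcp":   func() { fmt.Println("cleaning up GCP") },
        }

        var wg sync.WaitGroup
        for name, cleanup := range cleanups {
            wg.Add(1)
            go func(name string, cleanup func()) {
                defer wg.Done()
                cleanup()
                fmt.Println(name, "cleanup finished")
            }(name, cleanup)
        }
        wg.Wait()
    }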
Modify run_cloud_cleaner.sh to be able to run in an environment in which
AZURE_CREDS is not defined.
Always run cloud-cleaner after integration tests for rhel8, rhel84 and
cs8, which test GCP.
Define DISTRO_CODE for each integration testing stage in Jenkinsfile.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Split the GCP library into multiple files:
- compute.go - code interacting mainly with Compute Engine resources
- storage.go - code interacting mainly with the Cloud Storage resources
- gcp.go - common code (e.g. authentication with GCP)
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The internal GCP library was originally placed in the `internal/upload`
directory, since its purpose was mainly to upload and import built
images to GCP.
The functionality of other cloud-provider-specific libraries is broader,
but it is scattered around the `internal/` directory based on purpose
(e.g. in `internal/boot` and `internal/upload`). Since all parts of a
provider-specific
library usually share some common pieces (e.g. authentication), it makes
sense to consolidate them into a single package (e.g. in
`internal/cloud/<provider>`).
Create `internal/cloud` directory, where all cloud-provider-specific
internal libraries should be consolidated. Start with GCP.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Most images share a build root; checkpoint the build root to speed up
tests at the potential cost of some disk space (in cases where the build
roots are not the same).
After a run, the store contains downloaded RPMs and whatever pipelines
were checkpointed. We want to reuse these for subsequent runs, so don't
delete the store.
This risks minimally increasing disk space usage, but should speed
things up significantly.