The parent commit would be added to the sources unconditionally. This
is only necessary for the edge-installer image type.
This doesn't technically change the build behaviour of any existing
distro or image type. It simply avoids unnecessarily downloading an
ostree commit when only the ref is needed.
It does, however, change the 'sources' section of the manifest.
The single `pipelines()` function is now replaced by multiple functions
for different purposes:
- `edgeCorePipelines()` defines the pipelines that create an edge
commit. This is used by both the `edge-commit` and `edge-container`
images.
- `edgeCommitPipelines()` and `edgeContainerPipelines()` define the
pipelines for `edge-commit` and `edge-container` respectively. They
share the core pipelines but differ in their final pipeline for
assembling the image into either a tarball or a container.
- `edgeInstallerPipeline()` shares almost no common parts with other
pipelines (only the `buildPipeline()`).
The `pipelines` function for each image type is set during creation of
the instance.
Individual pipeline functions are no longer methods of the image type.
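Roughly, the wiring looks like the following sketch (type names, signatures
and pipeline names are illustrative; the real functions take more arguments):

// pipelinesFunc builds the osbuild pipelines for a single image type.
type pipelinesFunc func(t *imageType) ([]pipeline, error)

type pipeline struct{ Name string }

type imageType struct {
    name      string
    pipelines pipelinesFunc // set when the image type instance is created
}

// edgeCorePipelines defines the pipelines shared by edge-commit and edge-container.
func edgeCorePipelines(t *imageType) ([]pipeline, error) {
    return []pipeline{{Name: "build"}, {Name: "ostree-tree"}, {Name: "ostree-commit"}}, nil
}

// edgeCommitPipelines reuses the core pipelines and appends the tarball assembly step.
func edgeCommitPipelines(t *imageType) ([]pipeline, error) {
    pipelines, err := edgeCorePipelines(t)
    if err != nil {
        return nil, err
    }
    return append(pipelines, pipeline{Name: "commit-archive"}), nil
}

func newEdgeCommitImageType() *imageType {
    return &imageType{name: "edge-commit", pipelines: edgeCommitPipelines}
}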
Separate function for installer-specific pipelines.
The installer image type is very different from the other edge types
(and even more so from the more general image types), so separating out
the pipelines that are specific to the installer manifest makes the
whole method a bit more readable.
This function is not "wired" to the main pipelines generation, but will
be when the image type is defined.
Added a helper function to the bootiso stage for setting the BCJ option
for xz compression.
The FSCompression struct is changed to use a pointer for the Options
substruct so it can be omitted when nil (omitempty).
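A minimal sketch of the change, with field and helper names partly assumed:

// FSCompressionOptions carries the xz options; BCJ selects the branch/call/jump
// filter for the target architecture.
type FSCompressionOptions struct {
    BCJ string `json:"bcj,omitempty"`
}

type FSCompression struct {
    Method  string                `json:"method"`
    Options *FSCompressionOptions `json:"options,omitempty"` // pointer, so a nil value is dropped from the JSON
}

// BCJOption maps an architecture to an xz BCJ filter name (the mapping shown
// here is illustrative).
func BCJOption(arch string) string {
    switch arch {
    case "x86_64":
        return "x86"
    case "aarch64":
        return "arm"
    }
    return ""
}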
Based on rhel84 with minor changes:
- moved options and customizations checking to its own method on
imageType.
- added global `osVersion = "8.5"` for use in various labels and
metadata files.
- the pipelines() function is empty and returns a "not implemented" error
since no image types are defined yet.
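A minimal sketch of the stub, with a simplified signature and illustrative
types:

package rhel85

import "errors"

// Global OS version used in various labels and metadata files.
const osVersion = "8.5"

type pipeline struct{ Name string }
type imageType struct{ name string }

// pipelines is a stub until image types are defined for this distro.
func (t *imageType) pipelines() ([]pipeline, error) {
    return nil, errors.New("not implemented")
}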
Not proud of the fix but it should work for now. See the comment in the spec
file for more information and also the upstream PR for more context:
https://github.com/getkin/kin-openapi/pull/351
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
When a user wants to install a package that is itself excluded, or whose
dependency is excluded, the build fails. There is no known workaround for
this shortcoming of our current design.
Therefore, remove a package from the list of excluded packages if it is
explicitly mentioned in a blueprint. This will not solve the issue with
dependencies, but it creates the possibility of a workaround.
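A sketch of the workaround (the helper name is hypothetical): drop a package
from the exclude list when a blueprint names it explicitly.

// filterExcludes returns the exclude list without any packages that the
// blueprint requests explicitly.
func filterExcludes(excluded, blueprintPackages []string) []string {
    requested := make(map[string]bool, len(blueprintPackages))
    for _, pkg := range blueprintPackages {
        requested[pkg] = true
    }
    var result []string
    for _, pkg := range excluded {
        if !requested[pkg] {
            result = append(result, pkg)
        }
    }
    return result
}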
Also, introduce a regression test to verify the bug fix and hook it into
CentOS CI (this issue was reported against RHEL, but CentOS runs on AWS,
so it is better to verify the fix there).
This uses an image created and uploaded to Azure using composer-cli,
and then Terraform to spin up a Linux VM from that image, check that
the machine works, and then clean up everything.
The `cloudbuildResourcesFromBuildLog()` function from the internal GCP
package could cause a panic while parsing the log of a Build job which
failed early and didn't create any Compute Engine resources. The function
relied on the `Regexp.FindStringSubmatch()` method always returning a
match when used on the build log. Accessing a member of a nil slice
would cause a panic in `osbuild-worker`, such as:
Stack trace of thread 185316:
#0 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#1 0x0000564e5391fa1e runtime.sigfwdgo (osbuild-worker)
#2 0x0000564e5391e354 runtime.sigtrampgo (osbuild-worker)
#3 0x0000564e5393b953 runtime.sigtramp (osbuild-worker)
#4 0x00007f37e98e3b20 __restore_rt (libpthread.so.0)
#5 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#6 0x0000564e5391f5ea runtime.crash (osbuild-worker)
#7 0x0000564e53909306 runtime.fatalpanic (osbuild-worker)
#8 0x0000564e53908ca1 runtime.gopanic (osbuild-worker)
#9 0x0000564e53906b65 runtime.goPanicIndex (osbuild-worker)
#10 0x0000564e5420b36e github.com/osbuild/osbuild-composer/internal/cloud/gcp.cloudbuildResourcesFromBuildLog (osbuild-worker)
#11 0x0000564e54209ebb github.com/osbuild/osbuild-composer/internal/cloud/gcp.(*GCP).CloudbuildBuildCleanup (osbuild-worker)
#12 0x0000564e54b05a9b main.(*OSBuildJobImpl).Run (osbuild-worker)
#13 0x0000564e54b08854 main.main (osbuild-worker)
#14 0x0000564e5390b722 runtime.main (osbuild-worker)
#15 0x0000564e53939a11 runtime.goexit (osbuild-worker)
Add a unit test covering this scenario.
Make the `cloudbuildResourcesFromBuildLog()` function more robust so that
it does not blindly expect to find matches in the build log. As a result,
the `cloudbuildBuildResources` struct instance returned from the function
may be empty. Subsequently, make sure that the `CloudbuildBuildCleanup()`
method handles an empty `cloudbuildBuildResources` instance correctly.
Specifically, `storageCacheDir.bucket` may be an empty string and thus
won't exist. Ensure that this does not result in an infinite loop by
checking for `storage.ErrBucketNotExist` while iterating over the bucket
objects.
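A sketch of the defensive handling (regexps, helper names and signatures are
illustrative, not the actual code):

package gcp

import (
    "context"
    "errors"
    "regexp"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

// Illustrative pattern; the real regexps in this package differ.
var createdDiskRE = regexp.MustCompile(`Created \[.*/disks/([^\]]+)\]`)

// parseCreatedDisk returns the disk name from one build log line, or "" when
// the line does not match, so the caller never indexes into a nil slice.
func parseCreatedDisk(line string) string {
    match := createdDiskRE.FindStringSubmatch(line)
    if match == nil {
        return ""
    }
    return match[1]
}

// listCacheObjects lists the objects in the cache bucket. The bucket name may
// be "" when the build failed early, so a missing bucket is treated the same
// as an exhausted iterator instead of being retried forever.
func listCacheObjects(ctx context.Context, client *storage.Client, bucket string) ([]string, error) {
    var names []string
    it := client.Bucket(bucket).Objects(ctx, nil)
    for {
        attrs, err := it.Next()
        if errors.Is(err, iterator.Done) || errors.Is(err, storage.ErrBucketNotExist) {
            return names, nil
        }
        if err != nil {
            return names, err
        }
        names = append(names, attrs.Name)
    }
}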
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The GCP image import method currently uses the Cloud Build API with
Google's Daisy workflow. This workflow creates multiple GCE resources
during its execution. Although the desired Region for the imported image
is specified as a workflow argument, this has no effect on the GCE Zone
used by the workflow for created resources, which seems to default to
the "us-central1-a" Zone. As a result, resources are commonly exhausted
in the default zone.
Add a method which translates the provided Google Storage Region to a
GCE Region; this is needed mainly for multi- and dual-Region Storage
Regions.
Add a method which returns a list of available GCE Zones for a given
GCE Region.
Modify the ComputeImageImport() method to translate the provided Google
Storage Region to a list of corresponding GCE Regions. If the provided
Storage Region is not a multi or dual Region, the list contains only a
single item, the provided Region. Then pick a random Region from the
list. Subsequently, get the available GCE Zones within the Region and
pick a random one for use by the workflow. Specify the GCE Zone to use
as a build step argument.
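A sketch of the selection logic (helper names here are assumptions standing
in for the two new methods described above):

package gcp

import "math/rand"

// pickImportZone chooses a random GCE Zone for the import workflow.
func pickImportZone(
    storageToComputeRegions func(storageRegion string) []string,
    zonesInRegion func(region string) ([]string, error),
    storageRegion string,
) (string, error) {
    // A single entry unless the Storage Region is a multi or dual Region.
    regions := storageToComputeRegions(storageRegion)
    region := regions[rand.Intn(len(regions))]

    zones, err := zonesInRegion(region)
    if err != nil {
        return "", err
    }
    return zones[rand.Intn(len(zones))], nil
}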
This change should be completely transparent to the API user.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
After a change to image-info [1], which shows the format version for qcow2
images, the `image-format` field changed from a string to a dictionary.
However, the `open_image()` function still compares it with a string. This
causes `raw` images to be needlessly converted by the script to `raw`
format again. This change fixes the issue, so that `raw` images are not
converted, but used as they are.
[1] 5937b9adca
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Update ostree-ng.sh to install and run the ostree commit on a UEFI VM;
ostree.sh keeps running on a BIOS VM, so both BIOS and UEFI are covered.
The check_ostree.yaml Ansible playbook has to be updated to support
both BIOS and UEFI.
s3cmd sync actually downloads metadata for all objects in an S3 bucket.
We have built a lot of RPMs, thus this takes 5 minutes on AWS and 25
minutes on my laptop (!!!).
Let's use a recursive put instead. This doesn't delete any files on the
remote side. As we upload RPMs only once, it also shouldn't fail with
"the object already exists". Using this method, we should be able to
upload the RPMs in seconds.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
sudo journalctl -af -n 1 -u "${WORKER_UNIT}" &
WORKER_JOURNAL_PID=$!
In this snippet, WORKER_JOURNAL_PID is set to the PID of the sudo process.
Sudo doesn't propagate any signals, therefore the child process of sudo
(journalctl in this case) isn't killed when a signal is sent to the parent.
Use pkill -P instead, which kills all processes whose parent is the sudo
process.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
GitLab CI builds its own RPMs and thus it must use a different path.
This commit modifies mockbuild.sh and deploy.sh so that an extra path
segment can be added into the path, allowing GitLab to use a different
one.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
According to Ben's suggestion, the test has been updated to align with
the customer scenario:
1. Set up an ostree prod repo; building the installer and the edge
upgrade will be done from the prod repo.
2. Containers for building the installer and the edge upgrade will be
running as the stage repo.
3. Before the edge system update, the prod repo will pull the update
content from the stage repo and make a static delta and summary.
The `distribution` struct defined in multiple distributions contained an
unused `imageTypes` field. Remove it to simplify the code.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
By default, `qemu-img convert` creates qcow2 images usable in qemu 1.1 and
newer. RHEL 8 guest images are meant to be bootable on RHEL 6 though.
Unfortunately, RHEL 6 has qemu 0.12, therefore these images cannot be used
there.
To fix this, we need to use the new qcow2_compat option in the qemu
assembler to override the default compat version and make qcow2 images
that can be used in qemu 0.10 and newer.
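A sketch of what the assembler options could look like; the Go field names
are assumptions, but the JSON key matches osbuild's qcow2_compat option:

type QEMUAssemblerOptions struct {
    Format      string `json:"format"`
    Filename    string `json:"filename"`
    Size        uint64 `json:"size"`
    Qcow2Compat string `json:"qcow2_compat,omitempty"`
}

// guestImageAssemblerOptions forces the compat level down so that qemu 0.10
// and newer (and therefore RHEL 6) can use the resulting image.
func guestImageAssemblerOptions(size uint64) QEMUAssemblerOptions {
    return QEMUAssemblerOptions{
        Format:      "qcow2",
        Filename:    "disk.qcow2",
        Size:        size,
        Qcow2Compat: "0.10",
    }
}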
For this, we need osbuild 28, which isn't yet available in any of the
downstreams, therefore we need to pin it everywhere.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Change the "image-format" from a string to a dict, with a "type":
$value entry, where $value contains the previous plain string
data.
Additionally, include the qcow2 format version, if the given
image is indeed a qcow2.
Adapt all manifest test accordingly (partly done by Ondřej)
Python 3 script used for conversion of manifest tests:
import os
import json

for name in os.listdir(os.getcwd()):
    if not name.endswith(".json"):
        continue
    print(name)
    with open(name, "r") as old:
        data = json.load(old)
    info = data.get("image-info", {})
    format = info.get("image-format")
    if not format:
        continue
    info["image-format"] = {
        "type": format
    }
    if format != "qcow2":
        continue
    info["image-format"]["compat"] = "1.1"
    with open(name + ".new", "w") as new:
        json.dump(data, new, indent=2)
        new.write("\n")
        new.flush()
    os.rename(name+".new", name)
test: use the new image-info format in all test manifests
The previous commit converted only qcow2 and openstack manifests, but this
change is actually needed for all manifests produced by the qemu assembler.
Co-Developed-by: Ondřej Budai <ondrej@budai.cz>