Composer does not have a 1:1 mapping between the possible host distro
names and the names of the supported distributions held in the
distroregistry. The host distro's `Name()` method, whose value is passed
to the Weldr API, does not return the same name as the one used as the
distro name for repository definitions. This makes it hard to use
`distro.Distro` and `distro.Arch` directly and rely on the names they
return.
Add `New*HostDistro()` to all distro definitions, accepting the name
that should be returned by the distro's `Name()` method. This is useful
mainly if the host distro is a Beta or Stream variant of the distro.
Change `distroregistry.Registry` to hold the host distro as a separate
value, set when the registry is created using the `New()` function. This
value is returned by the `Registry.FromHost()` method. Determining the
host distro is handled by the `NewDefault()` function. Move the distro
name mangling to the distroregistry package. Add relevant unit tests.
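A minimal sketch of the registry shape described above, assuming a
trimmed-down Distro interface (the real interface has many more
methods):

```go
package distroregistry

// Distro is a stand-in for the distro.Distro interface; only Name() matters here.
type Distro interface {
	Name() string
}

// Registry holds the supported distros keyed by their repository name, plus the
// host distro as a separate value so its Name() can differ (e.g. a Beta variant).
type Registry struct {
	distros    map[string]Distro
	hostDistro Distro
}

// New registers the given distros and remembers the host distro separately.
func New(hostDistro Distro, distros ...Distro) *Registry {
	reg := &Registry{
		distros:    make(map[string]Distro),
		hostDistro: hostDistro,
	}
	for _, d := range distros {
		reg.distros[d.Name()] = d
	}
	return reg
}

// FromHost returns the distro that was detected as the host when the
// registry was created (e.g. by NewDefault()).
func (r *Registry) FromHost() Distro {
	return r.hostDistro
}
```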
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the RepoConfig structure with a new field, ImageTypeTags. Extend
other structures and functions as needed to support loading repository
definitions which use this new field. The idea is that a repository
should be used for building all image types, unless it has some
ImageTypeTags defined. In that case, it should be used only for building
the image types whose names are listed in the new field.
Add RepoRegistry as a higher-level API to load and manage repository
definitions for each distribution. Currently it provides one method,
which returns a set of repositories needed to build a given image
type. The RepoRegistry uses the new ImageTypeTags field in the RepoConfig
structure and returns all the needed repositories for the image type.
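A rough sketch of the tagging logic; the struct is simplified and the
JSON key name is an assumption, but the filtering rule follows the
description above:

```go
package main

import "fmt"

// RepoConfig is a simplified repository definition; ImageTypeTags is the new field.
type RepoConfig struct {
	Name          string   `json:"name"`
	BaseURL       string   `json:"baseurl,omitempty"`
	ImageTypeTags []string `json:"image_type_tags,omitempty"`
}

// reposForImageType returns repos without tags (used for all image types)
// plus repos explicitly tagged with the given image type name.
func reposForImageType(repos []RepoConfig, imageType string) []RepoConfig {
	var result []RepoConfig
	for _, repo := range repos {
		if len(repo.ImageTypeTags) == 0 {
			result = append(result, repo)
			continue
		}
		for _, tag := range repo.ImageTypeTags {
			if tag == imageType {
				result = append(result, repo)
				break
			}
		}
	}
	return result
}

func main() {
	repos := []RepoConfig{
		{Name: "baseos"},
		{Name: "edge-only", ImageTypeTags: []string{"rhel-edge-commit"}},
	}
	fmt.Println(reposForImageType(repos, "qcow2")) // only "baseos"
}
```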
Modify rpmmd unit tests and add unit tests for RepoRegistry.
Add News entry describing the change done to RepoConfig and its JSON
representation.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Separate the loading of repo definitions from JSON file from
`LoadRepositories()` to a standalone function
`loadRepositoriesFromFile()`, to make it easy to reuse it in the future.
Add unit tests for `LoadRepositories()` function.
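A minimal sketch of the extracted loader, assuming the on-disk format is
a JSON object keyed by architecture name:

```go
package rpmmd

import (
	"encoding/json"
	"fmt"
	"os"
)

// RepoConfig is a trimmed-down repository definition for illustration.
type RepoConfig struct {
	Name    string `json:"name"`
	BaseURL string `json:"baseurl,omitempty"`
}

// loadRepositoriesFromFile reads repository definitions from one JSON file
// and returns them keyed by architecture name (the assumed on-disk layout).
func loadRepositoriesFromFile(path string) (map[string][]RepoConfig, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var repos map[string][]RepoConfig
	if err := json.NewDecoder(f).Decode(&repos); err != nil {
		return nil, fmt.Errorf("failed to parse %s: %w", path, err)
	}
	return repos, nil
}
```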
Exclude the github.com/osbuild/osbuild-composer/internal/rpmmd/test
package from test coverage. A package containing only tests and no other
code makes `go test` fail. This should be fixed in Go 1.17.
See https://github.com/golang/go/issues/27333
Signed-off-by: Tomas Hozza <thozza@redhat.com>
fedoratest was yet another dummy distribution used by unit tests. After
the rework of test_distro, there is no reason not to use test_distro as
the only distro implementation for testing purposes.
Remove fedoratest distro and replace it with test_distro in all affected
tests.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the "Test Distro" implementation and definition to contain two
architectures and make the second architecture contain two image types.
Add New2() function returning another "Test Distro".
Modify the `internal/store` unit tests to reflect changes done to the
"Test Distro".
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The parent commit would be added to the sources unconditionally. This
is only necessary for the edge-installer image type.
This doesn't technically change the build behaviour of an existing
distro and image type. It simply avoids unnecessarily downloading an
ostree commit when only the ref is needed.
It does change the 'sources' section of the manifest however.
The single `pipelines()` function is now replaced by multiple functions
for different purposes:
- `edgeCorePipelines()` defines the pipelines that create an edge
commit. This is used by both the `edge-commit` and `edge-container`
images.
- `edgeCommitPipelines()` and `edgeContainerPipelines()` define the
pipelines for `edge-commit` and `edge-container` respectively. They
share the core pipelines but differ in their final pipeline for
assembling the image into either a tarball or a container.
- `edgeInstallerPipeline()` shares almost no common parts with other
pipelines (only the `buildPipeline()`).
The `pipelines` function for each image type is set during creation of
the instance.
Individual pipeline functions are no longer methods of the image type.
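A small sketch of the pattern: each image type instance carries its own
pipelines function, chosen at creation time (names and signatures are
simplified):

```go
package distro

import "fmt"

// pipelinesFunc builds the pipelines for one image type; the real signature
// takes customizations, repositories, and package sets.
type pipelinesFunc func() ([]string, error)

type imageType struct {
	name      string
	pipelines pipelinesFunc
}

// newImageType wires the right pipeline generator in at creation time, so the
// generic manifest code can stay the same for every image type.
func newImageType(name string, f pipelinesFunc) imageType {
	return imageType{name: name, pipelines: f}
}

func edgeCommitPipelines() ([]string, error) {
	// core edge pipelines plus a final tarball-assembly pipeline
	return []string{"build", "ostree-tree", "ostree-commit", "commit-archive"}, nil
}

func edgeInstallerPipelines() ([]string, error) {
	return nil, fmt.Errorf("not implemented")
}

var _ = []imageType{
	newImageType("rhel-edge-commit", edgeCommitPipelines),
	newImageType("rhel-edge-installer", edgeInstallerPipelines),
}
```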
Separate function for installer-specific pipelines.
The installer image type is very different compared to the other edge
types (and even more so compared to the more general images), so
separating out the pipelines that are specific to the installer Manifest
makes the whole method a bit more readable.
This function is not "wired" to the main pipelines generation, but will
be when the image type is defined.
Added a helper function to the bootiso stage for setting the BCJ option
for xz compression.
The FSCompression struct is changed to use a pointer for the Options
substruct so it can be omitted when nil (omitempty).
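A small sketch of the pointer-plus-omitempty pattern; the field names
are modelled on the description above, not copied from the stage code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FSCompressionOptions holds e.g. the xz BCJ filter option.
type FSCompressionOptions struct {
	BCJ string `json:"bcj,omitempty"`
}

// FSCompression uses a pointer for Options so that a nil value is dropped
// entirely from the serialized JSON: omitempty has no effect on a non-pointer
// struct, but it does omit a nil pointer.
type FSCompression struct {
	Method  string                `json:"method"`
	Options *FSCompressionOptions `json:"options,omitempty"`
}

func main() {
	noOpts, _ := json.Marshal(FSCompression{Method: "xz"})
	withOpts, _ := json.Marshal(FSCompression{
		Method:  "xz",
		Options: &FSCompressionOptions{BCJ: "x86"},
	})
	fmt.Println(string(noOpts))   // {"method":"xz"}
	fmt.Println(string(withOpts)) // {"method":"xz","options":{"bcj":"x86"}}
}
```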
Based on rhel84 with minor changes:
- moved options and customizations checking to its own method on
imageType.
- added global `osVersion = "8.5"` for use in various labels and
metadata files.
- pipelines() function is empty and returns a "not implemented" error
since no image types are defined.
When a user wants to install a package that is itself excluded, or whose
dependency is excluded, the build fails. There is no known workaround
for this shortcoming of our current design.
Therefore, remove a package from the list of excluded packages if it is
explicitly mentioned in a blueprint. This will not solve the issue with
dependencies, but it creates the possibility of a workaround.
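A minimal sketch of the filtering step, assuming plain string slices for
the exclude list and the blueprint packages:

```go
package main

import "fmt"

// filterExcludes drops any excluded package that the blueprint requests
// explicitly, so the user's choice wins over the distro's exclude list.
func filterExcludes(excludes, blueprintPackages []string) []string {
	requested := make(map[string]bool, len(blueprintPackages))
	for _, pkg := range blueprintPackages {
		requested[pkg] = true
	}

	var result []string
	for _, pkg := range excludes {
		if !requested[pkg] {
			result = append(result, pkg)
		}
	}
	return result
}

func main() {
	excludes := []string{"dracut-config-rescue", "rng-tools"}
	blueprint := []string{"rng-tools"}
	fmt.Println(filterExcludes(excludes, blueprint)) // [dracut-config-rescue]
}
```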
Also, introduce regression test to verify the bug fix and hook it into
CentOS CI (this issue was reported against RHEL, but CentOS runs on AWS
so it is better to verify the fix there).
The `cloudbuildResourcesFromBuildLog()` function from the internal GCP
package could cause a panic while parsing the log of a Build job which
failed early and didn't create any Compute Engine resources. The
function relied on the `Regexp.FindStringSubmatch()` method always
returning a match when used on the build log. Indexing the resulting nil
slice would then panic in `osbuild-worker`, such as:
Stack trace of thread 185316:
#0 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#1 0x0000564e5391fa1e runtime.sigfwdgo (osbuild-worker)
#2 0x0000564e5391e354 runtime.sigtrampgo (osbuild-worker)
#3 0x0000564e5393b953 runtime.sigtramp (osbuild-worker)
#4 0x00007f37e98e3b20 __restore_rt (libpthread.so.0)
#5 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#6 0x0000564e5391f5ea runtime.crash (osbuild-worker)
#7 0x0000564e53909306 runtime.fatalpanic (osbuild-worker)
#8 0x0000564e53908ca1 runtime.gopanic (osbuild-worker)
#9 0x0000564e53906b65 runtime.goPanicIndex (osbuild-worker)
#10 0x0000564e5420b36e github.com/osbuild/osbuild-composer/internal/cloud/gcp.cloudbuildResourcesFromBuildLog (osbuild-worker)
#11 0x0000564e54209ebb github.com/osbuild/osbuild-composer/internal/cloud/gcp.(*GCP).CloudbuildBuildCleanup (osbuild-worker)
#12 0x0000564e54b05a9b main.(*OSBuildJobImpl).Run (osbuild-worker)
#13 0x0000564e54b08854 main.main (osbuild-worker)
#14 0x0000564e5390b722 runtime.main (osbuild-worker)
#15 0x0000564e53939a11 runtime.goexit (osbuild-worker)
Add a unit test covering this scenario.
Make the `cloudbuildResourcesFromBuildLog()` function more robust and
not blindly expect to find matches in the build log. As a result the
`cloudbuildBuildResources` struct instance returned from the function
may be empty. Subsequently make sure that the `CloudbuildBuildCleanup()`
method handles an empty `cloudbuildBuildResources` instance correctly.
Specifically, the `storageCacheDir.bucket` may be an empty string and
thus won't exist. Ensure that this does not result in an infinite loop
by checking for `storage.ErrBucketNotExist` while iterating over the
bucket objects.
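The gist of the hardening, in a reduced form: never index the result of
`FindStringSubmatch()` without a nil check (the regexp and the per-line
scan below are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Illustrative pattern; the real parser looks for several resource types.
var instanceRe = regexp.MustCompile(`Creating instance "([^"]+)"`)

// instancesFromBuildLog collects instance names mentioned in the build log.
// FindStringSubmatch returns nil when there is no match, so the result must
// be checked before indexing; a build that failed early may match nothing.
func instancesFromBuildLog(buildLog string) []string {
	var instances []string
	for _, line := range strings.Split(buildLog, "\n") {
		if match := instanceRe.FindStringSubmatch(line); match != nil {
			instances = append(instances, match[1])
		}
	}
	return instances
}

func main() {
	fmt.Println(instancesFromBuildLog("step failed before creating any resources"))
}
```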
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The GCP image import method currently uses the Cloud Build API with
Google's Daisy workflow. This workflow creates multiple GCE resources
during its execution. Although the desired Region for the imported image
is specified as a workflow argument, this has no effect on the GCE Zone
used by the workflow for the created resources, which seems to fall back
to the "us-central1-a" Zone. As a result, resources are commonly
exhausted in the default zone.
Add a method that translates the provided Google Storage Region to a GCE
Region; this is needed mainly for multi and dual Storage Regions.
Add a method that returns a list of available GCE Zones for a given GCE
Region.
Modify the ComputeImageImport() method to translate the provided Google
Storage Region to a list of corresponding GCE Regions. If the provided
Storage Region is not a multi or dual Region, the list contains only a
single item, the provided Region. Pick a random Region from the list,
then get the available GCE Zones within that Region and pick a random
one for use by the workflow. Specify the GCE Zone to use as a build step
argument.
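A sketch of the selection logic, with the Region translation and the
Zone listing stubbed out as a parameter:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickZone chooses a random GCE Region from the candidates (a multi or dual
// Storage Region maps to several), then a random Zone within that Region.
// zonesInRegion stands in for the API call that lists available Zones.
func pickZone(regions []string, zonesInRegion func(string) []string) string {
	region := regions[rand.Intn(len(regions))]
	zones := zonesInRegion(region)
	return zones[rand.Intn(len(zones))]
}

func main() {
	zones := map[string][]string{
		"europe-west1": {"europe-west1-b", "europe-west1-c"},
		"europe-west4": {"europe-west4-a"},
	}
	// Example: a multi-Region bucket location mapping to two GCE Regions.
	fmt.Println(pickZone([]string{"europe-west1", "europe-west4"}, func(r string) []string {
		return zones[r]
	}))
}
```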
This change should be completely transparent to the API user.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `distribution` struct defined in multiple distributions contained an
unused `imageTypes` field. Remove it to simplify the code.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
By default, `qemu-img convert` creates qcow2 images usable in qemu 1.1
and newer. RHEL 8 guest images are meant to be bootable on RHEL 6,
though. Unfortunately, RHEL 6 has qemu 0.12, so these images cannot be
used there.
To fix this, we need to use the new qcow2_compat option in the qemu
assembler to override the default compat version and produce qcow2
images that can be used with qemu 0.10 and newer.
This requires osbuild 28, which isn't yet available in any of the
downstreams, so we need to pin it everywhere.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
genisoimage might be removed from RHEL 9. Users are advised to switch to
the mkisofs tool from the xorriso package. It should be a drop-in
replacement.
The same change was recently done by libguestfs:
efb8a766ca2216ab2e32
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add method to fetch Cloudbuild job log.
Add method to parse Cloudbuild job log for created resources. Parsing is
specific to the Image import Cloudbuild job and its log format. Add
unit tests for the parsing function.
Add method to clean up all resources (instances, disks, storage objects)
after a Cloudbuild job.
Modify the worker osbuild job implementation and also the GCP upload CLI
tool to use the new cleanup method CloudbuildBuildCleanup().
Keep the StorageImageImportCleanup() method, because it is still used by
the cloud-cleaner tool. There is no way for the cloud-cleaner to figure
out the Cloudbuild job ID to be able to call CloudbuildBuildCleanup()
instead.
Add methods to delete Compute instance and disk.
Add method to get Compute instance information. This is useful for
checking whether the instance has already been deleted or whether it
still exists.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify all relevant methods in the internal GCP library to accept
context from the caller.
Modify all places which call the internal GCP library methods to pass
the context.
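The shape of the change on a single illustrative method (the method name
here is an example, not necessarily one of the real GCP wrapper
methods):

```go
package gcp

import "context"

// GCP is a stand-in for the internal GCP client wrapper.
type GCP struct{}

// Before: func (g *GCP) ComputeImageDelete(name string) error
// After: the caller supplies the context, so cancellation and deadlines
// propagate into the underlying Google API client calls.
func (g *GCP) ComputeImageDelete(ctx context.Context, name string) error {
	// ... pass ctx to the Compute client call here ...
	return nil
}
```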
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Validates the ref only when supplied through the API (i.e., doesn't
validate built-in defaults).
Regex matches ostree internal and cockpit-composer UI validation.
Added test case to compose API test.
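A hedged sketch of the validation; the exact pattern below is an
approximation for illustration, not necessarily the regex shared with
ostree and cockpit-composer:

```go
package main

import (
	"fmt"
	"regexp"
)

// ostreeRefRe is an illustrative approximation: slash-separated components of
// alphanumerics, dashes, underscores and dots, not starting with a dash or dot.
var ostreeRefRe = regexp.MustCompile(`^(?:[\w][-._\w]*/)*[\w][-._\w]*$`)

// validateOstreeRef rejects malformed refs supplied through the API;
// built-in default refs are trusted and skip this check.
func validateOstreeRef(ref string) error {
	if !ostreeRefRe.MatchString(ref) {
		return fmt.Errorf("invalid ostree ref %q", ref)
	}
	return nil
}

func main() {
	fmt.Println(validateOstreeRef("rhel/8/x86_64/edge")) // <nil>
	fmt.Println(validateOstreeRef("../etc/passwd"))      // error
}
```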
Replacing repeated calls to u.Parse() with path.Join() on the URL's
path. This method handles certain edge cases differently:
- location not ending in / (http://example.org/repo):
- with the old method, the subsequent parsing of "refs/heads/" would
overwrite the path segment of the original URL, resulting in
http://example.org/refs/heads
- with the new method, "refs/heads" is appended to the location and
a / is added between the two parts if necessary.
- ref begins with / (location: http://example.org/repo/, ref: /ref):
- with the old method, the final parsing of ref would overwrite the
path segment of the URL, resulting in http://example.org/ref
- with the new method, the ref is appended and a / is added between
parts where necessary (same as above).
- ref is a full URL
(location: http://example.org/repo/, ref: http://example.com):
- with the old method, u.Parse(ref) would completely overwrite the
existing URL in u.
- with the new method, the ref is added as a sanitised URL path
resulting in http://example.org/refs/heads/http:/example.com.
The last one will probably result in an error in either case, but it's
probably less incorrect to coerce the ref argument into a path.
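The core of the new approach, sketched with an illustrative helper name:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// refURL joins the ref onto the repo location's path, so a missing or extra
// "/" on either side cannot clobber the original URL the way repeated calls
// to u.Parse() could.
func refURL(location, ref string) (string, error) {
	u, err := url.Parse(location)
	if err != nil {
		return "", err
	}
	u.Path = path.Join(u.Path, "refs/heads/", ref)
	return u.String(), nil
}

func main() {
	s, _ := refURL("http://example.org/repo", "rhel/8/x86_64/edge")
	fmt.Println(s) // http://example.org/repo/refs/heads/rhel/8/x86_64/edge
}
```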
The response status code of the GET request is checked as well to
provide an appropriate error message if it is not 200 (OK).
If the data in the response is not a valid hex string, the error message
from the DecodeString() method isn't returned directly; it is replaced
by a more useful message and the original error is discarded.
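Roughly what the two checks look like; the function name and error
wording below are illustrative:

```go
package ostree

import (
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// resolveRef fetches the ref file and insists on a 200 response and a valid
// hex-encoded commit ID; the raw hex.DecodeString error is replaced with a
// message that says what was actually being parsed.
func resolveRef(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("ostree repository %q returned status %q", url, resp.Status)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	commit := strings.TrimSpace(string(body))
	if _, err := hex.DecodeString(commit); err != nil {
		return "", fmt.Errorf("ostree ref at %q does not resolve to a valid commit ID", url)
	}
	return commit, nil
}
```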
This adds support for Packages to the store's json structures so that
they will be preserved across restarts of the osbuild-composer service.
Reading the old format will result in an empty []rpmmd.PackageSpec so
this does not require a composeV1 structure.
This adjusts the current tests to account for the new struct member, and
tests osbuild-composer both with empty results (e.g. an existing system
will not have this stored) and with the sets populated by test data.
This adds the compose's dependency list which was previously missing
from the osbuild-composer implementation of the WELDR API.
The dependencies used for the compose are saved, at compose time, in the
store. They are returned as part of the compose/info results, in the
'deps' field.
This adds a list of the depsolved packages to the store's Compose
struct. It is indexed by compose UUID and contains a list of
PackageSpecs that were used to construct the compose. This can assist in
auditing of the composes, or be used to duplicate the compose.
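A sketch of the resulting store shape with heavily trimmed types; the
real Compose struct and rpmmd.PackageSpec carry many more fields:

```go
package store

// PackageSpec is a trimmed-down stand-in for rpmmd.PackageSpec.
type PackageSpec struct {
	Name    string `json:"name"`
	Version string `json:"version"`
}

// Compose keeps the depsolved package list alongside the rest of the compose
// state, so compose/info can return the 'deps' field even after a restart;
// old on-disk data simply decodes to an empty slice.
type Compose struct {
	Packages []PackageSpec `json:"packages"`
}

// Store indexes composes by compose UUID (string form here for simplicity).
type Store struct {
	Composes map[string]Compose `json:"composes"`
}
```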
Previously, the Cloud API endpoint `/v1/compose/{id}` returned "running"
in `image_status.status` for a running worker job, which didn't comply
with the Cloud API specification. The equivalents allowed by the API
specification are "building", "uploading" or "registering".
As a result, the Image Builder API also does not comply with its own
specification in this regard, because it currently just copies the
status value string returned by osbuild-composer.
Define the `image_status.status` as a reusable type in the Cloud API
specification. This forces openapi to generate an explicit type for it,
which can be then explicitly used in the code, instead of plain strings.
Return "building", instead of "running" for running compose.
Modify api integration test to check for all valid `image_status.status`
values for a compose.
Add News entry explaining this change.
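A sketch of what the explicit status type buys over plain strings; the
type and constant names mimic typical openapi-generated code and are not
copied from the spec:

```go
package main

import "fmt"

// ImageStatusValue is the reusable status type from the Cloud API spec.
type ImageStatusValue string

const (
	ImageStatusValuePending     ImageStatusValue = "pending"
	ImageStatusValueBuilding    ImageStatusValue = "building"
	ImageStatusValueUploading   ImageStatusValue = "uploading"
	ImageStatusValueRegistering ImageStatusValue = "registering"
	ImageStatusValueSuccess     ImageStatusValue = "success"
	ImageStatusValueFailure     ImageStatusValue = "failure"
)

// statusFromJobState is a simplified mapping from internal worker job state
// to spec-compliant values; a running job is reported as "building", never
// "running".
func statusFromJobState(finished, canceled, running bool) ImageStatusValue {
	switch {
	case canceled:
		return ImageStatusValueFailure
	case finished:
		return ImageStatusValueSuccess
	case running:
		return ImageStatusValueBuilding
	default:
		return ImageStatusValuePending
	}
}

func main() {
	fmt.Println(statusFromJobState(false, false, true)) // building
}
```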
Signed-off-by: Tomas Hozza <thozza@redhat.com>
A result from manifest v2 contains logs from pipelines. The individual
pipelines are stored as an object, thus they have no order. Well, at least
in Go, because it doesn't guarantee one when parsing maps, see:
https://github.com/golang/go/issues/27179
Unfortunately, this makes the Result.fromV2 method return unpredictable
results because the pipeline results are processed in basically a random
order.
This caused the TestUnmarshalV2Failure test (result_test.go:124) to randomly
fail because it expects the ordering to be stable. I decided to fix this by
ordering the pipeline results by their name. When this fix is applied, the
output from Result.fromV2 is well-defined.
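The fix in essence: collect the map keys, sort them, and iterate in that
order so the combined output is deterministic (types simplified):

```go
package main

import (
	"fmt"
	"sort"
)

// orderedPipelineNames returns the pipeline result keys in a stable order;
// ranging over the map directly could yield a different order on every run.
func orderedPipelineNames(results map[string][]string) []string {
	names := make([]string, 0, len(results))
	for name := range results {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	results := map[string][]string{
		"assembler": {"..."},
		"build":     {"..."},
		"os":        {"..."},
	}
	for _, name := range orderedPipelineNames(results) {
		fmt.Println(name)
	}
}
```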
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Detecting if a result is in the v1 format using DisallowUnknownFields
was clever. However, the format of the v1 result changed in the worst
way at some point in the past: a field (id) got added. Therefore, this
was an unknown field to osbuild-composer and it fell back to decoding
the input as a v2 result, which led to empty logs.
This commit reimplements the result schema check: the new format always
has a top-level type field, whereas the old format doesn't. The new
implementation uses this fact to distinguish between the v1 and v2
formats of a result.
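A compact sketch of the new check; the probe struct is illustrative, but
the idea is exactly the one described above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isV2Result reports whether the raw osbuild output uses the v2 schema, which
// always carries a top-level "type" field; v1 results never have one, so extra
// fields such as the later-added "id" no longer break the detection.
func isV2Result(data []byte) (bool, error) {
	var probe struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(data, &probe); err != nil {
		return false, err
	}
	return probe.Type != "", nil
}

func main() {
	v1, _ := isV2Result([]byte(`{"success": true, "id": "abc", "log": {}}`))
	v2, _ := isV2Result([]byte(`{"type": "result", "success": true}`))
	fmt.Println(v1, v2) // false true
}
```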
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
When a stage is successful in a manifest v2 result, the success field is
omitted from the result. In other words, the default value of the
success field is true, which conflicts with Go's zero value for a
boolean. This commit implements a workaround.
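One way to implement such a workaround, sketched with a wrapper type
that distinguishes an omitted field from an explicit false:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rawStageResult uses a pointer so "field absent" can be told apart from
// "field set to false"; Go's zero value for bool would otherwise flip an
// omitted (i.e. successful) stage into a failure.
type rawStageResult struct {
	Success *bool  `json:"success"`
	Output  string `json:"output"`
}

type StageResult struct {
	Success bool
	Output  string
}

func decodeStageResult(data []byte) (StageResult, error) {
	var raw rawStageResult
	if err := json.Unmarshal(data, &raw); err != nil {
		return StageResult{}, err
	}
	success := true // default when the field is omitted
	if raw.Success != nil {
		success = *raw.Success
	}
	return StageResult{Success: success, Output: raw.Output}, nil
}

func main() {
	r, _ := decodeStageResult([]byte(`{"output": "ok"}`))
	fmt.Println(r.Success) // true
}
```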
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Bug fix for changes introduced in #1244.
The new image types, rhel-edge-container and rhel-edge-installer, would
ignore the user-supplied ostree ref and use the default everywhere.
The default should only be used when a ref is not specified, which the
weldr API takes care of before calling the Manifest() method.
User home directories don't survive the rpm-ostree stage. They are
converted to systemd-tmpfiles entries via rpm-ostree post-processing,
but the contents are left behind, so any keys we add to the
authorized_keys file will be gone.
This stage sets up a first-boot service that writes the user's public
key to the file in the home directory during the first system boot.
In the Anaconda pipeline, the kickstart stage should fetch the commit
we're embedding. It was mistakenly trying to fetch from the URL used to
build the image instead.