Create a template to help us with the bug reporting process. This template includes a request for the information we usually ask reporters for. This way, they can include the information upfront.
Create an entry point for all regression tests called "regression.sh" and
run it as part of the base tests for all our distros. This entry
point contains logic for running only the test cases that are
appropriate for a given distribution.
cloud-init was enabled explicitly in the image-factory kickstart and thus we
need to explicitly enable it too.
Fixes: rhbz#1960309
Fixes: COMPOSER-920
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
All our images now ship with cloud-init installed and enabled, so there's
no need to explicitly install and enable it in a kickstart.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
VHD images are meant for Azure and we indeed test that they're bootable in
test/cases/azure.sh. There's no reason to test them using libvirt anymore,
so this commit just removes the test.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Modify composer to use RepoRegistry, instead of loading the host
repositories, when initializing WeldrAPI.
Modify WeldrAPI to use RepoRegistry, instead of a map of repository
definitions. Make sure that the image-type-specific RepoRegistry method
is used in Weldr where appropriate, specifically when depsolving a
Blueprint that is used to build a specific image type. Update the Weldr
API unit tests to reflect the change.
Add a new method to RepoRegistry that returns the list of repositories
which should be used for building an image for a given architecture,
without specifying the exact image type. Add relevant unit tests.
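A rough sketch of what that arch-level lookup might look like (type and
method names here are illustrative, not the exact ones from internal/rpmmd):

package main

import "fmt"

// RepoConfig is a trimmed-down repository definition; repos carrying
// ImageTypeTags apply only to the listed image types.
type RepoConfig struct {
	Name          string
	ImageTypeTags []string
}

// RepoRegistry sketches the registry used by the Weldr API.
type RepoRegistry struct {
	// keyed by distro name, then architecture name
	repos map[string]map[string][]RepoConfig
}

// reposByArch returns the repositories used for building images on the given
// distro/arch without naming an exact image type, so it only returns repos
// that are not restricted by ImageTypeTags.
func (r *RepoRegistry) reposByArch(distroName, archName string) ([]RepoConfig, error) {
	byArch, ok := r.repos[distroName]
	if !ok {
		return nil, fmt.Errorf("unknown distro %q", distroName)
	}
	var result []RepoConfig
	for _, repo := range byArch[archName] {
		if len(repo.ImageTypeTags) == 0 {
			result = append(result, repo)
		}
	}
	return result, nil
}

func main() {
	reg := &RepoRegistry{repos: map[string]map[string][]RepoConfig{
		"rhel-85": {"x86_64": {
			{Name: "baseos"},
			{Name: "appstream"},
			{Name: "rhel-edge", ImageTypeTags: []string{"edge-commit"}},
		}},
	}}
	repos, err := reg.reposByArch("rhel-85", "x86_64")
	if err != nil {
		panic(err)
	}
	fmt.Println(repos) // [{baseos []} {appstream []}]
}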
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Composer does not have a 1:1 mapping between possible host distro names
and the names of the supported distributions held in the distroregistry.
The host distro's `Name()` method, as passed to the Weldr API, does not
return the same name as the one used as the distro name for repository
definitions. This makes it hard to use `distro.Distro` and `distro.Arch`
directly and rely on the names they return.
Add `New*HostDistro()` constructors to all distro definitions, accepting
the name that should be returned by the distro's `Name()` method. This is
useful mainly if the host distro is a Beta or Stream variant of the distro.
Change distroregistry.Registry to contain the host distro as a separate
value set when creating it using the `New()` function. This value is
returned by the `Registry.FromHost()` method. Determining the host distro
is handled by the `NewDefault()` function. Move the distro name mangling to
the distroregistry package. Add relevant unit tests.
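A minimal sketch of the resulting registry shape, with simplified stand-ins
for the real types in internal/distro and internal/distroregistry:

package main

import "fmt"

// Distro is a minimal stand-in for the distro.Distro interface.
type Distro interface {
	Name() string
}

type mockDistro struct{ name string }

func (d *mockDistro) Name() string { return d.name }

// Registry keeps the host distro as an explicit value instead of looking it
// up by name among the supported distros.
type Registry struct {
	distros    map[string]Distro
	hostDistro Distro
}

// New registers the supported distros and remembers hostDistro separately;
// hostDistro may be nil when composer does not run on a supported distro.
func New(hostDistro Distro, distros ...Distro) *Registry {
	reg := &Registry{distros: map[string]Distro{}, hostDistro: hostDistro}
	for _, d := range distros {
		reg.distros[d.Name()] = d
	}
	return reg
}

// FromHost returns the distro composer is running on, as set via New().
// In the real package, NewDefault() detects the host distro and applies the
// name mangling before calling New().
func (r *Registry) FromHost() Distro { return r.hostDistro }

func main() {
	// A hypothetical "host distro" constructor lets a Beta host report the
	// name used by the repository definitions (e.g. "rhel-85").
	host := &mockDistro{name: "rhel-85"}
	reg := New(host, host, &mockDistro{name: "fedora-34"})
	fmt.Println(reg.FromHost().Name()) // rhel-85
}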
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the RepoConfig structure to contain a new field, ImageTypeTags.
Also extend other structures and functions as needed to support loading
repository definitions which use this new field. The idea is that a
repository should be used for building all image types, unless it has
some ImageTypeTags defined. In that case, it should be used only for
building the specific image types whose names are specified in the new
field.
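A minimal sketch of that selection rule, using a trimmed-down RepoConfig
(the JSON tag shown for the new field is an assumption):

package main

import "fmt"

// RepoConfig carries the new ImageTypeTags field: when the list is empty the
// repository is used for every image type, otherwise only for the named ones.
type RepoConfig struct {
	Name          string   `json:"name"`
	ImageTypeTags []string `json:"image_type_tags,omitempty"`
}

// reposForImageType applies that rule to one architecture's repositories.
func reposForImageType(repos []RepoConfig, imageTypeName string) []RepoConfig {
	var result []RepoConfig
	for _, repo := range repos {
		if len(repo.ImageTypeTags) == 0 {
			result = append(result, repo) // untagged: used for all image types
			continue
		}
		for _, tag := range repo.ImageTypeTags {
			if tag == imageTypeName {
				result = append(result, repo)
				break
			}
		}
	}
	return result
}

func main() {
	repos := []RepoConfig{
		{Name: "baseos"},
		{Name: "appstream"},
		{Name: "rhel-edge", ImageTypeTags: []string{"edge-commit", "edge-container"}},
	}
	fmt.Println(len(reposForImageType(repos, "edge-commit"))) // 3
	fmt.Println(len(reposForImageType(repos, "qcow2")))       // 2
}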
Add RepoRegistry as a higher-level API to load and manage repository
definitions for each distribution. Currently it provides one method,
which returns a set of repositories needed to build a given image
type. The RepoRegistry uses the new ImageTypeTags field in the RepoConfig
structure and returns all the needed repositories for the image type.
Modify rpmmd unit tests and add unit tests for RepoRegistry.
Add News entry describing the change done to RepoConfig and its JSON
representation.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Separate the loading of repo definitions from a JSON file out of
`LoadRepositories()` into a standalone function,
`loadRepositoriesFromFile()`, to make it easier to reuse in the future.
Add unit tests for the `LoadRepositories()` function.
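A rough sketch of the split, assuming a simplified RepoConfig and an
illustrative on-disk layout (one "<distro>.json" file per distro, keyed by
architecture):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// RepoConfig is a trimmed-down repository definition for illustration.
type RepoConfig struct {
	Name    string `json:"name"`
	BaseURL string `json:"baseurl"`
}

// loadRepositoriesFromFile decodes a single repository definition file,
// keyed by architecture. Keeping it separate from LoadRepositories() makes
// it easy to reuse and to unit test against a fixture file.
func loadRepositoriesFromFile(path string) (map[string][]RepoConfig, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var repos map[string][]RepoConfig
	if err := json.NewDecoder(f).Decode(&repos); err != nil {
		return nil, fmt.Errorf("decoding %s: %w", path, err)
	}
	return repos, nil
}

// LoadRepositories searches the configured paths for "<distro>.json" and
// returns the first definition found.
func LoadRepositories(confPaths []string, distro string) (map[string][]RepoConfig, error) {
	for _, dir := range confPaths {
		repos, err := loadRepositoriesFromFile(filepath.Join(dir, distro+".json"))
		if err == nil {
			return repos, nil
		}
		if !os.IsNotExist(err) {
			return nil, err
		}
	}
	return nil, fmt.Errorf("no repositories found for distro %q", distro)
}

func main() {
	_, err := LoadRepositories([]string{"./repositories"}, "fedora-34")
	fmt.Println(err)
}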
Exclude the github.com/osbuild/osbuild-composer/internal/rpmmd/test package
from test coverage. A package with just tests and no other code makes `go
test` fail. This should be fixed in Go 1.17.
See https://github.com/golang/go/issues/27333
Signed-off-by: Tomas Hozza <thozza@redhat.com>
fedoratest was yet another dummy distribution used by unit tests. After
the rework of test_distro, there is no reason not to use test_distro as
the only distro implementation for testing purposes.
Remove fedoratest distro and replace it with test_distro in all affected
tests.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the "Test Distro" implementation and definition to contain two
architectures and make the second architecture contain two image types.
Add a New2() function returning another "Test Distro".
Modify the `internal/store` unit tests to reflect changes done to the
"Test Distro".
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The parent commit would be added to the sources unconditionally. This
is only necessary for the edge-installer image type.
This doesn't technically change the build behaviour of an existing
distro and image type. It simply avoids unnecessarily downloading an
ostree commit when only the ref is needed.
It does, however, change the 'sources' section of the manifest.
The single `pipelines()` function is now replaced by multiple functions
for different purposes:
- `edgeCorePipelines()` defines the pipelines that create an edge
commit. This is used by both the `edge-commit` and `edge-container`
images.
- `edgeCommitPipelines()` and `edgeContainerPipelines()` define the
pipelines for `edge-commit` and `edge-container` respectively. They
share the core pipelines but differ in their final pipeline for
assembling the image into either a tarball or a container.
- `edgeInstallerPipeline()` shares almost no common parts with other
pipelines (only the `buildPipeline()`).
The `pipelines` function for each image type is set during creation of
the instance.
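Roughly, the split looks like this (pipeline names and return types are
illustrative; only the function layout mirrors the description above):

package main

import "fmt"

// pipeline is a stand-in for an osbuild pipeline definition.
type pipeline struct{ Name string }

// imageType stores the pipelines function assigned when the instance is
// created, instead of a single shared pipelines() method.
type imageType struct {
	name      string
	pipelines func() ([]pipeline, error)
}

// buildPipeline is the only pipeline shared by all image types.
func buildPipeline() pipeline { return pipeline{Name: "build"} }

// edgeCorePipelines defines the pipelines that create an edge commit; it is
// shared by edge-commit and edge-container. The pipeline names are made up
// for illustration.
func edgeCorePipelines() []pipeline {
	return []pipeline{buildPipeline(), {Name: "ostree-tree"}, {Name: "ostree-commit"}}
}

func edgeCommitPipelines() ([]pipeline, error) {
	return append(edgeCorePipelines(), pipeline{Name: "commit-archive"}), nil
}

func edgeContainerPipelines() ([]pipeline, error) {
	return append(edgeCorePipelines(), pipeline{Name: "container"}), nil
}

// edgeInstallerPipeline shares only the build pipeline with the other types.
func edgeInstallerPipeline() ([]pipeline, error) {
	return []pipeline{buildPipeline(), {Name: "anaconda-tree"}, {Name: "bootiso"}}, nil
}

func main() {
	types := []imageType{
		{name: "edge-commit", pipelines: edgeCommitPipelines},
		{name: "edge-container", pipelines: edgeContainerPipelines},
		{name: "edge-installer", pipelines: edgeInstallerPipeline},
	}
	for _, t := range types {
		p, _ := t.pipelines()
		fmt.Println(t.name, len(p))
	}
}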
Individual pipeline functions are no longer methods of the image type.
Separate function for installer-specific pipelines.
The installer image type is very different from the other edge
types (and even more so from the more general image types), so
separating out the pipelines that are specific to the installer Manifest
makes the whole method a bit more readable.
This function is not yet "wired" into the main pipeline generation, but
will be once the image type is defined.
Added a helper function to the bootiso stage for setting the BCJ option
for xz compression.
The FSCompression struct is changed to use a pointer for the Options
substruct so it can be omitted when nil (omitempty).
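A minimal sketch of the omitempty behaviour and the helper, assuming
simplified structs (the BCJ mapping shown is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// FSCompressionOptions holds the xz options; BCJ selects the branch/call/jump
// filter matching the target architecture.
type FSCompressionOptions struct {
	BCJ string `json:"bcj,omitempty"`
}

// FSCompression uses a pointer for Options so that "omitempty" drops the
// field from the serialized stage when it is nil.
type FSCompression struct {
	Method  string                `json:"method"`
	Options *FSCompressionOptions `json:"options,omitempty"`
}

// bcjOption maps an architecture to an xz BCJ filter name; the mapping here
// is illustrative, not the one used by the real helper.
func bcjOption(arch string) string {
	switch arch {
	case "x86_64":
		return "x86"
	case "aarch64":
		return "arm"
	default:
		return ""
	}
}

func main() {
	// With a nil Options pointer the "options" key is omitted entirely.
	plain, _ := json.Marshal(FSCompression{Method: "xz"})
	fmt.Println(string(plain)) // {"method":"xz"}

	withBCJ, _ := json.Marshal(FSCompression{
		Method:  "xz",
		Options: &FSCompressionOptions{BCJ: bcjOption("x86_64")},
	})
	fmt.Println(string(withBCJ)) // {"method":"xz","options":{"bcj":"x86"}}
}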
Based on rhel84 with minor changes:
- moved options and customizations checking to its own method on
imageType.
- added global `osVersion = "8.5"` for use in various labels and
metadata files.
- the pipelines() function is empty and returns a "not implemented" error
since no image types are defined yet.
Not proud of the fix but it should work for now. See the comment in the spec
file for more information and also the upstream PR for more context:
https://github.com/getkin/kin-openapi/pull/351
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
When a user wants to install a package that is itself excluded or whose
dependency is excluded, the build fails. There is no known workaround
for this shortcoming of our current design.
Therefore, remove a package from the list of excluded packages if it is
explicitly mentioned in a blueprint. This will not solve the issue with
dependencies, but it will create a possibility of a workaround.
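A minimal sketch of the rule, with made-up package names (the real logic
lives in the image type's package set assembly):

package main

import "fmt"

// filterExcludes drops a package from the exclude list when a blueprint names
// it explicitly, so the user's choice wins over the image type's excludes.
func filterExcludes(excluded, blueprintPackages []string) []string {
	wanted := make(map[string]bool, len(blueprintPackages))
	for _, pkg := range blueprintPackages {
		wanted[pkg] = true
	}
	var result []string
	for _, pkg := range excluded {
		if !wanted[pkg] {
			result = append(result, pkg)
		}
	}
	return result
}

func main() {
	excluded := []string{"dracut-config-rescue", "rng-tools"}
	blueprint := []string{"rng-tools", "vim-enhanced"}
	fmt.Println(filterExcludes(excluded, blueprint)) // [dracut-config-rescue]
}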
Also, introduce a regression test to verify the bug fix and hook it into
CentOS CI (this issue was reported against RHEL, but CentOS runs on AWS,
so it is better to verify the fix there).
This uses an image created and uploaded to Azure using composer-cli,
and then terraform to spin up a Linux VM from that image, check
that the machine works, and then clean everything up.
The `cloudbuildResourcesFromBuildLog()` function from the internal GCP
package could cause a panic while parsing the log of a Build job which
failed early and didn't create any Compute Engine resources. The function
relied on the `Regexp.FindStringSubmatch()` method always returning a
match when used on the build log. Indexing into the resulting nil slice
would cause a panic in `osbuild-worker`, such as:
Stack trace of thread 185316:
#0 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#1 0x0000564e5391fa1e runtime.sigfwdgo (osbuild-worker)
#2 0x0000564e5391e354 runtime.sigtrampgo (osbuild-worker)
#3 0x0000564e5393b953 runtime.sigtramp (osbuild-worker)
#4 0x00007f37e98e3b20 __restore_rt (libpthread.so.0)
#5 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#6 0x0000564e5391f5ea runtime.crash (osbuild-worker)
#7 0x0000564e53909306 runtime.fatalpanic (osbuild-worker)
#8 0x0000564e53908ca1 runtime.gopanic (osbuild-worker)
#9 0x0000564e53906b65 runtime.goPanicIndex (osbuild-worker)
#10 0x0000564e5420b36e github.com/osbuild/osbuild-composer/internal/cloud/gcp.cloudbuildResourcesFromBuildLog (osbuild-worker)
#11 0x0000564e54209ebb github.com/osbuild/osbuild-composer/internal/cloud/gcp.(*GCP).CloudbuildBuildCleanup (osbuild-worker)
#12 0x0000564e54b05a9b main.(*OSBuildJobImpl).Run (osbuild-worker)
#13 0x0000564e54b08854 main.main (osbuild-worker)
#14 0x0000564e5390b722 runtime.main (osbuild-worker)
#15 0x0000564e53939a11 runtime.goexit (osbuild-worker)
Add a unit test covering this scenario.
Make the `cloudbuildResourcesFromBuildLog()` function more robust and
not blindly expect to find matches in the build log. As a result, the
`cloudbuildBuildResources` struct instance returned from the function
may be empty. Subsequently, make sure that the `CloudbuildBuildCleanup()`
method handles an empty `cloudbuildBuildResources` instance correctly.
Specifically, the `storageCacheDir.bucket` may be an empty string and
thus the bucket won't exist. Ensure that this does not result in an
infinite loop by checking for `storage.ErrBucketNotExist` while iterating
over the bucket objects.
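A minimal sketch of the nil-check pattern, with an illustrative regular
expression rather than the real ones from the GCP package:

package main

import (
	"fmt"
	"regexp"
)

// diskRE is an illustrative pattern for a resource line in the Daisy build
// log; the real expressions live in internal/cloud/gcp.
var diskRE = regexp.MustCompile(`Creating disk "([^"]+)"`)

// diskFromBuildLog returns the disk name found in the log, or "" when the
// build failed before creating the resource. Checking the FindStringSubmatch
// result for nil avoids the out-of-range panic seen in osbuild-worker.
func diskFromBuildLog(buildLog string) string {
	match := diskRE.FindStringSubmatch(buildLog)
	if match == nil {
		return ""
	}
	return match[1]
}

func main() {
	fmt.Printf("%q\n", diskFromBuildLog(`Creating disk "disk-importer-abc123"`)) // "disk-importer-abc123"
	fmt.Printf("%q\n", diskFromBuildLog("build failed: invalid source image"))   // ""
}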
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The GCP image import method currently uses the Cloud Build API with
Google's Daisy workflow. This workflow creates multiple GCE resources
during its execution. Although the desired Region for the imported image
is specified as a workflow argument, this has no effect on the GCE
Zone used by the workflow for the created resources. The workflow seems
to default to the "us-central1-a" Zone, and as a result, resources are
commonly exhausted in that default Zone.
Add a method which translates the provided Google Storage Region to a GCE
Region; this is needed mainly for multi and dual Storage Regions.
Add a method which returns the list of available GCE Zones for a given
GCE Region.
Modify the ComputeImageImport() method to translate the provided Google
Storage Region to a list of corresponding GCE Regions. If the provided
Storage Region is not a multi or dual Region, then the list contains only
a single item, the provided Region. Then pick a random Region from the
list. Subsequently, get the available GCE Zones within the Region and pick
a random one for use by the workflow. Specify the GCE Zone to use as a
build step argument.
This change should be completely transparent to the API user.
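A rough sketch of the selection logic, with an illustrative Region/Zone
mapping instead of real API calls:

package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// regionsForStorageRegion translates a Google Storage Region to GCE Regions.
// Multi-Regions ("us", "eu", "asia") and dual-Regions (e.g. "nam4") map to
// several Regions; the table here is illustrative, not the real mapping.
func regionsForStorageRegion(storageRegion string) []string {
	switch strings.ToLower(storageRegion) {
	case "us":
		return []string{"us-central1", "us-east1", "us-west1"}
	case "eu":
		return []string{"europe-west1", "europe-west4"}
	case "nam4":
		return []string{"us-central1", "us-east1"}
	default:
		// A regular Storage Region maps 1:1 to a GCE Region.
		return []string{strings.ToLower(storageRegion)}
	}
}

// zonesForRegion stands in for the Compute API call listing the Zones that
// are currently up in a Region.
func zonesForRegion(region string) []string {
	return []string{region + "-a", region + "-b", region + "-c"}
}

// pickImportZone picks a random Region for the Storage Region and then a
// random Zone in it, instead of letting the workflow fall back to its
// default "us-central1-a".
func pickImportZone(storageRegion string) string {
	regions := regionsForStorageRegion(storageRegion)
	region := regions[rand.Intn(len(regions))]
	zones := zonesForRegion(region)
	return zones[rand.Intn(len(zones))]
}

func main() {
	fmt.Println(pickImportZone("us")) // e.g. "us-east1-b"
}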
Signed-off-by: Tomas Hozza <thozza@redhat.com>
After a change to image-info [1], which shows the format version for qcow2
images, the `image-format` value changed from a string to a dictionary.
However, the `open_image()` function still compares it with a string. This
causes `raw` images to be needlessly converted by the script to `raw`
format again. This change fixes the issue so that `raw` images are not
converted, but used as they are.
[1] 5937b9adca
Signed-off-by: Tomas Hozza <thozza@redhat.com>