Convert some of the fields in the `RepoConfig` struct
to pointers. Since `RepoConfig` will be used to convert
custom repositories to an array of `osbuild.YumRepository`,
we need to ensure that fields that are not set explicitly
are not saved to the `/etc/yum.repos.d` repository files.
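A minimal sketch of the idea (fields trimmed down and names assumed): with pointer fields, `omitempty` can tell an unset field (nil) apart from an explicit zero value:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down RepoConfig: pointer fields distinguish "not set" (nil)
// from an explicit false/empty value, so unset fields can be skipped
// when writing the /etc/yum.repos.d files.
type RepoConfig struct {
	Name     string  `json:"name"`
	BaseURL  *string `json:"baseurl,omitempty"`
	Enabled  *bool   `json:"enabled,omitempty"`
	GPGCheck *bool   `json:"gpgcheck,omitempty"`
}

func main() {
	enabled := false
	repo := RepoConfig{Name: "custom", Enabled: &enabled}
	b, _ := json.Marshal(repo)
	// BaseURL and GPGCheck were never set, so they are omitted entirely,
	// while Enabled keeps its explicit false value.
	fmt.Println(string(b)) // {"name":"custom","enabled":false}
}
```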
Update the internal RepoConfig object to
accept a slice of baseurls rather than a
single field. This change was needed to
align RepoConfig with the dnf spec [1].
Additionally, this change adds custom json
marshal and unmarshal functions to ensure
backwards compatibility with older workers.
Add json tags to the internal rpmmd config
since this is serialized in dnfjson.
Add unit tests to check that the
serialization is correct.
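A sketch of the compatibility shim, with field and key names assumed: unmarshaling accepts both the legacy single `baseurl` and the new list, and marshaling keeps emitting the legacy field so older workers can still parse the payload:

```go
package rpmmd // hypothetical location

import "encoding/json"

// Assumed minimal shape; the real struct carries more fields.
type RepoConfig struct {
	BaseURLs []string
}

// repoConfigJSON is the wire form carrying both the legacy single
// field and the new list.
type repoConfigJSON struct {
	BaseURL  string   `json:"baseurl,omitempty"`
	BaseURLs []string `json:"baseurls,omitempty"`
}

func (r RepoConfig) MarshalJSON() ([]byte, error) {
	v := repoConfigJSON{BaseURLs: r.BaseURLs}
	if len(r.BaseURLs) > 0 {
		v.BaseURL = r.BaseURLs[0] // legacy field for older workers
	}
	return json.Marshal(v)
}

func (r *RepoConfig) UnmarshalJSON(data []byte) error {
	var v repoConfigJSON
	if err := json.Unmarshal(data, &v); err != nil {
		return err
	}
	r.BaseURLs = v.BaseURLs
	if len(r.BaseURLs) == 0 && v.BaseURL != "" {
		r.BaseURLs = []string{v.BaseURL} // accept the old single-field form
	}
	return nil
}
```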
[1] See dnf.config
Add the ListDigest to the container Spec struct and all its copies so we
can store list digests when they are available and pass them on to the
appropriate osbuild stages, sources, and inputs.
Copy the value whenever a spec is moved to a different representation.
This adds a function, CleanupOldCacheDirs, that checks the dirs under
/var/cache/osbuild-composer/rpmmd/ and removes files and directories
that don't match the current list of supported distros.
This will clean up the cache from old releases as they are retired, and
will also clean up the old top-level cache directory structure after an
upgrade.
NOTE: This function does not return errors, any real problems it
encounters will also be caught by the cache initialization code and
handled there.
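A minimal sketch of the approach, assuming the cache root and the list of supported distro names are passed in:

```go
package dnfjson // hypothetical location

import (
	"os"
	"path/filepath"
)

// CleanupOldCacheDirs removes anything under root that doesn't belong
// to a currently supported distro. It is best effort: errors are
// ignored, since the cache initialization code will catch and handle
// real problems later.
func CleanupOldCacheDirs(root string, distros []string) {
	supported := make(map[string]bool, len(distros))
	for _, name := range distros {
		supported[name] = true
	}
	entries, err := os.ReadDir(root)
	if err != nil {
		return
	}
	for _, e := range entries {
		if !supported[e.Name()] {
			// covers both retired distros and the old top-level layout
			_ = os.RemoveAll(filepath.Join(root, e.Name()))
		}
	}
}
```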
This causes dnf-json to use separate caches, one per distribution,
allowing depsolves to run in parallel with one lock per distribution.
Multiple depsolves for the same distribution in the blueprint will
continue to run serially.
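A sketch of the one-lock-per-distribution idea (names assumed): depsolves for different distros proceed in parallel, while two depsolves for the same distro queue on the same mutex:

```go
package dnfjson // hypothetical sketch

import "sync"

var (
	mu          sync.Mutex
	distroLocks = map[string]*sync.Mutex{}
)

// lockFor returns the single lock shared by all depsolves for a distro.
func lockFor(distro string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	if distroLocks[distro] == nil {
		distroLocks[distro] = &sync.Mutex{}
	}
	return distroLocks[distro]
}

func depsolve(distro string) {
	l := lockFor(distro)
	l.Lock() // serializes depsolves for the same distro...
	defer l.Unlock()
	// ...while other distros, with their separate caches, run in parallel
}
```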
These RequiredSizes are a map that is passed on to the partition table
logic, which previously had hardcoded defaults. This makes it possible
to define either no RequiredSizes (`nil`) or empty RequiredSizes, in
which case no further constraint checks or partition resizes will be
done.
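A sketch of the semantics, with hypothetical type and function names:

```go
package disk // hypothetical sketch

// Hypothetical minimal type for the sketch.
type PartitionTable struct{ sizes map[string]uint64 }

func (pt *PartitionTable) EnsureSize(mountpoint string, min uint64) {
	if pt.sizes[mountpoint] < min {
		pt.sizes[mountpoint] = min // grow, never shrink
	}
}

// applyRequiredSizes takes a map of mountpoints to minimum sizes in
// bytes. A nil or empty map means no constraint checks and no resizes,
// instead of falling back to the old hardcoded defaults.
func applyRequiredSizes(pt *PartitionTable, required map[string]uint64) {
	if len(required) == 0 {
		return
	}
	for mountpoint, min := range required {
		pt.EnsureSize(mountpoint, min)
	}
}
```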
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are a simple name change; the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns a slice of os.DirEntry, but the IsDir and Name methods work
the same. The difference is that the FileInfo must be retrieved with the
Info() method, which can also return an error.
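For example (standard library only):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	entries, err := os.ReadDir(".") // was: ioutil.ReadDir(".")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name(), e.IsDir()) // same as before
		info, err := e.Info()            // FileInfo now needs an extra call,
		if err != nil {                  // which can fail, e.g. if the file
			continue // was removed after ReadDir returned
		}
		fmt.Println(info.Size(), info.ModTime())
	}
}
```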
These were identified by running:
golangci-lint run --build-tags=integration ./...
The default number of threads (16) is fine for the general use case.
However, RH IT has asked us to lower the number of threads when
uploading images to Azure through a proxy server.
Make the number of threads configurable in the worker configuration and
default to the currently used value if it is not provided.
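A sketch of the defaulting, assuming the thread count comes from a worker config field and zero means "unset":

```go
// azureUploadThreads keeps today's behavior when nothing is configured.
func azureUploadThreads(configured int) int {
	if configured > 0 {
		return configured
	}
	return 16 // the previously hardcoded value
}
```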
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The bug wasn't caught because the PackageSets field of the repository
wasn't being copied after parsing the compose request for the test
manifest.
This should now catch future occurrences of this bug.
I use this tool quite a lot and I often want to use the CDN content, so
I would very much appreciate RHSM support. :)
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
DNF supports more than one GPG key. It is possible that one may be used
to sign packages, and another to sign the repository metadata. This
renames GPGKey to GPGKeys internally. It does not change the on-disk
repository json format.
Image types no longer report their chains. Instead, pipelines report
their packages and chains and blueprint packages are added to the
workload.
The distro.ImageType interface retains the PackageSetsChains() methods
for RHEL 7 until that is rewritten as well.
The osbuild-dnf-json-test doesn't use the PackageSetsChains() method
anymore. Instead, since it only tests the centos-8 qcow2 image, it
hardcodes the expected package set names.
Fedora 35 support was dropped, so we can update to a newer Go.
Stable RHEL 8 and 9 and Fedora 36 ship Go 1.18, so let's switch to it.
"//go:build" directives are now apparently enforced by go fmt, which is
why they were added.
Also, all the GitHub actions were adjusted to use Go 1.18.
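For example, a file guarded by the legacy comment now also carries the new directive, and go fmt keeps the two in sync:

```go
//go:build integration
// +build integration

package main
```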
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Fedora 35 is going EOL on Tue 2022-12-13. At the time of writing this commit
message, that's the next day. As we do releases on Wednesdays, the next
release will never find its way to F35 and thus, there's no point in keeping
support for it.
Let's delete everything that relates to Fedora 35. If there's something that
cannot be deleted (e.g. CI containers based on F35), let's upgrade it to F37.
TestCrossArchDepsolve now uses CentOS Stream 8 because RHEL 8.4 cannot read
F37 repository metadata. This is a similar issue to
https://bugzilla.redhat.com/show_bug.cgi?id=2004853 . Basically, newer
repositories can only be read by libmodulemd >= 2.11.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The package sets key was missing from the repo config struct, which
meant that the option was being ignored and wasn't being serialized into
the test manifest either.
Add RHSM fact to image options when generating test manifests.
We add the value "test-manifest" to the API type to indicate it's a test
manifest. This should never be registered and therefore shouldn't show
up in our data, but it's useful to detect changes and regressions in the
fact creation in the pipelines.
Change the default output directory to the one in the repo.
Originally it was set to a different directory to avoid overwriting the
manifests that had image-info, but those are long gone.
When the store is written to disk it simplifies the ImageBuild details
into a simple image type string. This works fine for composes that match
the host's distro but isn't enough detail to load composes made for
other distros, especially if the image type name isn't supported on the
host. This results in cross-distro compose results being lost after a
reboot.
This fix uses the distro information from the compose's blueprint to
determine which distro the image type should be loaded from. It assumes
that the architecture matches the host's arch -- this is currently
always true, but if cross-arch builds are added in the future, it will
need to be addressed in a different way.
newComposeFromV0, newComposesFromV0, and newStoreFromV0 now take a
pointer to the full distro registry instead of an Arch. This allows them
to access the correct image types for the distro selected by the
blueprint. When loading the composes from disk, the blueprint distro is
loaded from the registry before checking the image type string.
This means that we do not have to change the store version or on disk
format, the only thing changing is how it decides to populate the
ImageBuild when reloading the store.
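Roughly how the lookup goes while loading a compose from disk; the method names mirror the distro registry interfaces, but treat this as an illustration rather than the exact code:

```go
func imageTypeForCompose(registry *distroregistry.Registry,
	bpDistro, hostArch, typeName string) (distro.ImageType, error) {
	d := registry.GetDistro(bpDistro) // the blueprint's distro, not the host's
	if d == nil {
		return nil, fmt.Errorf("unknown distro %q in blueprint", bpDistro)
	}
	arch, err := d.GetArch(hostArch) // assumed to match the host for now
	if err != nil {
		return nil, err
	}
	return arch.GetImageType(typeName)
}
```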
A number of tests use a fake test distro using fake architecture names.
These tests have been adjusted to use a fake distro registry with
overridden host architecture that matches the fake one.
If a job is unresponsive, the worker has most likely crashed or been
shut down and the in-progress job has been lost.
Instead of failing these jobs, requeue them up to two times. Once a job is lost
a third time it fails. This avoids infinite loops.
This is implemented by extending FinishJob to RequeueOrFinishJob. It
takes a max number of requeues as an argument, and if that is 0, it has
the same behavior as FinishJob used to have.
If the maximum number of requeues has not yet been reached, then the running
job is returned to pending state to be picked up again.
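A sketch of the decision, with hypothetical names; maxRetries == 0 reproduces the old FinishJob behavior:

```go
package worker // hypothetical sketch

type jobStatus int

const (
	statusPending jobStatus = iota
	statusFailed
)

type job struct {
	retries int
	status  jobStatus
}

// requeueOrFinish returns a lost job to the pending queue until it has
// been requeued maxRetries times, then fails it for good.
func requeueOrFinish(j *job, maxRetries int) {
	if j.retries < maxRetries {
		j.retries++
		j.status = statusPending // picked up again by the next worker
		return
	}
	j.status = statusFailed // lost too many times: avoid an infinite loop
}
```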
Extend the worker's configuration to allow setting GCP Bucket to use
when uploading images to GCP. The value from the configuration is used
only if not provided in the TargetOptions of the job.
In GCP, the region of the bucket does not limit importing of the image
to a particular region. So it is completely possible to use a single
Bucket to import images to any and all regions.
Return an error in case no bucket name was set in the job nor in the
worker configuration.
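The precedence, as a sketch with assumed names:

```go
package worker // hypothetical

import "fmt"

// resolveGCPBucket: the job wins, the worker configuration is the
// fallback, and having neither is an error.
func resolveGCPBucket(jobBucket, configBucket string) (string, error) {
	if jobBucket != "" {
		return jobBucket, nil
	}
	if configBucket != "" {
		return configBucket, nil
	}
	return "", fmt.Errorf("no GCP bucket specified in the job or the worker configuration")
}
```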
Previously, the internal `OSBuildJobImpl` structure defined only a
`GCPCreds` member. This is not practical once there is more than one
GCP-related variable.
Define a new `GCPConfiguration` structure, move the credentials variable
into it and use it in `OSBuildJobImpl` instead.
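An assumed shape of the change:

```go
// GCPConfiguration groups all GCP-related worker settings.
type GCPConfiguration struct {
	Creds  string
	Bucket string
	// room for future GCP-related fields
}

type OSBuildJobImpl struct {
	GCPConfig GCPConfiguration // was: GCPCreds string
	// ...other fields unchanged
}
```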
There is a desire to make the worker as "dumb" as possible. Therefore,
the worker should not generate AWS object key names if one was not
provided in the job.
Modify the worker code to never generate the AWS object key and instead
set an error if the object key was not provided.
Modify the Weldr API implementation to generate the object key if it was
not provided by the user. This is consistent with the Cloud API
implementation.
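A sketch of the split; the names and the key format here are made up for illustration:

```go
package weldr // hypothetical

import (
	"errors"
	"fmt"

	"github.com/google/uuid"
)

// Weldr API side: fill in an object key when the user didn't set one.
func ensureObjectKey(key string) string {
	if key == "" {
		key = fmt.Sprintf("composer-%s", uuid.New().String())
	}
	return key
}

// Worker side: require a key, never invent one.
func validateObjectKey(key string) error {
	if key == "" {
		return errors.New("no AWS object key provided in the job")
	}
	return nil
}
```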
Flip the logic when deciding whether to use the Bucket from the job or worker
configuration. Previously, the Bucket from the worker configuration was
always preferred if it was set, even if it was provided in the job
itself. This made it impossible to override the configuration.
Change the logic to use the Bucket from the worker configuration only if
it was not set in the job.
Report an error if no bucket name was provided with the job and there is
also none specified in the configuration.
Instead of using the ostree.RequestParams in the OSTreeImageOptions,
define a new struct specific to ImageOptions for the ostree parameters.
This is almost identical to the new ostree.CommitSpec, but the meaning
of the parameters changes based on the image type, and it would not be
clear if the CommitSpec were used in all cases. For example, the
parameters of the new OSTreeImageOptions do not always refer to the same
commit: the URL and Checksum may point to a parent commit to be pulled
in as the base for the new commit, while the Ref refers to the new
commit that will be built (which may have a different ref from the
parent).
The ostree.ResolveParams() function now returns two strings, the
resolved ref, which is replaced by the defaultRef if it's not specified
in the request, and the resolved parent checksum if a URL is specified.
The URL does not need to be returned since it's always the same as the
one specified in the request.
The function has been rewritten to make the logic more clear.
The docstring for the function has been rewritten to cover all use cases
and error conditions.
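An assumed shape of the new signature; the URL isn't returned because it never changes from the request, and resolveRef stands in for the actual remote lookup:

```go
func ResolveParams(params RequestParams, defaultRef string) (ref, parentChecksum string, err error) {
	ref = params.Ref
	if ref == "" {
		ref = defaultRef // no ref in the request: use the image type default
	}
	if params.URL != "" {
		// a URL means there is a parent commit to pull: resolve its checksum
		parentChecksum, err = resolveRef(params.URL, ref)
	}
	return ref, parentChecksum, err
}
```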
Currently, errors like clienterror 28 ("at least one target failed")
have all the relevant information in the details, so don't omit these
details from the worker logs.
If the object is marked as public, its direct download URL will be returned
instead of the presigned one.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
By setting the object's ACL to "public-read", anyone can download the object
even without authenticating with AWS.
The osbuild-upload-generic-s3 command got a new -public argument that
uses this new feature.
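Roughly how the upload side looks with aws-sdk-go's s3manager; a canned "public-read" ACL on upload makes the object downloadable without AWS credentials (the surrounding variables are assumed):

```go
uploader := s3manager.NewUploader(sess)
_, err := uploader.Upload(&s3manager.UploadInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
	Body:   file,
	ACL:    aws.String("public-read"),
})
```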
Signed-off-by: Ondřej Budai <ondrej@budai.cz>