Instead of using the ostree.RequestParams in the OSTreeImageOptions,
define a new struct specific to ImageOptions for the ostree parameters.
This is almost identical to the new ostree.CommitSpec, but the meaning
of the parameters changes based on image type, and the intent would not
be clear if the CommitSpec were used in all cases. For example, the
parameters of the new OSTreeImageOptions do not always refer to the same
commit: the URL and Checksum may point to a parent commit to be pulled
in and used as the base for the new commit, while the Ref refers to the
new commit that will be built (which may have a different ref from the
parent).
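A minimal sketch of the new struct, with illustrative field names (the
real definition may differ):

```go
// OSTreeImageOptions carries the ostree parameters for an image build.
// Note that the fields don't all refer to the same commit: URL/Checksum
// describe the (optional) parent, while Ref names the commit being built.
type OSTreeImageOptions struct {
	Ref      string // ref of the new commit that will be built
	URL      string // repo URL of a parent commit to pull in, if any
	Checksum string // checksum of that parent commit
}
```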
The ostree.ResolveParams() function now returns two strings: the
resolved ref, which falls back to the defaultRef if it's not specified
in the request, and the resolved parent checksum if a URL is specified.
The URL does not need to be returned since it's always the same as the
one specified in the request.
The function has been rewritten to make the logic clearer.
The docstring for the function has been rewritten to cover all use cases
and error conditions.
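A sketch of the resulting behaviour (resolveRef is a hypothetical
helper that resolves a ref in the remote repo to its commit checksum):

```go
func ResolveParams(params RequestParams, defaultRef string) (ref, checksum string, err error) {
	ref = params.Ref
	if ref == "" {
		ref = defaultRef // ref not specified in the request
	}
	if params.URL != "" {
		// a URL means there is a parent commit to pull in; resolve
		// its checksum from the remote repository
		checksum, err = resolveRef(params.URL, ref)
	}
	return ref, checksum, err
}
```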
This is the first step towards supporting embedded container images.
Here we add the `containers []container.Spec` argument to supply images
with resolved container specifications. For now, all distros return an
error if a container is actually supplied, since none of them currently
support embedding containers. NB: no APIs or tools actually resolve
containers yet.
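Roughly, each distro's Manifest implementation grows along these lines
(the surrounding parameters are elided and illustrative):

```go
func (t *imageType) Manifest(c *blueprint.Customizations, options distro.ImageOptions,
	repos []rpmmd.RepoConfig, packageSpecSets map[string][]rpmmd.PackageSpec,
	containers []container.Spec, seed int64) (distro.Manifest, error) {

	// no distro supports embedding containers yet
	if len(containers) > 0 {
		return nil, fmt.Errorf("embedding containers is not supported for %s on %s",
			t.name, t.arch.name)
	}
	// ...
}
```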
The package sets for an image can depend on the blueprint, and
by the same logic there is no reason it should not be able to
depend on the image options.
This is so far a non-functional change, but makes a follow-up
commit simpler (though still without actually depending on
the image options to compute the package sets).
The `rpmmd.RepoConfig` configuration supports setting "package sets"
for each repository, which allows associating individual repos with
specific package sets. Add a new `package_set` option to the repo
configuration of the compose request so that this feature can be used.
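In Go terms, the compose request's repository type gains a field along
these lines (field and tag names are illustrative):

```go
// Repository as accepted in the compose request (existing fields elided).
type Repository struct {
	Baseurl *string `json:"baseurl,omitempty"`
	// ...

	// PackageSets associates this repository with the named package
	// sets; exposed in the request as the new `package_set` option.
	PackageSets []string `json:"package_set,omitempty"`
}
```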
Added a CleanCache() method to the solver that deletes all the caches
if the total size grows above a certain (configurable) limit
(default: 500 MiB).
The method is called externally so the caller can handle errors (usually
by logging or ignoring them completely) and to avoid calling it multiple
times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
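A minimal sketch of the idea, assuming the solver carries its cache
directory and a size limit (field and helper names are illustrative):

```go
import (
	"os"
	"path/filepath"
)

// CleanCache deletes the whole cache directory if its total size exceeds
// the configured limit. Callers decide how to handle the returned error.
func (s *Solver) CleanCache() error {
	size, err := dirSize(s.cacheDir)
	if err != nil {
		return err
	}
	if size <= s.maxCacheSize { // default: 500 MiB
		return nil
	}
	// placeholder strategy: drop everything rather than evict selectively
	return os.RemoveAll(s.cacheDir)
}

// dirSize sums the sizes of all regular files under root.
func dirSize(root string) (int64, error) {
	var size int64
	err := filepath.Walk(root, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.Mode().IsRegular() {
			size += info.Size()
		}
		return nil
	})
	return size, err
}
```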
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.
Move package set chain collation to the distro package and add
repositories to the package sets while returning the package sets from
their source, i.e., the ImageType.PackageSets() method.
This also removes the concept of "base repositories". There are no
longer repositories that are added implicitly to all package sets but
instead each package set needs to specify *all* the repositories it will
be depsolved against.
This paves the way for the requirement we have for building RHEL 7
images with a RHEL 8 build root. The build root package set has to be
depsolved against RHEL 8 repositories without any "base repos" included.
This is now possible since package sets and repositories are explicitly
associated from the start and there is no implicit global repository
set.
The change requires adding a list of PackageSet names to the core
rpmmd.RepoConfig. In the cloud API, repositories that are limited to
specific package sets already contain the correct package set names and
these are now copied to the internal RepoConfig when converting types in
genRepoConfig().
The user-specified repositories are only associated with the payload
package sets like before.
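Sketch of the change to the core type (other fields elided):

```go
type RepoConfig struct {
	Name    string
	BaseURL string
	// ...

	// PackageSets lists the names of the package sets this repository
	// should be used with when collating package set chains. There is
	// no implicit global repository set any more.
	PackageSets []string
}
```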
Attach the repository configurations that are specific to a package set
directly on the PackageSet object. This simplifies the Depsolve()
signature and avoids requiring a `nil` when no additional repositories
are required. More importantly, it makes the association between
repositories and package sets explicit, no longer relying on matching
array indices or map keys.
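A sketch of the new shape and the (roughly) simplified call:

```go
type PackageSet struct {
	Include      []string
	Exclude      []string
	Repositories []RepoConfig // repos this set is depsolved against
}

// before: solver.Depsolve(pkgSets, extraRepos) // nil when none needed
// after:  solver.Depsolve(pkgSets)             // repos travel with each set
```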
Remove the single Depsolve function from the dnfjson package and the
depsolve command from the dnf-json tool. The new ChainDepsolve
functions and chain-depsolve command can handle single depsolves in the
same way so there's no need to keep (and have to maintain) two versions
of very similar code.
The ChainDepsolve function (in Go) and chain-depsolve command (in
Python) have been renamed to plain Depsolve and depsolve respectively,
since they are now general purpose depsolve functions.
Search for (and set) the path to dnf-json by checking a few known
locations:
- ./dnf-json: for situations when the tool is run from the source tree.
This is checked first to prioritise local changes.
- /usr/libexec/osbuild-composer/dnf-json: the default install location
of the script when osbuild-composer is installed.
- /usr/lib/osbuild-composer/dnf-json: the default install location of
the script for distributions which don't use /usr/libexec.
The function panics with an informative error message when it fails to
find dnf-json.
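A sketch of the lookup, in the order described above (the real function
name may differ):

```go
import (
	"fmt"
	"os"
)

func findDnfJsonBin() string {
	locations := []string{
		"./dnf-json", // source tree first, to prioritise local changes
		"/usr/libexec/osbuild-composer/dnf-json",
		"/usr/lib/osbuild-composer/dnf-json",
	}
	for _, loc := range locations {
		if _, err := os.Stat(loc); err == nil {
			return loc
		}
	}
	panic(fmt.Sprintf("could not find dnf-json in any known location: %v", locations))
}
```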
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. Before running a
Depsolve() or FetchMetadata(), the BaseSolver's NewWithConfig() method
is used to create an initialised (configured) solver with the platform
variables (module platform ID, release version, and arch).
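Illustrative usage, assuming a NewBaseSolver constructor and the method
names described above:

```go
// at startup: carries the cache dir and system repo credentials
bs := dnfjson.NewBaseSolver(cacheDir)

// per request: configure with the platform variables, then depsolve
solver := bs.NewWithConfig(modulePlatformID, releaseVer, arch)
pkgSpecs, err := solver.Depsolve(pkgSets)
```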
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Now that we're using the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
Almost all repo configurations used for generating image test cases
using `osbuild-pipeline` have `name` defined. Make sure that the repo
name provided in the compose request is used when depsolving.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The service is started via systemd activation sockets.
The service serves HTTP POST requests; the same JSON as before is
expected as the request body, and the same JSON as before is sent as
the response.
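The tool itself is a Python script, but the socket-activation pattern
looks roughly like this Go sketch (handler body elided; the JSON
request/response types are the pre-existing ones):

```go
package main

import (
	"log"
	"net/http"

	"github.com/coreos/go-systemd/v22/activation"
)

func handleRequest(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "POST only", http.StatusMethodNotAllowed)
		return
	}
	// decode the same JSON as before from r.Body, run the depsolve,
	// and write the same JSON as before to w
}

func main() {
	// systemd passes the listening socket via a file descriptor
	listeners, err := activation.Listeners()
	if err != nil || len(listeners) != 1 {
		log.Fatalf("expected exactly one activation socket, got %d (err: %v)", len(listeners), err)
	}
	http.HandleFunc("/", handleRequest)
	log.Fatal(http.Serve(listeners[0], nil))
}
```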
The problem: osbuild-composer used to have rather incomplete logic for
selecting client certificates and keys when fetching data from
repositories that use the "subscription model". In this scenario, every
repo requires the user to use a client-side TLS certificate. The problem
is that every repo can use its own CA and require a different
certificate/key pair. This case wasn't handled at all in composer.
Furthermore, osbuild-composer can use remote workers which complicates
things even more.
Assumptions: The problem outlined above is hard to solve in the general
case, but Red Hat Subscription Manager places certain limitations on how
subscriptions might be used. For example, a subscription must be tied to
a host system, so there is no way to use such a repository in
osbuild-composer without it being available on the host system as well.
Also, if a user wishes to use a certain repository in osbuild-composer,
it must be available on both hosts: the composer and the worker. It will
come with a different client certificate and key pair, but otherwise its
configuration remains the same.
The solution: Expect all the subscriptions to be registered in the
/etc/yum.repos.d/redhat.repo file. Read the mapping of URLs to certificates
and keys from there and use it. Don't change the manifest format and let
osbuild guess the appropriate subscription to use.
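A rough sketch of reading that mapping, using gopkg.in/ini.v1 for
parsing (an assumed dependency; the real implementation may parse the
file differently):

```go
import "gopkg.in/ini.v1"

type repoSecrets struct {
	SSLCACert string
	SSLCert   string
	SSLKey    string
}

// loadSubscriptionSecrets maps each baseurl in redhat.repo to the client
// certificate and key registered for it by subscription-manager.
func loadSubscriptionSecrets(path string) (map[string]repoSecrets, error) {
	f, err := ini.Load(path)
	if err != nil {
		return nil, err
	}
	secrets := map[string]repoSecrets{}
	for _, sec := range f.Sections() {
		url := sec.Key("baseurl").String()
		if url == "" {
			continue
		}
		secrets[url] = repoSecrets{
			SSLCACert: sec.Key("sslcacert").String(),
			SSLCert:   sec.Key("sslclientcert").String(),
			SSLKey:    sec.Key("sslclientkey").String(),
		}
	}
	return secrets, nil
}
```

Both the composer and the worker would call this against their own
/etc/yum.repos.d/redhat.repo.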
The same test is run in distro/distro_test.go. The redundancy was
probably caused by bitrot over several commits.
I decided to remove the test from distro implementations to reduce the amount
of duplicated code.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This commit adds a NewDefault() method to distroregistry that returns a
Registry with all distributions supported by osbuild-composer. This way,
there's only one place where a distribution needs to be defined when its
support is added to composer.
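Illustrative usage (the lookup method name is assumed):

```go
registry := distroregistry.NewDefault()
d := registry.GetDistro("fedora-33")
```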
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
My goal is to add a method to distroregistry that returns a Registry
with all supported distributions. This way, all supported distributions
would be defined in only one place.
To achieve this, the Registry must live outside the distro package: the
distro implementations depend on the distro package, so importing them
from there would create a circular dependency, which Go does not allow.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This replaces Packages() and BuildPackages() with a single method
returning a map of package sets, the semantics of which are up to the
distro to define. They are meant to be depsolved and the results passed
back to Manifest() as a map with the same keys.
No functional change.
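A sketch of the new shape in the ImageType interface (other methods
elided; exact parameters may differ):

```go
type ImageType interface {
	// ...

	// PackageSets returns named sets of packages to depsolve; what
	// each name means is up to the distro.
	PackageSets(bp blueprint.Blueprint) map[string]rpmmd.PackageSet

	// Manifest receives the depsolved results keyed by the same names.
	Manifest(c *blueprint.Customizations, options ImageOptions,
		repos []rpmmd.RepoConfig,
		packageSpecSets map[string][]rpmmd.PackageSpec,
		seed int64) (Manifest, error)
}
```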
Signed-off-by: Tom Gundersen <teg@jklm.no>
The image definition is shared with the latest RHEL 8.y one (8.4
currently). I expect that with the introduction of 8.5 support, we will
point the centos 8 distro at it.
The test repositories and manifests use the official CentOS composes. From
what I can tell, they are persistent. This is not guaranteed though, so we
might need to switch to RPMRepo at some point.
The "classic" CentOS 8 should also be buildable but due to the chicken and egg
issue (this commit will get into Centos "8.4" but Centos "8.4" isn't a thing
yet), we cannot test it and therefore it might be broken.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Imagine this situation: You have a RHEL system booted from an image produced
by osbuild-composer. On this system, you want to use osbuild-composer to
create another image of RHEL.
However, there's currently something funny with partitions:
All RHEL images built by osbuild-composer contain a root xfs partition. The
interesting bit is that they all share the same xfs partition UUID. This might
sound like a good thing for reproducibility but it has a quirk.
The issue appears when osbuild runs the qemu assembler: it needs to mount all
partitions of the future image to copy the OS tree into it.
Imagine that osbuild-composer is running on a system booted from an
image produced by osbuild-composer. This means that its root xfs
partition has this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
When osbuild-composer builds an image on this system, it runs osbuild that
runs the qemu assembler at some point. As I said previously, it will mount
all partitions of the future image. That means that it will also try to
mount the root xfs partition with this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
Do you remember this one? Yeah, it's the same one as before. However, the xfs
kernel driver doesn't like that. It contains a global table[1] of all
xfs partitions that forbids mounting two xfs partitions with the same
uuid.
I mean... uuids are meant to be unique, right?
This commit changes the way we build RHEL 8.4 images: Each one now has a
unique uuid. It's now literally a unique universally unique identifier. haha
[1]: a349e4c659/fs/xfs/xfs_mount.c (L51)
cockpit-composer can now build rhel 8.4 images. Our distro name for
rhel 8.4 is rhel-84 unlike prior rhel releases which fall
under the umbrella name rhel-8. rhel 8.4 still uses the same
repos as the rest of the rhel 8 releases but points to a different
nightly repo for testing purposes. Test cases are added. The changes
between rhel 8.3 and 8.4 are as follows:
There is now a hybrid boot partition scheme for x86_64. x86_64 images
now use uefi boot and have 3 gpt partitions: a small unformatted
partition for mbr compatibility, an efi boot partition of type vfat, and
a root partition of type xfs. The packages grub2-efi-x64 and shim-x64
are added as bootloader packages for all x86_64 images.
For qcow2 images ro is added as a kernel option and the following
packages are added (+) or removed (-):
+ dosfstools
+ efi-filesystem
+ efivar
+ efivar-libs
+ grub2-efi-x64
+ shim-x64
- rhn-client-tools
- rhnlib
- rhnsd
- rhn-setup
We no longer release into F31, and the right specfile was not being
tested anyway.
This allows us to remove a workaround that updates the VMs during
deploy, and other fedora-31 specific hacks.
Fedora 33 images can now be built and test cases are added for the new
images. The fedora 33 qcow2 and vmdk images are based on the official
images and their kickstarts found here:
https://pagure.io/fedora-kickstarts. The fedora 33 iot image is based on
the config found here: https://pagure.io/fedora-iot/ostree.
The openstack, azure, and amazon image types have changes made to them
based on the changes made to the qcow2. The changes between fedora 32
and fedora 33 are as follows:
Grub now loads its kernel command line options from
/etc/kernel/cmdline, /usr/lib/kernel/cmdline, and /proc/cmdline instead
of from grub env. This is addressed by adding kernelCmdlineStageOptions
to use osbuild's kernel-cmdline stage to set these options. Alongside
`ro biosdevname=0 net.ifnames=0`, we also set `no_timer_check
console=tty1 console=ttyS0,115200n8` per what is set in the official
qcow2. For azure and amazon, the kernelOptions are still set as they
were in fedora 32.
The timezone is now set to UTC if a user does not set a timezone in the
blueprint customizations. Also, the hostname is set to
localhost.localdomain if the hostname isn't set in the blueprint.
Finally, the following packages have been removed:
polkit
geolite2-city
geolite2-country
zram-generator-defaults
Rather than getting a set of base packages from the ImageType, and then
appending the requested packages from the blueprint, pass the blueprint
into the new Packages() function, and return the full set of packages to
be depsolved.
This allows us to also append packages based on other customizations
too, and use that to append chrony when the timezone is set. This
matches the behavior anaconda had, and there was a TODO item to do this,
which had been overlooked.
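A sketch of the new hook (helper names are illustrative):

```go
func (t *imageType) Packages(bp blueprint.Blueprint) []string {
	// start from the image type's base set, then add the blueprint's
	packages := append(t.basePackages(), bp.GetPackages()...)

	// match anaconda's behaviour: configuring a timezone pulls in chrony
	if timezone, _ := bp.Customizations.GetTimezoneSettings(); timezone != nil {
		packages = append(packages, "chrony")
	}
	return packages
}
```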
Fixes #787.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Previously, all the osbuild-composer tools had to be run from a
directory containing dnf-json. This was often confusing, especially with
the dnf-json-tests. This commit changes the path to be absolute, so this
is no longer an issue.
RPMMD had a hardcoded path to the dnf-json helper. This required all
executables using RPMMD to be run in the directory where dnf-json was
located. This commit makes RPMMD take the path to dnf-json as an
argument. This allows its consumers to specify whichever path they want.
Not a functional change
This is how it is used in the rest of the code, as a name to represent
the repository in the weldr API. Rename to match its use, and avoid
confusion with the ID passed to dnf-json, which is not the same.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When generating an osbuild manifest for an image type, we take a
customizations struct, which specifies the image-type-independent
customizations to apply. We also take the size argument, which is
specific to the image build and not part of the blueprint.
Introduce a new argument, ImageOptions, which for now just wraps the
size argument. These options are specific to the image build/type, and
therefore do not belong with the other customizations.
For now this is a non-functional change, but follow-up commits will
introduce more types of image options.
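Sketched, per the description above:

```go
// ImageOptions specify options relevant to the image build/type,
// as opposed to blueprint customizations.
type ImageOptions struct {
	Size uint64 // image size in bytes; not part of the blueprint
}
```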
Signed-off-by: Tom Gundersen <teg@jklm.no>
As it turns out, the default expectation is not to distinguish between
these. We will now produce whatever is the most recent minor release by
default, and image tests will still be pinned at a given snapshot to be
reproducible.
Signed-off-by: Tom Gundersen <teg@jklm.no>
In PR#395 we discussed the spelling of archs vs. arches and we agreed
to use arches. This patch only renames the public method `ListArchs` in
the `Distro` interface.
Require repository information to be passed as input, rather than
read from the current directory.
Reading the repository information meant to be used by weldr
has several drawbacks:
- it makes it impractical to use the tool outside a git checkout
- it makes it awkward to adapt the repositories to different use
cases
- it means that the shipped repositories cannot be extended with
update repos, as the same repos are used for testing, and that
would render our tests non-reproducible.
Overall, we are moving towards making repositories something the
caller must always pass in, rather than something that composer
maintains. For the weldr API we need to keep working as before,
but for new APIs we are avoiding that.
Signed-off-by: Tom Gundersen <teg@jklm.no>
According to the new guidelines in docs/errors.md.
Note that this does not include code that marshals to a writer that
might fail (when a connection drops, for example).
The following commit will introduce support for forced architecture in
dnf-json. The APIs already have this kind of information, so we can
simply pass it to the Depsolve and FetchMetadata functions.
Require the caller to pass in the required distros explicitly. This
would allow us to easily add distros in osbuild-pipeline and tests
before exposing them in composer itself, for instance.
This means there is no longer a dependency from the distro package
to each of the individual distros, so the distros are now able
to depend on the distro package for types and interfaces.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Mixing the way to build a distribution with where to get the source
packages from is wrong: it breaks pre-release repos, local mirrors, and
other use cases. To accommodate those, we introduced
`/etc/osbuild-composer/repositories`.
However, that doesn't work for the RCM API, which receives repository
URLs to use from outside requests. This API has been wrongly using the
`additionalRepos` parameter to inject those repos. That's broken,
because the resulting manifests contained both the installed repos and
the repos from the request.
To fix this, stop exposing repositories from the distros, but require
passing them on every call to `Manifest()`. This makes `additionalRepos`
redundant.
Fixes #341
This makes two changes simultaneously, to avoid too much churn:
- move accessors from being on the blueprint struct to the
customizations struct, and
- pass the customizations struct rather than the whole blueprint
as argument to distro.Manifest().
@larskarlitski pointed out in a previous review that it feels
redundant to pass the whole blueprint as well as the list of
packages to the Manifest function. Indeed it is, so this
simplifies things a bit.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between generating
the pipeline and calling osbuild. We achieved this by always updating
to the most recent metadata on every call to rpmmd.Depsolve that
would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make the blueprint parameter a bool; if set, a blueprint is read
from stdin, otherwise an empty blueprint is used.
Signed-off-by: Tom Gundersen <teg@jklm.no>