The LoadRepositories function iterates over a list of paths and expects
to find a distro configuration in one of them. The case where no path
contains a valid configuration was not handled. This patch introduces
the check.
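For illustration, the shape of the new check might look like this
(hypothetical types, helper, and file layout, not the real signatures):

    package rpmmd

    import (
        "fmt"
        "path/filepath"
    )

    type Repository struct {
        Name    string
        BaseURL string
    }

    // loadRepositoriesFromFile stands in for the real per-file loader.
    func loadRepositoriesFromFile(path string) (map[string][]Repository, error) {
        // ... read and parse the JSON file at path ...
        return nil, fmt.Errorf("cannot read %s", path)
    }

    // LoadRepositories tries each path in turn; the final return is the
    // previously missing "nothing found anywhere" error.
    func LoadRepositories(confPaths []string, distro string) (map[string][]Repository, error) {
        for _, p := range confPaths {
            repos, err := loadRepositoriesFromFile(filepath.Join(p, distro+".json"))
            if err == nil {
                return repos, nil
            }
        }
        return nil, fmt.Errorf("no valid configuration found for %q in %v", distro, confPaths)
    }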
Without passing in a cachedir, dnf would create a random one for every
invocation. This meant that caches were never reused, nor cleaned up
properly.
Let systemd create a cache directory for us in /var/cache/ (via the
CacheDirectory= unit setting) and use it through the CACHE_DIRECTORY
environment variable systemd sets for us.
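A minimal sketch of the lookup, assuming the unit sets CacheDirectory=
so that systemd exports CACHE_DIRECTORY:

    package main

    import (
        "log"
        "os"
    )

    // cacheDirectory returns the directory systemd allocated via
    // CacheDirectory= (exported as $CACHE_DIRECTORY), e.g.
    // /var/cache/osbuild-composer.
    func cacheDirectory() string {
        dir := os.Getenv("CACHE_DIRECTORY")
        if dir == "" {
            log.Fatal("CACHE_DIRECTORY not set; not running under systemd with CacheDirectory=?")
        }
        return dir
    }

    func main() {
        log.Println("dnf cachedir:", cacheDirectory())
    }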
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is not currently used. Drop it for now; when we do use it, it
should be reintroduced with the right name to avoid clashing with
osbuild-composer (they are owned by different users, so it cannot
be shared).
Signed-off-by: Tom Gundersen <teg@jklm.no>
As the worker can now run on a different machine than the composer,
it makes sense to install only the worker binary on some machines. This
commit does exactly that: the worker is now its own subpackage with the
beautiful name of golang-github-osbuild-composer-worker.
The main osbuild-composer package requires the worker subpackage, so a
worker will always be installed alongside composer. When composer is
started, one local worker process is spawned. If you don't want the
default worker process, mask its unit file:
systemctl mask osbuild-worker@1.service
There's a use case for running workers on a different machine than
the composer, for example when images need to be built for an
architecture different from the one the composer is running on. Although
osbuild has some support for cross-architecture builds, we still
consider it an experimental, not-yet-production-ready feature.
This commit adds support for the composer and worker to communicate using TCP.
To ensure safe communication through the wild world of the Internet, TLS is
not only supported but required when using TCP. Both server and client
TLS authentication are required. This means both sides must have their own
private key/certificate pair and both certificates must be signed by the
same certificate authority. Examples of how to generate all this fancy
crypto stuff can be found in the Makefile.
Changes on the composer side:
When osbuild-remote-worker.socket is started before osbuild-composer.service,
osbuild-composer also serves the job queue API on this socket. The unix
domain socket is not affected by this change; it is enabled at all times,
independently of the remote one. osbuild-remote-worker.socket listens on
TCP port 8700 by default.
When running the composer with the remote worker socket enabled, the
following files are required (a sketch of the server-side setup follows
the list):
- /etc/osbuild-composer/ca-crt.pem (CA certificate)
- /etc/osbuild-composer/composer-key.pem (composer private key)
- /etc/osbuild-composer/composer-crt.pem (composer certificate)
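For illustration, a minimal sketch of the server side with the Go
standard library, assuming the paths above and the default port (not
the actual composer code; error handling abbreviated):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
    )

    func main() {
        caCert, err := os.ReadFile("/etc/osbuild-composer/ca-crt.pem")
        if err != nil {
            log.Fatal(err)
        }
        caPool := x509.NewCertPool()
        caPool.AppendCertsFromPEM(caCert)

        cert, err := tls.LoadX509KeyPair(
            "/etc/osbuild-composer/composer-crt.pem",
            "/etc/osbuild-composer/composer-key.pem")
        if err != nil {
            log.Fatal(err)
        }

        // Require a client certificate signed by the same CA (mutual TLS).
        listener, err := tls.Listen("tcp", ":8700", &tls.Config{
            Certificates: []tls.Certificate{cert},
            ClientAuth:   tls.RequireAndVerifyClientCert,
            ClientCAs:    caPool,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer listener.Close()
        // ... hand the listener to the job queue API server ...
    }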
Changes on the worker side:
osbuild-worker now has a --remote argument that takes the address of a
composer instance. When present, the worker tries to establish a
TLS-secured TCP connection to the composer. When not present, the worker
uses the unix domain socket method. The unit template file
osbuild-remote-worker was added to simplify spawning workers. For example
systemctl start osbuild-remote-worker@example.com
starts a worker which attempts to connect to the composer instance
running at the address example.com.
When running the worker with the --remote argument, the following files
are required (a client-side sketch follows the list):
- /etc/osbuild-composer/ca-crt.pem (CA certificate)
- /etc/osbuild-composer/worker-key.pem (worker private key)
- /etc/osbuild-composer/worker-crt.pem (worker certificate)
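And the mirror image on the worker side, again only a sketch assuming
the paths above:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
    )

    func main() {
        caCert, err := os.ReadFile("/etc/osbuild-composer/ca-crt.pem")
        if err != nil {
            log.Fatal(err)
        }
        caPool := x509.NewCertPool()
        caPool.AppendCertsFromPEM(caCert)

        cert, err := tls.LoadX509KeyPair(
            "/etc/osbuild-composer/worker-crt.pem",
            "/etc/osbuild-composer/worker-key.pem")
        if err != nil {
            log.Fatal(err)
        }

        // The worker presents its own certificate and verifies the
        // composer's against the shared CA.
        conn, err := tls.Dial("tcp", "example.com:8700", &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      caPool,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // ... run the job loop over conn ...
    }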
By default, osbuild-composer.service will always spawn one local worker.
If you don't want it, mask the default worker unit:
systemctl mask osbuild-worker@1.service
Closing remarks:
Remember that both the composer and worker certificates must be signed
by the same CA!
This is needed for unit tests, because it wasn't possible to mock the
rpmmd module before. It also requires that the checksum is moved to
the compose request and evaluated in the endpoint handler instead of
in push compose. I think it makes sense to have the checksum in the
compose request directly.
Also, a "module platform ID" is now required, but we don't have the
"global" distribution any more, so this patch introduces a mapping from
a distribution to its module platform ID.
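The mapping might look roughly like this; the IDs follow DNF's
platform:fNN convention, but the exact entries here are illustrative:

    package main

    import (
        "fmt"
        "log"
    )

    // modulePlatformIDs maps a distribution to the module platform ID
    // that dnf needs for depsolving. Entries are illustrative.
    var modulePlatformIDs = map[string]string{
        "fedora-30": "platform:f30",
        "fedora-31": "platform:f31",
        "rhel-8.2":  "platform:el8",
    }

    func modulePlatformID(distro string) (string, error) {
        id, ok := modulePlatformIDs[distro]
        if !ok {
            return "", fmt.Errorf("unknown distribution: %s", distro)
        }
        return id, nil
    }

    func main() {
        id, err := modulePlatformID("fedora-31")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(id) // platform:f31
    }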
The name was misleading because the function could do more than just
download the package list. In PushComposeRequest it is also used to
fetch checksums for the repositories, so I decided to rename it to
reflect this usage.
The change also requires adjustments to the error handling, as some
errors are now handled automatically by the custom unmarshaler.
Include a note about HTTP return types.
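For illustration, a custom unmarshaler of this kind can reject bad
input during decoding, so the handler no longer needs its own check
(the type and its values here are made up):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ComposeType wraps a string that must be one of the known types.
    type ComposeType string

    func (t *ComposeType) UnmarshalJSON(data []byte) error {
        var s string
        if err := json.Unmarshal(data, &s); err != nil {
            return err
        }
        switch s {
        case "qcow2", "tar", "ami":
            *t = ComposeType(s)
            return nil
        default:
            // Invalid values now fail here, during decoding, instead
            // of needing an explicit check in the endpoint handler.
            return fmt.Errorf("invalid compose type: %q", s)
        }
    }

    func main() {
        var t ComposeType
        err := json.Unmarshal([]byte(`"floppy"`), &t)
        fmt.Println(err) // invalid compose type: "floppy"
    }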
There was a bug in the previous implementation: it passed the argument
by value, which does not work because we need to change its value. The
new implementation passes it by reference.
Create a test to cover this scenario.
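The difference in a nutshell (the setVersion names are made-up
stand-ins for the real function):

    package main

    import "fmt"

    type pkg struct{ Version string }

    // Broken: p is a copy, so the caller never sees the change.
    func setVersionByValue(p pkg, v string) { p.Version = v }

    // Fixed: p points at the caller's value, so the change persists.
    func setVersionByReference(p *pkg, v string) { p.Version = v }

    func main() {
        a := pkg{Version: "*"}
        setVersionByValue(a, "1.0")
        fmt.Println(a.Version) // still "*"

        setVersionByReference(&a, "1.0")
        fmt.Println(a.Version) // "1.0"
    }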
If the Epoch is > 0, it should be added to the front of the version,
separated by a colon.
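Roughly, assuming the usual epoch:version-release.arch layout:

    package main

    import "fmt"

    // evra renders epoch:version-release.arch, omitting the epoch
    // when it is zero.
    func evra(epoch uint, version, release, arch string) string {
        if epoch > 0 {
            return fmt.Sprintf("%d:%s-%s.%s", epoch, version, release, arch)
        }
        return fmt.Sprintf("%s-%s.%s", version, release, arch)
    }

    func main() {
        fmt.Println(evra(0, "2.5", "1.fc30", "x86_64")) // 2.5-1.fc30.x86_64
        fmt.Println(evra(2, "2.5", "1.fc30", "x86_64")) // 2:2.5-1.fc30.x86_64
    }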
Also include a depsolve package with a non-zero Epoch and adjust the
tests accordingly.
Without making a deep copy of the blueprint, the changes made to the
package and module versions persist in memory, causing the blueprint to
lose its package and module version globs.
This can be seen by executing a freeze request and then a depsolve: the
blueprint included in the depsolve has its version globs replaced by the
frozen EVRA values.
The Blueprint struct is complex, deep, and full of references. This
means that any changes to it in memory will persist. Sometimes you need
an actual copy of it, so this adds DeepCopy, which uses the json.Marshal
and Unmarshal functions to create a deep copy with no references to the
original.
This is not very efficient, but the alternative is adding Copy functions
to all the member structs and then calling them to build the copy.
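The trick looks roughly like this (a sketch with trimmed-down types;
the real method's error handling may differ):

    package blueprint

    import "encoding/json"

    type Blueprint struct {
        Name     string    `json:"name"`
        Packages []Package `json:"packages"`
        Modules  []Package `json:"modules"`
    }

    type Package struct {
        Name    string `json:"name"`
        Version string `json:"version"`
    }

    // DeepCopy round-trips the blueprint through JSON, producing a
    // copy that shares no references with the original.
    func (b *Blueprint) DeepCopy() (Blueprint, error) {
        var bp Blueprint
        data, err := json.Marshal(b)
        if err != nil {
            return bp, err
        }
        err = json.Unmarshal(data, &bp)
        return bp, err
    }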
This adds the modules to the list of package specs to be depsolved. It
includes a new function to build the version glob package string, as
well as tests for the new function and for depsolving with modules in
the blueprint.
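Something along these lines; dnf accepts name-version globs like
openssh-7.*, and the function name here is made up:

    package main

    import "fmt"

    // pkgSpec builds the name-version string passed to the depsolver;
    // an empty or "*" version means "any version".
    func pkgSpec(name, version string) string {
        if version == "" || version == "*" {
            return name
        }
        return fmt.Sprintf("%s-%s", name, version)
    }

    func main() {
        fmt.Println(pkgSpec("openssh", ""))    // openssh
        fmt.Println(pkgSpec("openssh", "7.*")) // openssh-7.*
    }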
This adds support for the modules field. It moves the version
replacement into a separate function, setPkgEVRA, and adds tests for the
new function as well as for blueprints with packages in both the
packages and modules lists.
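A sketch of what the helper might look like, matching depsolved
packages back to blueprint entries (types and field names are
illustrative, not the real ones):

    package main

    import "fmt"

    type Package struct {
        Name    string
        Version string
    }

    type PackageSpec struct {
        Name    string
        Epoch   uint
        Version string
        Release string
        Arch    string
    }

    // setPkgEVRA replaces each package's version glob with the EVRA
    // of the matching depsolved package.
    func setPkgEVRA(specs []PackageSpec, pkgs []Package) error {
        for i, pkg := range pkgs {
            found := false
            for _, spec := range specs {
                if spec.Name == pkg.Name {
                    evra := fmt.Sprintf("%s-%s.%s", spec.Version, spec.Release, spec.Arch)
                    if spec.Epoch > 0 {
                        evra = fmt.Sprintf("%d:%s", spec.Epoch, evra)
                    }
                    pkgs[i].Version = evra
                    found = true
                    break
                }
            }
            if !found {
                return fmt.Errorf("%s not found in depsolve results", pkg.Name)
            }
        }
        return nil
    }

    func main() {
        pkgs := []Package{{Name: "tmux", Version: "*"}}
        specs := []PackageSpec{{Name: "tmux", Version: "2.9a", Release: "2.fc30", Arch: "x86_64"}}
        if err := setPkgEVRA(specs, pkgs); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(pkgs[0].Version) // 2.9a-2.fc30.x86_64
    }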
The dependencies are not sorted, so depending on what order they were
returned in, the freeze route would or would not return the correct
results (exhibited by the version being the original glob instead of the
EVRA).
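One way to make the route order-independent is to sort the depsolve
results by name before matching; a sketch:

    package main

    import (
        "fmt"
        "sort"
    )

    type PackageSpec struct{ Name string }

    func main() {
        deps := []PackageSpec{{"dep-package3"}, {"dep-package1"}, {"dep-package2"}}
        // Sort by name so the result does not depend on the order the
        // depsolver happened to return.
        sort.Slice(deps, func(i, j int) bool { return deps[i].Name < deps[j].Name })
        fmt.Println(deps) // [{dep-package1} {dep-package2} {dep-package3}]
    }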
This also fixes the tests so that the depsolve results are slightly
unsorted, by adding a dep-package3 to the start of the list.
When creating a pipeline with the default image size, the size should no
longer be set to 0. Instead, the size is fetched using the distro
function GetSizeForOutputType, which can return the default image size
for a given image type. This size can then be passed into the pipeline.
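A sketch of the idea, under the assumption that a size of 0 means "use
the default" (types are trimmed down, and the default is illustrative):

    package main

    import "fmt"

    type distro struct{}

    // GetSizeForOutputType returns the requested size, or the default
    // image size for the output type when size is 0.
    func (d *distro) GetSizeForOutputType(outputType string, size uint64) uint64 {
        if size != 0 {
            return size
        }
        defaults := map[string]uint64{
            "qcow2": 2 * 1024 * 1024 * 1024, // illustrative default: 2 GiB
        }
        return defaults[outputType]
    }

    func main() {
        var d distro
        size := d.GetSizeForOutputType("qcow2", 0)
        fmt.Println(size) // 2147483648, passed on to the pipeline
    }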
The current `NewRegistry` implementation allows nil values in the
map, but this leads to subtle bugs when using the registry. This patch
enforces non-nil values by introducing additional checks before we
insert the value into the map.
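The shape of the check, sketched with a hypothetical Distro interface:

    package distro

    import "fmt"

    type Distro interface {
        Name() string
    }

    type Registry struct {
        distros map[string]Distro
    }

    // NewRegistry rejects nil entries up front, instead of letting
    // them surface later as nil-pointer panics when the registry is
    // used.
    func NewRegistry(distros ...Distro) (*Registry, error) {
        reg := &Registry{distros: make(map[string]Distro)}
        for _, d := range distros {
            if d == nil {
                return nil, fmt.Errorf("NewRegistry: nil distro")
            }
            reg.distros[d.Name()] = d
        }
        return reg, nil
    }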
The change unfortunately breaks a lot of tests, so it is necessary to
create an additional mock: distro.
The new mock is used instead of the previous "real" implementation,
which used to contain nil values.
Images can be built for Fedora 32. The pipeline generation and distro
tests are based on the Fedora 30 ones. Repository information has
also been added for the Fedora 32 repos.
Images can be built for Fedora 31. The pipeline generation and distro
tests are based on the Fedora 30 ones. Repository information has
also been added for the Fedora 31 repos.
A script that runs various go tools (mod tidy, mod vendor, and fmt for
now).
The idea is that it prepares the source tree to be ready for master. As
such, running it on master shouldn't modify any files. Make sure of that
by adding a test.
RHEL requires the source code for dependencies to be included in the
srpm. The spec file already expects that, but we've only included the
vendored modules (i.e., the `vendor` directory) in the `rhel-8.2`
branch. Move vendoring to master, so that we can build RHEL packages
from it as well.
This commit is the result of running `go mod vendor`, which includes the
vendored sources and updates go.mod and go.sum files.
Fedora requires the opposite: dependencies should not be vendored. The
spec file already ignores the `vendor` directory by default.
I wanted to create a unit test for this method, but then decided not to.
The reason is that if we add another field to ImageBuild but fail to
modify the test, it won't catch the bug. I think higher-level testing is
needed to cover this function.
This outputs the sources needed for the pipeline generated for the
distro. At the moment no pipelines require sources, so this always
returns an empty list.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is the replacement for the DNF stage, containing only GPG
keys and package checksums. It is meant to be used together with
the files source to actually fetch the packages. Depsolving must
be done in composer and the full package list inserted into
the pipeline.
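On the composer side the stage options might serialize to something
like this (field names follow the description above, not necessarily
the real osbuild schema):

    package osbuild

    // RPMStageOptions mirrors the commit's description: only GPG keys
    // and package checksums, with the packages themselves fetched by
    // the files source. Field names here are illustrative.
    type RPMStageOptions struct {
        GPGKeys  []string `json:"gpgkeys,omitempty"`
        Packages []string `json:"packages"` // e.g. "sha256:4077..."
    }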
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is unused for now, but will allow us to generate pipelines with
the pre-depsolved NEVRAs, so osbuild does not need to depsolve again.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This will detect inconsistent blueprints, and in the future will
allow us to use the returned packages for generating the pipeline.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The package selection shown in the UI does not include the base
packages that will be included in the image, and it cannot, because
the base packages depend on the output type, while the packages shown
in the UI are independent of the output type.
It is therefore possible to select packages incompatible with the base
packages. Discover this sooner rather than later, by including
the base packages in the final depsolve before creating the
pipeline.
In the future the result of the depsolve will be used to create
the pipeline, so this is another prerequisite for moving from
the dnf to the rpm stage.
Also depsolve the build packages for the same reason. Note that we
always set clean to false in this case, as the depsolving of the
main packages would have performed any necessary cleaning.
Also extend dnf-json to support excluding packages from depsolving.
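For illustration, the dnf-json request might grow an exclude list
roughly like this (field names and values are assumptions, not the
actual dnf-json schema):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // depsolveRequest is an illustrative shape for a dnf-json call;
    // the real schema may differ.
    type depsolveRequest struct {
        Command          string   `json:"command"`
        PackageSpecs     []string `json:"package-specs"`
        ExcludeSpecs     []string `json:"exclude-specs,omitempty"`
        ModulePlatformID string   `json:"module-platform-id"`
    }

    func main() {
        req := depsolveRequest{
            Command:          "depsolve",
            PackageSpecs:     []string{"@core", "kernel"},
            ExcludeSpecs:     []string{"dracut-config-rescue"},
            ModulePlatformID: "platform:f31",
        }
        out, _ := json.MarshalIndent(req, "", "  ")
        fmt.Println(string(out))
    }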
Signed-off-by: Tom Gundersen <teg@jklm.no>