The current `NewRegistry` implementation allows for nil values in the
map, but this leads to subtle bugs when using the registry. This patch
enforces non-nil values by introducing additional checks before we
insert the value into the map.
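A minimal sketch of the guard, assuming a map-backed registry keyed by name (names and signature are illustrative, not the exact API):

    // Hypothetical sketch; the real NewRegistry signature may differ.
    func NewRegistry(distros map[string]distro.Distro) (*Registry, error) {
        for name, d := range distros {
            // Refuse nil values so lookups never hand out a nil distro.
            if d == nil {
                return nil, fmt.Errorf("distro %q must not be nil", name)
            }
        }
        return &Registry{distros: distros}, nil
    }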
The change unfortunately breaks a lot of tests and therefore it is
necessary to create an additional mock: distro.
The new mock is used instead of the previous "real" implementation,
which used to contain nil values.
Images can be built for Fedora 32. The pipeline generation and distro
tests are based on the Fedora 30 ones. Repository information has
also been added for the Fedora 32 repos.
Images can be built for Fedora 31. The pipeline generation and distro
tests are based on the Fedora 30 ones. Repository information has
also been added for the Fedora 31 repos.
A script that runs various go tools (mod tidy, mod vendor, and fmt for
now).
The idea is that it prepares the source to be ready for master. As such,
running it on master shouldn't modify any files. Make sure of that by
adding a test.
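A sketch of such a test, assuming the script lives at tools/prepare-source.sh (path and test name are hypothetical; uses os/exec and testing):

    func TestSourceIsPrepared(t *testing.T) {
        // Run the prep script, then assert the working tree is unchanged.
        if err := exec.Command("tools/prepare-source.sh").Run(); err != nil {
            t.Fatal(err)
        }
        out, err := exec.Command("git", "status", "--porcelain").Output()
        if err != nil {
            t.Fatal(err)
        }
        if len(out) != 0 {
            t.Errorf("the source tree was modified:\n%s", out)
        }
    }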
RHEL requires the source code for dependencies to be included in the
srpm. The spec file already expects that, but we've only included the
vendored modules (i.e., the `vendor` directory) in the `rhel-8.2`
branch. Move vendoring to master, so that we can build RHEL packages
from it as well.
This commit is the result of running `go mod vendor`, which includes the
vendored sources and updates the go.mod and go.sum files.
Fedora requires the opposite: dependencies should not be vendored. The
spec file already ignores the `vendor` directory by default.
I wanted to create a unit test for this method, but then I decided not to.
The reason is that if we add another field to ImageBuild but fail to
update the test, the test won't catch the bug. I think higher-level
testing is needed to cover this function.
This outputs the sources needed for the pipeline generated for the
distro. At the moment no pipelines require sources, so this
always returns an empty list.
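A sketch of the stub, assuming a Sources method on the distro implementation (the signature is an assumption):

    // No pipeline requires sources yet, so this always returns an
    // empty set.
    func (d *distribution) Sources() osbuild.Sources {
        return osbuild.Sources{}
    }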
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is the replacement for the DNF stage, containing only GPG
keys and package checksums. It is meant to be used together with
the files source to actually fetch the packages. Depsolving must
be done in composer and the full package list inserted into
the pipeline.
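Roughly, the stage options might be modeled like this (field names are an approximation, not the final osbuild schema):

    type RPMStageOptions struct {
        // GPG keys used to verify the packages.
        GPGKeys []string `json:"gpgkeys,omitempty"`
        // Checksums of the packages to install; the files source
        // resolves these to the actual rpm files.
        Packages []string `json:"packages"`
    }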
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is unused for now, but will allow us to generate pipelines with
the pre-depsolved NEVRAs, so osbuild does not need to depsolve again.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This will detect inconsistent blueprints, and in the future will
allow us to use the returned packages for generating the pipeline.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The package selection shown in the UI does not include the base
packages that will be included in the image, and it cannot,
because the base packages depend on the output type, while the
packages shown in the UI are independent of the output type.
It is possible to select packages incompatible with the base
packages. Discover this sooner rather than later, by including
the base packages in the final depsolve before creating the
pipeline.
In the future the result of the depsolve will be used to create
the pipeline, so this is another prerequisite for moving from
the dnf to the rpm stage.
Also depsolve the build packages for the same reason. Note that we
always set clean to false in this case, as the depsolving of the
main packages would have performed any cleaning necessary.
Also extend dnf-json to support excluding packages from depsolving.
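A rough sketch of the resulting flow (helper names and signatures are hypothetical):

    // Hedged sketch; the real helpers differ.
    func depsolveForOutput(d distro.Distro, bp *blueprint.Blueprint, output string) error {
        base, excludes := d.BasePackages(output)
        // Include the base packages in the final depsolve, so that
        // incompatible blueprints fail here rather than in osbuild.
        specs := append(base, bp.GetPackages()...)
        if _, err := depsolve(specs, excludes, true /* clean */); err != nil {
            return err
        }
        // The main depsolve has already done any cache cleaning
        // needed, so clean is always false for the build packages.
        _, err := depsolve(d.BuildPackages(output), nil, false)
        return err
    }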
Signed-off-by: Tom Gundersen <teg@jklm.no>
We must avoid depending on the host's state in any way. This achieves
isolation in the following ways:
- rather than the default config file, /dev/null is used
- rather than sharing the host's persistent state dir, a temporary one
is used and thrown away after each call
- the module_platform_id is set explicitly per supported distro, rather
than taken from /etc/os-release.
Optionally, the cache directory can be configured, as we may want to keep
this separate from the host's, if for no other reason than accounting.
However, the cache appears to be well-behaved, so we can keep sharing
it between calls (or even with the host). This speeds things up
considerably, so this is definitely what we want.
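On the composer side, the per-call knobs might look roughly like this (field names are assumptions about the dnf-json request):

    type depsolveArgs struct {
        // Set explicitly per supported distro, e.g. "platform:f31",
        // instead of letting dnf read the host's /etc/os-release.
        ModulePlatformID string `json:"module_platform_id"`
        // Optional: when set, dnf's cache is kept here instead of
        // being shared with the host.
        CacheDir string `json:"cachedir,omitempty"`
    }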
Signed-off-by: Tom Gundersen <teg@jklm.no>
In our base distro definitions we exclude packages in addition to
including them. Extend dnf-json to support this, so we can depsolve
the base package set as well as the packages added in blueprints.
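The request to dnf-json might then carry both lists, roughly (field names are assumptions):

    type depsolveRequest struct {
        // Packages to include in the transaction.
        PackageSpecs []string `json:"package-specs"`
        // Packages the distro definition explicitly excludes,
        // passed through to the solver as excludes.
        ExcludeSpecs []string `json:"exclude-specs,omitempty"`
    }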
Signed-off-by: Tom Gundersen <teg@jklm.no>
In addition to the NEVRA, include the location of and hash over the rpm
file. This allows us to separately fetch RPMs and verify that references
to them are correct, as the NEVRA alone is sufficient for neither
fetching nor verifying.
This is a prerequisite for using the rpm rather than the dnf stage
in our osbuild pipelines.
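A sketch of the extended result entry (field names are illustrative):

    type PackageSpec struct {
        Name    string `json:"name"`
        Epoch   uint   `json:"epoch"`
        Version string `json:"version"`
        Release string `json:"release"`
        Arch    string `json:"arch"`
        // Where the rpm file can be fetched from.
        RemoteLocation string `json:"remote_location"`
        // Hash over the rpm file, e.g. "sha256:...", used to verify
        // that the fetched file matches the depsolved package.
        Checksum string `json:"checksum"`
    }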
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is needed for depsolving, so expose it from the distro package
so that it can be passed to dnf-json (and not only to osbuild), as
dnf-json does depsolving too.
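Sketch of the assumed interface addition:

    type Distro interface {
        // ModulePlatformID returns the dnf module platform ID for
        // the distro, e.g. "platform:f31".
        ModulePlatformID() string
        // (existing methods elided in this sketch)
    }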
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rename the package from `pipeline` to `osbuild` to reflect that it
will no longer be specific to pipelines, but rather covers all
osbuild datatypes.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When support for the osbuild result was added to osbuild-composer, it
was done in a somewhat hacky way: the local target's location was reused
as the path for the result. This didn't make much sense, because we want
to store the result even when an image build has no local target.
Several past commits made the store less dependent on the local target.
The responsibility for "holding the paths" to build artifacts was
gradually shifted from the local target to the store, while still
maintaining backwards compatibility - localtarget.Location still pointed
at the correct location.
This commit finishes the switch: the local target no longer has a
Location field. The store is now fully responsible for managing the
artifacts and the paths to them. LocalTarget is now just a simple
"switch": if an image build has one, the worker uploads the image into
the store, and it is then available for download using the weldr API.
Prior to this commit, the local target copied the image from a worker to
the composer using the cp(1) command. This prevented the local target
from working on remote workers.
This commit switches the local target implementation to the jobqueue
API introduced in the previous commit. I had some concerns about the
speed of this solution (imho nothing can beat a pure cp(1)
implementation), but ad hoc sanity tests showed that copying the image
via the jobqueue API, when running the worker on the same machine as the
composer, is still more or less instant.
In #221 Compose was refactored: now it can have multiple image builds.
More image builds result in more jobs, and each job has its own result
(logs from osbuild). Additionally, targets are now part of an image
build. With the local target this effectively means we can have multiple
images per compose.
However, these artifacts (images & results) were stored only per compose
prior to this commit, rendering the behaviour of composes with multiple
image builds undefined and racy.
This commit fixes that by storing all the artifacts per image build
instead of per compose. To achieve this, the getComposeDirectory and
getImageBuildDirectory methods were created to centralize the path
assembly (sketched below, after the path layout).
Paths to artifacts prior to this commit:
${COMPOSER_STATE_DIR}/outputs/${COMPOSE_ID}/*
Paths to artifacts after this commit:
${COMPOSER_STATE_DIR}/outputs/${COMPOSE_ID}/${IMAGE_BUILD_ID}/*
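The helpers might boil down to something like this (exact signatures are assumptions):

    func (s *Store) getComposeDirectory(composeID uuid.UUID) string {
        return filepath.Join(s.stateDir, "outputs", composeID.String())
    }

    func (s *Store) getImageBuildDirectory(composeID uuid.UUID, imageBuildID int) string {
        return filepath.Join(s.getComposeDirectory(composeID), strconv.Itoa(imageBuildID))
    }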
This layout imho makes more sense from a REST perspective. Also, in the
future there will be a route for uploading an image to an image build.
As it's not a good idea to transport a file inside JSON, all the
parameters (compose id and image build id) need to be in the URL.
Therefore, for the sake of consistency, all these routes should have the
compose id and image build id in the URL.
There is another solution for embedding multiple values inside an http
body that allows file transport - multipart/form-data. I think using
form-data is worthwhile for more complex stuff; for our use case,
transporting all the metadata in the URL is the more appropriate
solution.
Everything that this field contained can be computed in another way:
- path: just look up the local target and read the path from there
- mime: can be derived from the distribution and compose output type
- size: can be derived from the path
Therefore it imho doesn't make much sense to store this information
multiple times.
There's no connection between the user-specified image size and the
actual image size (which might differ due to compression). Therefore,
there's no need to check whether the actual image exists if we want to
return just the user-specified size.
When requesting the compose status, a user may want to filter the list
of composes by blueprint name, compose status, and/or compose type. These
filters can now be set in the /compose/status route's URL as the query
parameters blueprint, status, and type.
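For example: GET /compose/status?blueprint=http-server&status=FINISHED&type=qcow2 (the values are illustrative). Extracting the filters could look like this (handler shape is an assumption):

    func composeStatusFilters(r *http.Request) (blueprint, status, composeType string) {
        q := r.URL.Query()
        return q.Get("blueprint"), q.Get("status"), q.Get("type")
    }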
The compose now contains multiple image builds, but Weldr API does not
support this feature. Use the first image build every time.
Also start using the new types instead of plain strings.
All the preceding changes in the jobqueue, the compose package, the
common types, etc. were a base for refactoring the store so that a
compose can handle multiple image builds. This commit introduces the
change in a backwards-compatible way, so that the weldr API doesn't
change.
The compose will soon move to a concept of including multiple image
builds per compose, so we need an extra identifier to handle this
scenario.
The problem with the previous one was that we used artificial image
types, architectures, and distributions, but that is not possible any
more because we don't want to use plain strings any more. This commit
introduces a new testing distro which uses proper, existing types.
The store package is getting too big and convoluted; this new package
will help with separating the compose logic from our current
implementation of the store.
The current implementation will be removed in the following commits.
These states will be used for tracking the image build and compose
states in the rest of our codebase. There should be no change in
behavior: it is a 1-to-1 replacement, with the only difference being the
use of a type alias instead of a plain string.
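Roughly (names and values are illustrative):

    // A defined type over the previous plain string: behavior is
    // unchanged, but the compiler now catches mixed-up states.
    type ImageBuildState string

    const (
        IBWaiting  ImageBuildState = "WAITING"
        IBRunning  ImageBuildState = "RUNNING"
        IBFinished ImageBuildState = "FINISHED"
        IBFailed   ImageBuildState = "FAILED"
    )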
We want to move to a state where composer is a stateless service;
therefore, we cannot take blueprints by name (and fetch them from
storage) or assume the repositories from the distribution. This commit
introduces both of these parameters as part of the structure.
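The structure might gain fields roughly like these (names are illustrative):

    type ComposeRequest struct {
        // The full blueprint is passed in, rather than a name to be
        // fetched from storage.
        Blueprint blueprint.Blueprint
        // Repositories are passed in explicitly, rather than assumed
        // from the distribution.
        Repositories []rpmmd.RepoConfig
        // (other fields unchanged)
    }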