In our base distro definitions we exclude packages in addition to
including them. Extend dnf-json to support this, so we can depsolve
the base package set as well as the packages added in blueprints.
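A minimal sketch of what such a package set could look like on the Go side; the struct, its fields, and the example package names are illustrative, not the actual definitions:

    // packageSet is a hypothetical shape for a base package set that
    // carries excludes alongside includes.
    type packageSet struct {
        Include []string // e.g. "@core", "kernel"
        Exclude []string // e.g. "dracut-config-rescue"
    }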
Signed-off-by: Tom Gundersen <teg@jklm.no>
When support for the osbuild result was added to osbuild-composer, it was
done in a somewhat hacky way: the localtarget's location was reused as the
path for the result. This didn't make much sense, because we want to store
the result even when an image build has no localtarget.
Several past commits made the store less dependent on the localtarget. The
responsibility for "holding the paths" to build artifacts was gradually
shifted from the localtarget to the store, while still maintaining
backwards compatibility - localtarget.Location still pointed at the
correct location.
This commit finishes the switch: the local target no longer has a Location
field. The store is now fully responsible for managing the artifacts and the
paths to them. LocalTarget is now just a simple "switch": if an image build
has one, the worker uploads the image into the store, where it is then
available for download via the weldr API.
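A sketch of the worker-side decision; the helper names are hypothetical:

    // With a local target present, hand the finished image to the store;
    // the weldr API can then serve it for download.
    if imageBuild.hasLocalTarget() {
        err = store.PushImage(composeID, imageBuildID, imagePath)
    }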
In #221, Compose was refactored: it can now have multiple image builds. More
image builds result in more jobs, and each job has its own result (logs from
osbuild). Additionally, targets are now part of each image build. With the
local target this effectively means we can have multiple images per compose.
However, prior to this commit these artifacts (images & results) were stored
only per compose, rendering the behaviour of composes with multiple image
builds undefined and racy.
This commit fixes that by storing all artifacts per image build instead of
per compose. To achieve this, the getComposeDirectory and
getImageBuildDirectory methods were created to centralize the path assembly.
Paths to artifacts prior to this commit:
${COMPOSER_STATE_DIR}/outputs/${COMPOSE_ID}/*
Paths to artifacts after this commit:
${COMPOSER_STATE_DIR}/outputs/${COMPOSE_ID}/${IMAGE_BUILD_ID}/*
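A minimal sketch of the two helpers, assuming the store keeps its state
directory in a stateDir field (the field name and parameter types are
illustrative):

    import (
        "path/filepath"
        "strconv"

        "github.com/google/uuid"
    )

    type Store struct {
        stateDir string // ${COMPOSER_STATE_DIR}
    }

    func (s *Store) getComposeDirectory(composeID uuid.UUID) string {
        return filepath.Join(s.stateDir, "outputs", composeID.String())
    }

    func (s *Store) getImageBuildDirectory(composeID uuid.UUID, imageBuildID int) string {
        return filepath.Join(s.getComposeDirectory(composeID), strconv.Itoa(imageBuildID))
    }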
Everything that this field contained can be computed in another way:
- path: just look up the local target and read the path from there
- mime: can be derived from the distribution and compose output type
- size: can be derived from the path
Therefore, in my opinion, it doesn't make much sense to store this
information multiple times.
There's no connection between the user-specified image size and the actual
image size (which might differ due to compression). Therefore, there's no
need to check whether the actual image exists if we only want to return the
user-specified size.
When requesting the compose status, a user may want to filter the list
of composes by blueprint name, compose status, and/or compose type. These
filters can now be set in the /compose/status route's URL as the query
parameters blueprint, status, and type.
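A sketch of the handler side, with illustrative names:

    import (
        "fmt"
        "net/http"
    )

    // statusHandler reads the optional filters from the URL, e.g.
    // /compose/status?blueprint=web-server&status=FINISHED&type=qcow2
    func statusHandler(w http.ResponseWriter, r *http.Request) {
        q := r.URL.Query()
        // An empty value means "no filter" for that dimension.
        filterBlueprint := q.Get("blueprint")
        filterStatus := q.Get("status")
        filterType := q.Get("type")
        fmt.Fprintf(w, "filters: %q %q %q\n", filterBlueprint, filterStatus, filterType)
    }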
A compose can now contain multiple image builds, but the Weldr API does not
support this feature, so use the first image build every time.
Also start using the new types instead of plain strings.
When a user does not define the image size for a compose, the default
image size of that image type is used. In order to properly store the
compose's image size even when the default is used, the store calls the
distro function GetSizeForOutputType. This function accepts an output
format and an image size. If the image size is 0, the default value for
the output format is returned. Also, for vhd images the size must be
rounded; this is now handled in the distro function instead of the API.
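A sketch of the described behaviour; the defaults table and the MB constant
are placeholders, while the zero-means-default contract and the vhd rounding
come from the commit:

    // defaultSizes maps an output format to its default image size
    // (values here are placeholders).
    var defaultSizes = map[string]uint64{
        "qcow2": 2 * 1024 * 1024 * 1024,
        "vhd":   2 * 1024 * 1024 * 1024,
    }

    func GetSizeForOutputType(outputFormat string, size uint64) uint64 {
        const mb = 1024 * 1024
        if size == 0 {
            size = defaultSizes[outputFormat] // fall back to the type's default
        }
        if outputFormat == "vhd" {
            size = (size + mb - 1) / mb * mb // Azure wants 1 MB alignment
        }
        return size
    }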
When a user defines the image size for a compose, this size is stored in
the compose struct so that the virtual image size can be returned by the
API instead of the file size of the image.
When creating a compose, the desired image size can be set. If the image
type is VHD, the image size is rounded up to the nearest MB, since all
VHDs on Azure must have a virtual size aligned to 1 MB.
Commit b1c5ef2a introduced support for retrieving logs from osbuild.
This commit finishes the second part: actually returning the logs from
the /compose/logs route.
dnf-json relies on dnf's ability to cache repository metadata. This is
important, because the API calls it quite often to serve requests for
package lists and depsolves.
However, osbuild's dnf stage always fetches fresh metadata, because it
doesn't have access to the host's cache. Since metadata remains valid for
some time even after a repository has changed, the checksum we put into
the pipeline might be stale.
Force a new metadata download when producing the pipeline. This is still
not perfect, but it greatly reduces the probability of putting stale
metadata into the pipeline.
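One way to express this, with illustrative field and JSON names (the actual
dnf-json contract may differ):

    // depsolveRequest is a hypothetical shape for what osbuild-composer
    // sends to dnf-json when producing a pipeline.
    type depsolveRequest struct {
        PackageSpecs []string `json:"package-specs"`
        ForceRefresh bool     `json:"force-refresh"` // bypass cached repo metadata
    }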
Allow bootloader-specific packages to be defined per architecture,
and allow repositories to depend on the architecture.
This does not alter the pipelines we produce, apart from the ami
image now containing the grub2-pc package rather than the grub2
package. This should make no difference.
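An illustrative layout; the map name is hypothetical:

    // Bootloader packages keyed by architecture; on x86_64 the ami image
    // now pulls in grub2-pc instead of grub2.
    var bootloaderPackages = map[string][]string{
        "x86_64": {"grub2-pc"},
    }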
Signed-off-by: Tom Gundersen <teg@jklm.no>
The pipeline generation now takes the architecture as an argument.
Currently only x86_64 is supported. The architecture is detected
at start-up and passed down to each pipeline translation.
For osbuild-pipeline we now require the architecture to be passed
in.
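A sketch of the start-up detection, assuming a small translation from Go's
architecture names to RPM-style ones (the mapping and its name are
illustrative):

    package main

    import (
        "fmt"
        "runtime"
    )

    var rpmArch = map[string]string{"amd64": "x86_64"}

    func main() {
        arch, ok := rpmArch[runtime.GOARCH]
        if !ok {
            panic("unsupported architecture: " + runtime.GOARCH)
        }
        fmt.Println(arch) // passed down to each pipeline translation
    }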
Signed-off-by: Tom Gundersen <teg@jklm.no>
Instead of having a static repository checksum, set it dynamically from
the metadata that osbuild-composer last saw. This is implemented in
dnf-json, which returns the checksums for each repository on every call.
This enables the use of repositories that change over time, such as
fedora-updates. Note that the osbuild pipeline will break when such a
repository changes. This is intentional: pipelines have to be
reproducible.
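An illustrative reply shape; the field and JSON names are assumptions:

    // depsolveReply sketches what dnf-json hands back on every call.
    type depsolveReply struct {
        Checksums map[string]string `json:"checksums"` // repo id -> metadata checksum
    }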
The implementation is just a stub that always returns the same tar archive.
The ability to return actual logs will be implemented in the future -
osbuild currently doesn't return any logs.
Prior to this commit, the blueprint getters looked like a C-style API with
output parameters. This commit refactors them into a more conventional
multiple-return-values API.
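A before/after sketch with illustrative signatures and a stand-in Blueprint
type:

    type Blueprint struct{ Name string }

    type Store struct {
        blueprints map[string]Blueprint
    }

    // Before (C-style, output parameter plus success flag):
    //     func (s *Store) GetBlueprint(name string, bp *Blueprint) bool
    // After: conventional multiple return values.
    func (s *Store) GetBlueprint(name string) (Blueprint, bool) {
        bp, ok := s.blueprints[name]
        return bp, ok
    }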
As part of f4991cb1, the ComposeEntry struct was removed from the store
package. This change made sense because the struct is connected more to the
API than to the store - the store uses its own Compose struct. In addition,
converters between Compose and ComposeEntry were added. Unfortunately,
ComposeEntry contains ImageSize, which was not stored in Compose but
retrieved from the store using the GetImage method. This made those
converters dependent on the store, which was messy.
To solve this issue, this commit adds an Image struct to the Compose struct.
The content of the Image struct is generated on the worker side: when the
worker sets the compose status to FINISHED, it also sends an Image struct
with detailed information about the result.
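An illustrative shape for the new struct, based on the path/mime/size fields
mentioned elsewhere in this series; the real field set may differ:

    // Image describes the artifact the worker produced.
    type Image struct {
        Path string // where the artifact lives in the store
        Mime string // derived from the distribution and output type
        Size uint64 // size of the file on disk
    }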
This commit introduces basic support for the upload API. Currently, all the
routes required by cockpit-composer are supported (except for /compose/log).
Also, the ComposeEntry struct is moved outside of the store package. I
decided to do this because it isn't connected in any way to the store; it's
more connected to the API. Due to this move there is currently a known bug:
the image size is not returned. This should be solved by moving the Image
struct inside the Compose struct in a follow-up PR.
Make osbuild-composer use FromHost() directly. Everywhere else needs to
specify the distro explicitly.
Also don't panic when a distro doesn't exist. Instead, return nil. Make
sure all callers check for that.
Automatically registering on `init()` is clever, but a bit too magical
and easy to get wrong, because every binary must include all distros
somewhere.
Flip this inside out: distros now have a `New()`, which returns
something that implements the `Distro` interface. The distro package
explicitly creates all of them.
This means that the individual distro packages cannot import the distro
package itself anymore, because Go forbids import cycles. This only affected
`InvalidOutputFormatError`. Return a generic error for now.
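A sketch of the explicit construction that replaces the `init()`
registration; the distro names and sub-packages are illustrative:

    // New returns the distro with the given name, or nil when it doesn't
    // exist; callers must check for nil instead of relying on a panic.
    func New(name string) Distro {
        switch name {
        case "fedora-30":
            return fedora30.New()
        case "rhel-8":
            return rhel8.New()
        }
        return nil
    }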
lorax-composer recently introduced API version 1. This commit introduces
very basic support for it. The implementation tries to deduplicate code
between routes with the same behaviour as much as possible. All the
differences in the v1 API are marked as TODOs for now and will be
implemented in follow-up PRs.
Make distros export their repository information and use it in the weldr
API. This means that repos are only specified once, and that the API
returns the right packages when we allow different distros.
Split the error case (no sources specified) into its own function, so
that we can use `source/info/:sources` (note the colon) to get the list
of sources without the leading `/`. This gets rid of two special cases
which made the previous implementation hard to parse.
The naming is confusing: repositories have an `id` and a human readable
`name`. Weldr's sources also have a field called `name`, but
lorax-composer uses that as a way to identify repositories by their id.
Use `id` consistently here as well.
The blueprint freeze route returns the blueprint info, but each
package will be the package selected by depsolving. So, instead of the
version being a version number with optional wildcards, as
/blueprints/info would provide, the version is of the form
`Version-Release.Arch` (for example, `2.7.5-2.fc30.x86_64`).
These endpoints are similar in many ways, therefore just one commit. Their
functionality is basically the same as in lorax, except for error messages
and weird edge cases when handling trailing slashes.
closes #64, closes #65
Add a route to set a blueprint back to its state at a particular change.
The route `blueprints/undo/:blueprint/:commit` requires the blueprint
name and the commit hash of the change that the blueprint should be
reverted to. Also, the commit message for the change created when a
blueprint is pushed is now passed from the API to the store's
PushBlueprint function.
The package list is now generated on each request for packages, so there is
no longer a need to generate the package list in main or to store these
packages in the API object.
The package list is now generated from the base repo and the
user-defined repos. This allows users to add packages not found in the
base repo to their blueprints.
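A sketch of the per-request flow; repoConfig, listPackages, and the variable
names are hypothetical:

    // availablePackages combines the base repo with the user-defined
    // sources and fetches the package list fresh for this request.
    func availablePackages(baseRepo repoConfig, userRepos []repoConfig) ([]string, error) {
        repos := append([]repoConfig{baseRepo}, userRepos...)
        return listPackages(repos)
    }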