Everything that this field contained can be computed in another way:
- path: just look up the local target and read the path from there
- mime: can be derived from distribution and compose output type
- size: can be derived from the path
Therefore, imho, it doesn't make much sense to store this information
multiple times.
In the future, remote workers will be introduced. Obviously, a remote worker
cannot support the local target. Unfortunately, the current implementation of
storing the osbuild result depends on it.
This commit moves the responsibility for storing the osbuild result to the
composer process instead of the worker process. The result is transferred from
a worker to a composer using an extended HTTP API.
We treat osbuild-pipeline as a tool run only from within the source tree;
therefore, we search for the repository configs only in the current directory.
In the current state, osbuild-pipeline exits with a random Go error, such
as a failed goroutine, which is not at all helpful. This PR introduces CLI
argument validation and helpful error messages that use the newly introduced
types, so that we don't waste time guessing the right way to invoke this
tool.
When creating a pipeline, the assembler includes an image size. This image
size can be set when creating the pipeline, but if it is 0, a default image
size is used. The default is 2 GB, except for ami images, which use 6 GB.
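A minimal sketch of that fallback, assuming illustrative constant names and an outputFormat string (not the actual assembler code):

```go
const (
	defaultImageSize    uint64 = 2 * 1024 * 1024 * 1024 // 2 GB
	amiDefaultImageSize uint64 = 6 * 1024 * 1024 * 1024 // 6 GB
)

// imageSize returns the requested size, falling back to a default when it is 0.
func imageSize(requested uint64, outputFormat string) uint64 {
	if requested != 0 {
		return requested
	}
	if outputFormat == "ami" {
		return amiDefaultImageSize
	}
	return defaultImageSize
}
```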
osbuild-composer expects two or three listeners and fails if there is an
unexpected number of listening sockets. However, checking whether there are
not two listeners or not three listeners always returns true, even when the
count is as desired, so osbuild-composer always crashes. The check now only
fails when the number of listeners is neither 2 nor 3.
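The shape of the bug and the fix, sketched with assumed variable names:

```go
// Buggy: every count differs from 2 or from 3, so this is always true.
if len(listeners) != 2 || len(listeners) != 3 {
	log.Fatalf("unexpected number of listening sockets: %d", len(listeners))
}

// Fixed: fail only when the count is neither 2 nor 3.
if len(listeners) != 2 && len(listeners) != 3 {
	log.Fatalf("unexpected number of listening sockets: %d", len(listeners))
}
```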
It contains two basic endpoints:
* POST /v1/compose
* GET /v1/compose/<uuid>
It passes all the tests, but cannot be used for the intended use case
because the store API does not (yet) support distributions and
architectures as parameters.
Introduce a new osbuild-tests command and ship it in the -tests
sub-package.
The intended use case is to install the -tests subpackage into an
otherwise pristine VM and call the osbuild-tests binary over ssh from
outside the booted VM. If the binary exits with a return code of 0, the
tests passed; otherwise they failed. The VM should not be reused after
running the tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
During development of a new distro, we need to test composer against
nightly or beta repositories, but we cannot ship composer itself
with the nightly repository information hardcoded in. At the same
time, we want to distinguish between the system repositories of the
host and the repositories we use to generate images (the host may not
use the same distro/version/architecture as the target, and it may
include custom repositories that the target should not).
We therefore ship per-distro repository information that can be
overridden (typically in testing) by dropping files in /etc.
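A minimal sketch of such a lookup; the exact paths and file layout here are assumptions, not the actual composer configuration:

```go
import (
	"os"
	"path/filepath"
)

// repoConfigPath prefers a file dropped into /etc over the shipped default.
func repoConfigPath(distro string) string {
	override := filepath.Join("/etc/osbuild-composer/repositories", distro+".json")
	if _, err := os.Stat(override); err == nil {
		return override // testing override takes precedence
	}
	return filepath.Join("/usr/share/osbuild-composer/repositories", distro+".json")
}
```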
For now, use the latest nightlies for RHEL-8.2; we may want to
replace these with the official mirrors for GA eventually.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Introduce a DistroRegister object. For now this does not introduce
any functional changes, as the object is always instantiated to be
the same. However, in follow-up patches it will get options.
Signed-off-by: Tom Gundersen <teg@jklm.no>
dnf-json relies on dnf's ability to cache repository metadata. This is
important, because the API calls it quite often to serve requests for
package lists and depsolves.
However, osbuild's dnf stage always fetches new metadata, because it
doesn't have access to the host's cache. Since metadata remains valid for
some time even after a repository has changed, the checksum we put in
the pipeline might be stale.
Force a new metadata download when producing the pipeline. This is still
not perfect, but greatly reduces the probability of putting stale
metadata into the pipeline.
Allow bootloader specific packages to be defined per architecture,
and allow repositories to depend on the architecture.
This does not alter the pipelines we produce, apart from the ami
image now containing the grub2-pc package rather than the grub2
package. This should make no difference.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Only panic on compile-time errors (e.g., built for an unsupported
architecture). Otherwise, use log.Fatalf(), which is equivalent to
printing the error and exiting with return code 1. Only ever do this from
main(); in all other cases, pass on the error object.
This is mostly relevant when the server disconnects, in which case
we'll get EOF and will now restart cleanly instead of panicking.
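A self-contained sketch of this convention; the failing helper is a stand-in, not the actual worker code:

```go
package main

import (
	"errors"
	"log"
)

// run passes errors up to the caller instead of panicking.
func run() error {
	// ... connect to the composer and process jobs ...
	return errors.New("server disconnected: EOF") // stand-in failure
}

func main() {
	// log.Fatalf prints the error and exits with return code 1; this is
	// the only place a runtime error terminates the process.
	if err := run(); err != nil {
		log.Fatalf("%v", err)
	}
}
```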
Signed-off-by: Tom Gundersen <teg@jklm.no>
The pipeline generation now takes the architecture as an argument.
Currently only x86_64 is supported. The architecture is detected
at start-up, and passed down to each pipeline translation.
For osbuild-pipeline, we now require the architecture to be passed
in.
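A hedged sketch of the start-up detection, assuming the compile-time GOARCH is what matters:

```go
import (
	"fmt"
	"runtime"
)

// detectArch maps Go's architecture name to the RPM-style name used in
// pipelines; only x86_64 is supported for now.
func detectArch() (string, error) {
	switch runtime.GOARCH {
	case "amd64":
		return "x86_64", nil
	default:
		return "", fmt.Errorf("unsupported architecture: %s", runtime.GOARCH)
	}
}
```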
Signed-off-by: Tom Gundersen <teg@jklm.no>
Instead of having a static repository checksum, set it dynamically from
the metadata that osbuild-composer last saw. This is implemented in
dnf-json, which returns the checksums for each repository on every call.
This enables the use of repositories that change over time, such as
fedora-updates. Note that the osbuild pipeline will break when such a
repository changes. This is intentional: pipelines have to be
reproducible.
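A rough sketch of the reply shape this implies; the field name is an assumption, not the actual dnf-json contract:

```go
// depsolveReply is decoded from every dnf-json call; Checksums maps each
// repository id to the checksum of the metadata dnf saw for that call.
type depsolveReply struct {
	Checksums map[string]string `json:"checksums"`
}
```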
Prior to this commit, the outputs directory used by the local target was
owned by root. This made it impossible for osbuild-composer to delete images
(osbuild-composer doesn't run as root).
This commit introduces a state directory in which osbuild-composer creates
the outputs directory. Because this directory is owned by osbuild-composer,
it is able to delete the files inside.
As part of f4991cb1, the ComposeEntry struct was removed from the store
package. This change made sense because the struct is connected more with the
API than with the store; the store uses its own Compose struct. In addition,
converters between Compose and ComposeEntry were added. Unfortunately,
ComposeEntry contains ImageSize, which was not stored in Compose but retrieved
from the store using the GetImage method. This made those converters dependent
on the store, which was messy.
To solve this issue, this commit adds an Image struct to the Compose struct.
The content of the Image struct is generated on the worker side: when the
worker sets the compose status to FINISHED, it also sends an Image struct with
detailed information about the result.
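A trimmed sketch of the resulting shape; the field set is an assumption, loosely following the path/mime/size fields mentioned in the first commit above:

```go
// Image carries the details the worker reports about the finished compose.
type Image struct {
	Path string
	Mime string
	Size uint64
}

// Compose embeds the image information instead of retrieving it via GetImage.
type Compose struct {
	QueueStatus string
	Image       *Image // set by the worker when the status becomes FINISHED
}
```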
Make osbuild-composer use FromHost() directly. Everywhere else needs to
specify the distro explicitly.
Also don't panic when a distro doesn't exist. Instead, return nil. Make
sure all callers check for that.
Automatically registering on `init()` is clever, but a bit too magical
and easy to get wrong, because every binary must include all distros
somewhere.
Flip this inside out: distros now have a `New()`, which returns
something that implements the `Distro` interface. The distro package
explicitly creates all of them.
This means that distros cannot import the distro package itself anymore,
because Go forbids import cycles. This only affected
`InvalidOutputFormatError`. Return a generic error for now.
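A self-contained sketch of the flipped pattern; the concrete names are illustrative:

```go
package main

import "fmt"

type Distro interface{ Name() string }

type fedora30 struct{}

func (fedora30) Name() string { return "fedora-30" }

// newDistro replaces init()-time self-registration: the package explicitly
// constructs every distro it knows about and returns nil for unknown names.
func newDistro(name string) Distro {
	for _, d := range []Distro{fedora30{}} {
		if d.Name() == name {
			return d
		}
	}
	return nil // callers must check for nil instead of relying on a panic
}

func main() {
	if d := newDistro("fedora-30"); d != nil {
		fmt.Println("found", d.Name())
	}
}
```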
Return the error code of the osbuild run, and an array of errors,
one for each target provided. If a target fails, all other targets
are still attempted.
If either osbuild or one of the targets returns an error, the worker
notifies osbuild-composer that the job failed.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This takes a different approach to outputs and customizations, which is
much shorter and duplicates less code.
This uses links to internal repositories for now, because 8.2 hasn't
been released yet.
Add a `distro` flag to `osbuild-pipeline`.
This command-line tool uploads a file to S3, as a proof of concept.
All options are mandatory. Credentials are only read from the
command line, not from the environment or configuration files.
The next step is to add support for importing from S3 to EC2,
currently the images we produce cannot be imported as-is, so this
requires more research.
To try this out: create an S3 bucket, get your credentials and
call the tool, passing any value as `key`. Note that if the key
already exists, it will be overwritten.
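A hedged sketch of such a tool built on the AWS SDK for Go; the flag names and overall shape are assumptions, not the actual command:

```go
package main

import (
	"flag"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	accessKey := flag.String("access-key-id", "", "AWS access key ID (mandatory)")
	secretKey := flag.String("secret-access-key", "", "AWS secret access key (mandatory)")
	region := flag.String("region", "", "AWS region (mandatory)")
	bucket := flag.String("bucket", "", "S3 bucket (mandatory)")
	key := flag.String("key", "", "object key; an existing key is overwritten (mandatory)")
	file := flag.String("file", "", "image file to upload (mandatory)")
	flag.Parse()

	f, err := os.Open(*file)
	if err != nil {
		log.Fatalf("cannot open %s: %v", *file, err)
	}
	defer f.Close()

	// Credentials come only from the command line, never the environment.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:      region,
		Credentials: credentials.NewStaticCredentials(*accessKey, *secretKey, ""),
	}))

	_, err = s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
		Bucket: bucket,
		Key:    key,
		Body:   f,
	})
	if err != nil {
		log.Fatalf("upload failed: %v", err)
	}
}
```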
Signed-off-by: Tom Gundersen <teg@jklm.no>
It uses the Azure SDK to connect to Azure storage, creates a container there,
and uploads the image. Unfortunately, the API for page blobs does not
include a thread pool for uploads, so I implemented one myself. The
performance can be tweaked using the upload chunk size and the number of
parallel threads.
The package is prepared to be refactored into a common module within the
internal package as soon as we agree on the design of these common packages
for image upload.
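A self-contained sketch of such a pool; the chunking scheme and the uploadChunk callback are stand-ins for the Azure page-blob specifics:

```go
import "sync"

// uploadParallel splits data into fixed-size chunks and uploads them from a
// pool of goroutines, keeping only the first error.
func uploadParallel(data []byte, chunkSize, workers int,
	uploadChunk func(offset int64, chunk []byte) error) error {

	offsets := make(chan int)
	errs := make(chan error, workers)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for off := range offsets {
				end := off + chunkSize
				if end > len(data) {
					end = len(data)
				}
				if err := uploadChunk(int64(off), data[off:end]); err != nil {
					select {
					case errs <- err:
					default: // an error is already recorded
					}
				}
			}
		}()
	}

	for off := 0; off < len(data); off += chunkSize {
		offsets <- off
	}
	close(offsets)
	wg.Wait()

	select {
	case err := <-errs:
		return err
	default:
		return nil
	}
}
```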
Add azure-blob-storage rpm package as a dependency
It didn't work for me using the `golang(package)` syntax. Using the
package name explicitly works.
Make distros export repository information and use those in the weldr
API. This means that repos are only specified once and that the API
returns the right packages when we allow different distros.
The package list is generated on each request for a package, so there is
no longer a need to generate the package list in main or to store these
packages in the API object.
We want to test API methods that call dnf. Unfortunately, calling dnf
is an expensive operation: it requires network access and downloading
a lot of (meta)data. This commit changes the rpmmd implementation
so that it can be mocked.
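A trimmed sketch of the mockable shape; the names are assumptions, not the actual rpmmd types:

```go
type RepoConfig struct{ ID, BaseURL string }
type Package struct{ Name, Version string }

// Metadata is what the API depends on; production code shells out to
// dnf-json, while tests can substitute a fake.
type Metadata interface {
	FetchPackageList(repos []RepoConfig) ([]Package, error)
}

// fakeMetadata returns canned data without any network access.
type fakeMetadata struct{ packages []Package }

func (f fakeMetadata) FetchPackageList([]RepoConfig) ([]Package, error) {
	return f.packages, nil
}
```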
lorax-composer uses the host's repositories for building images. This is
prone to accidental configuration errors and duplicates functionality
(adding custom repositories via the source API is much more explicit).
However, blueprints don't specify the distribution they're based on.
This is something they should do in the future to enable cross-distro
image builds. Until then, read `/etc/os-release` to detect the host OS
and use that as the base distro for a blueprint.
Detection works by concatenating the ID field with a `-` and the
VERSION_ID field. This mandates how additional distributions should
register themselves to the `distro` package.
If composer cannot build the detected distro, fall back to fedora-30.
Use this detection everywhere that fedora-30 was hard-coded before.
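A self-contained sketch of that detection; the parsing is simplified (real os-release values may be quoted or escaped):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectHostDistro reads /etc/os-release and joins ID and VERSION_ID with
// a "-", e.g. "fedora-30".
func detectHostDistro() (string, error) {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		return "", err
	}
	defer f.Close()

	vars := map[string]string{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if k, v, ok := strings.Cut(scanner.Text(), "="); ok {
			vars[k] = strings.Trim(v, `"`)
		}
	}
	if err := scanner.Err(); err != nil {
		return "", err
	}
	return vars["ID"] + "-" + vars["VERSION_ID"], nil
}

func main() {
	name, err := detectHostDistro()
	if err != nil {
		name = "fedora-30" // fallback, analogous to the unbuildable-distro case
	}
	fmt.Println(name)
}
```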
Introduce the `distro` package, which contains an interface for OS
implementations. Its main purpose is to convert a blueprint to a
distro-specific pipeline.
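A hedged sketch of the interface's core method; the exact signature is an assumption based on this description (`blueprint` and `pipeline` are the packages named in this section):

```go
// Distro converts a weldr blueprint into a distro-specific osbuild pipeline.
type Distro interface {
	Pipeline(b *blueprint.Blueprint, outputFormat string) (*pipeline.Pipeline, error)
}
```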
Also introduce the `distro/fedora30` package. It is the first
implementation of the distro interface. Most of its code has been copied
with minimal modifications from the blueprint package.
The `blueprint` package is now back to serving a single purpose:
representing a weldr blueprint. It does not depend on the `pipeline`
package anymore.
Change osbuild-composer and osbuild-pipeline to use the new API,
hard-coding "fedora-30". This looks a bit weird now, but is the same
behavior as before.
All test cases now also take a "distro" key in the "compose" object.
Make each command accept a `repos` key containing repository
descriptions.
Make the weldr API pass the repositories like this. Nothing should change,
because the repos were the same (Fedora 30).
The main purpose of this is to share the structs between the server
and the client, and let the compiler ensure that our marshaling and
unmarshaling matches.
In the future, we also want to make it easier to write unit tests for
this code.
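An illustrative sketch of the idea; the type and fields are assumptions:

```go
// ComposeRequest lives in a package imported by both the server and the
// client, so the compiler guarantees both sides agree on the JSON shape.
type ComposeRequest struct {
	Distro       string   `json:"distro"`
	OutputFormat string   `json:"output-format"`
	Packages     []string `json:"packages"`
}
```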
Signed-off-by: Tom Gundersen <teg@jklm.no>
This tool outputs the pipeline associated with each output type. In
the future this should grow to take a blueprint as an input, and
also allow the arch/os/version/variant to be configured.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This hides the state handling in the store package. As before, it
can be disabled by passing `nil` instead of the path to the state
file.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Wrap the channel in Pop and Push methods, so it is not exposed to
the callers. PushCompose replaces the old AddCompose for consistency,
and PopCompose simply reads from the other end of the channel.
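A self-contained sketch of the wrapped channel; the Job type is an assumption:

```go
type Job struct{ ComposeID string }

type Store struct {
	pendingJobs chan Job // no longer exposed to callers
}

// PushCompose queues a compose for a worker; it replaces the old AddCompose.
func (s *Store) PushCompose(id string) {
	s.pendingJobs <- Job{ComposeID: id}
}

// PopCompose blocks until a job is available and hands it to a worker.
func (s *Store) PopCompose() Job {
	return <-s.pendingJobs
}
```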
Signed-off-by: Tom Gundersen <teg@jklm.no>