RHEL and Fedora used different output formats (.ami vs .raw.xz). The job
package assumed `image.ami`, which failed for RHEL.
Change Fedora to use `image.raw.xz` as well. This makes it consistent, but
we should also get the filename from the distro at some point.
This commit introduces basic support for the upload API. Currently, all the
routes required by cockpit-composer are supported (except for /compose/log).
Also, the ComposeEntry struct is moved outside of the store package. I decided
to do this because it isn't connected to the store in any way; it belongs more
to the API. Due to this move there is currently a known bug: the image size is
not returned. This should be solved by moving the Image struct inside the
Compose struct in a follow-up PR.
The return value was not predictable prior to this commit. This is most
likely caused by the non-deterministic iteration order of Go maps. This
commit fixes it by sorting the output.
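A minimal sketch of the approach, with illustrative names rather than the
actual handler code:

    package main

    import (
        "fmt"
        "sort"
    )

    // sortedNames collects the keys of a map and sorts them, so the output
    // no longer depends on Go's randomized map iteration order.
    func sortedNames(byName map[string]int) []string {
        names := make([]string, 0, len(byName))
        for name := range byName {
            names = append(names, name)
        }
        sort.Strings(names)
        return names
    }

    func main() {
        fmt.Println(sortedNames(map[string]int{"b": 2, "a": 1, "c": 3})) // [a b c]
    }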
Make osbuild-composer use `FromHost()` directly. Everywhere else needs to
specify the distro explicitly.
Also don't panic when a distro doesn't exist. Instead, return nil. Make
sure all callers check for that.
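A rough sketch of what callers have to do now; all names below are made up
for illustration, not the real API:

    package main

    import "fmt"

    // Distro stands in for the real interface.
    type Distro interface {
        Name() string
    }

    type fedora30 struct{}

    func (fedora30) Name() string { return "fedora-30" }

    var registered = map[string]Distro{
        "fedora-30": fedora30{},
    }

    // getDistro returns nil instead of panicking when the distro is unknown.
    func getDistro(name string) Distro {
        return registered[name] // nil for unknown names
    }

    func main() {
        for _, name := range []string{"fedora-30", "rhel-99"} {
            d := getDistro(name)
            if d == nil {
                fmt.Println("unknown distro:", name)
                continue
            }
            fmt.Println("using", d.Name())
        }
    }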
Automatically registering on `init()` is clever, but a bit too magical
and easy to get wrong, because every binary must include all distros
somewhere.
Flip this inside out: distros now have a `New()`, which returns
something that implements the `Distro` interface. The distro package
explicitly creates all of them.
This means that distros cannot import distro itself anymore, because Go
forbids import cycles. This only affected `InvalidOutputFormatError`.
Return a generic error for now.
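A minimal sketch of the new pattern; names are illustrative and the distro
is inlined here to keep the example self-contained:

    package distro

    // Distro is a stand-in for the real interface.
    type Distro interface {
        Name() string
    }

    // In the real tree each distro lives in its own sub-package and only
    // exports New().
    type fedora30 struct{}

    func (fedora30) Name() string { return "fedora-30" }

    func newFedora30() Distro { return fedora30{} }

    // NewRegistry constructs every supported distro in one place, so a
    // binary gets all of them simply by importing the distro package.
    func NewRegistry() map[string]Distro {
        all := []Distro{
            newFedora30(),
            // newRHEL82(), ...
        }
        registry := make(map[string]Distro, len(all))
        for _, d := range all {
            registry[d.Name()] = d
        }
        return registry
    }

The trade-off is one central list that has to be kept up to date, but no
hidden `init()` side effects.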
This includes adding (and excluding) the right packages, setting the right
systemd boot target, fixing kernel options, changing output type to
.raw.xz, and setting the size of the image to 6GB.
This is a small helper to enable easy switching between backends
in the existing test suite, where we reference everything by name.
The current working variant is:
yum install $BACKEND
systemctl enable $BACKEND.socket
where BACKEND is either lorax-composer or osbuild-composer!
This makes sure that marshal/unmarshal are inverses of each other
for the most trivial instances of the various stage structs.
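A hedged sketch of such a round-trip test; the stage struct and its fields
are made up for illustration:

    package stages_test

    import (
        "encoding/json"
        "reflect"
        "testing"
    )

    // FSTabStage is an illustrative stand-in for one of the stage structs.
    type FSTabStage struct {
        UUID string `json:"uuid"`
        Path string `json:"path"`
    }

    func TestFSTabStageRoundTrip(t *testing.T) {
        want := FSTabStage{UUID: "76a22bf4-f153-4541-b6c7-0332c0dfaeac", Path: "/"}

        data, err := json.Marshal(want)
        if err != nil {
            t.Fatalf("marshal: %v", err)
        }

        var got FSTabStage
        if err := json.Unmarshal(data, &got); err != nil {
            t.Fatalf("unmarshal: %v", err)
        }

        if !reflect.DeepEqual(got, want) {
            t.Errorf("round trip mismatch: got %+v, want %+v", got, want)
        }
    }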
Signed-off-by: Tom Gundersen <teg@jklm.no>
Return the error code of the osbuild run, and an array of errors,
one for each target provided. If a target fails, all other targets
are still attempted.
If either osbuild or one of the targets returns an error, the worker
notifies osbuild-composer that the job failed.
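Roughly the shape of the result, sketched with made-up types rather than
the actual job structs:

    package main

    import (
        "errors"
        "fmt"
    )

    type Target struct{ Name string }

    type JobResult struct {
        OSBuildError error   // non-nil if the osbuild run itself failed
        TargetErrors []error // one entry per target, nil on success
    }

    func uploadTo(t Target) error {
        if t.Name == "broken" {
            return errors.New("upload failed")
        }
        return nil
    }

    func runJob(targets []Target) JobResult {
        result := JobResult{TargetErrors: make([]error, len(targets))}
        // ... run osbuild and record result.OSBuildError here ...
        for i, t := range targets {
            // A failing target does not stop the others from being attempted.
            result.TargetErrors[i] = uploadTo(t)
        }
        return result
    }

    // Failed reports whether the worker should mark the job as failed.
    func (r JobResult) Failed() bool {
        if r.OSBuildError != nil {
            return true
        }
        for _, err := range r.TargetErrors {
            if err != nil {
                return true
            }
        }
        return false
    }

    func main() {
        r := runJob([]Target{{"aws"}, {"broken"}, {"local"}})
        fmt.Println("job failed:", r.Failed())
    }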
Signed-off-by: Tom Gundersen <teg@jklm.no>
Prior to this commit there wasn't an easy way to populate the store. The only
way was to call the weldr API or store methods. This design made testing of
various edge cases quite hard.
This commit adds store fixtures - an easy way to define the store state
before each test case.
In addition, the fixtures were refactored so that new instances are created
prior to each test. Before this change the tests were in some cases dependent
on each other.
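A minimal sketch of the fixture idea, with illustrative types; every test
constructs its own store, so no state leaks between test cases:

    package store_test

    import "testing"

    type Compose struct{ ID string }

    type Store struct{ Composes map[string]Compose }

    // FixtureBase is a hypothetical fixture: a store pre-populated with a
    // single compose, ready before the test starts.
    func FixtureBase() *Store {
        return &Store{
            Composes: map[string]Compose{
                "30000000-0000-0000-0000-000000000000": {ID: "30000000-0000-0000-0000-000000000000"},
            },
        }
    }

    func TestListComposes(t *testing.T) {
        s := FixtureBase() // fresh instance for this test only
        if len(s.Composes) != 1 {
            t.Errorf("expected 1 compose, got %d", len(s.Composes))
        }
    }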
The helper functions in both api packages were more or less the same. However,
over time they have been slowly diverging. This commit extracts the helpers
into one common package to make the tests more maintainable and
to deduplicate the code.
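A sketch of the kind of helper that gets extracted; the name and signature
are illustrative:

    package testhelpers

    import (
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // TestRoute sends a request to the given handler and checks the status
    // code, so both api packages can share one implementation.
    func TestRoute(t *testing.T, handler http.Handler, method, path, body string, wantStatus int) string {
        t.Helper()

        req := httptest.NewRequest(method, path, strings.NewReader(body))
        resp := httptest.NewRecorder()
        handler.ServeHTTP(resp, req)

        if resp.Code != wantStatus {
            t.Errorf("%s %s: got status %d, want %d", method, path, resp.Code, wantStatus)
        }
        return resp.Body.String()
    }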
This takes a different approach to outputs and customizations, which is
much shorter and duplicates less code.
This uses links to internal repositories for now, because 8.2 hasn't
been released yet.
Add a `distro` flag to `osbuild-pipeline`.
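For illustration only, roughly how such a flag is wired up with the standard
flag package (the default value and help text are made up):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        distroName := flag.String("distro", "fedora-30", "distro to generate the pipeline for")
        flag.Parse()
        fmt.Println("generating pipeline for", *distroName)
    }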
We should always specify a checksum when describing a repository to pull
content from. For now, fedora-30 duplicates the checksum. The idea is
that we can have it in one place at some point.
osbuild has a concept of runners now: scripts that set up a build
environment. Update the osbuild submodule to latest master, change
`Pipeline` to the new buildroot description format, and use the
`org.osbuild.fedora30` runner from the fedora30 distro.
This slightly changes the customizations logic. We now make sure
that each stage is appended exactly once.
customizations.go is now responsible only for the things that are
completely generic, and not per-output-type. helpers.go contains more
high-level helpers that combine customizations and per-output-type
defaults.
This does not change the behaviour, though some pipelines are slightly
reordered to make them consistent.
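A loose sketch of the "appended exactly once" invariant in isolation; this
is illustrative only, not the actual customizations code:

    package main

    import "fmt"

    type Stage struct {
        Name string
    }

    type Pipeline struct {
        Stages []*Stage
    }

    // addStage appends a stage only if one with the same name is not already
    // present, so a stage cannot end up in the pipeline twice.
    func (p *Pipeline) addStage(s *Stage) {
        for _, existing := range p.Stages {
            if existing.Name == s.Name {
                return
            }
        }
        p.Stages = append(p.Stages, s)
    }

    func main() {
        p := &Pipeline{}
        p.addStage(&Stage{Name: "org.osbuild.users"})
        p.addStage(&Stage{Name: "org.osbuild.users"}) // second call is a no-op
        fmt.Println(len(p.Stages))                    // 1
    }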
Signed-off-by: Tom Gundersen <teg@jklm.no>
This command-line tool uploads a file to S3, as a proof of concept.
All options are mandatory. Credentials are only read from the
command line and not from the environment or configuration files.
The next step is to add support for importing from S3 to EC2.
Currently, the images we produce cannot be imported as-is, so this
requires more research.
To try this out: create an S3 bucket, get your credentials and
call the tool, passing any value as `key`. Note that if the key
already exists, it will be overwritten.
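Roughly what the proof of concept amounts to, sketched with the AWS SDK for
Go; the flag names are illustrative:

    package main

    import (
        "flag"
        "log"
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        accessKey := flag.String("access-key-id", "", "AWS access key ID")
        secretKey := flag.String("secret-access-key", "", "AWS secret access key")
        region := flag.String("region", "", "AWS region of the bucket")
        bucket := flag.String("bucket", "", "target S3 bucket")
        key := flag.String("key", "", "object key; an existing key is overwritten")
        filename := flag.String("file", "", "file to upload")
        flag.Parse()

        f, err := os.Open(*filename)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Credentials come only from the command line, never from the
        // environment or configuration files.
        sess, err := session.NewSession(&aws.Config{
            Region:      aws.String(*region),
            Credentials: credentials.NewStaticCredentials(*accessKey, *secretKey, ""),
        })
        if err != nil {
            log.Fatal(err)
        }

        uploader := s3manager.NewUploader(sess)
        _, err = uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String(*bucket),
            Key:    aws.String(*key),
            Body:   f,
        })
        if err != nil {
            log.Fatal(err)
        }
    }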
Signed-off-by: Tom Gundersen <teg@jklm.no>
It uses the Azure SDK to connect to Azure storage, creates a container there
and uploads the image. Unfortunately the page blob API does not include
any thread pool for uploads, so I implemented one myself. The
performance can be tweaked using the upload chunk size and the number of
parallel threads.
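A rough sketch of the worker-pool idea; uploadChunk stands in for the real
page-blob call, and the chunk size and worker count are the tuning knobs
mentioned above:

    package main

    import (
        "fmt"
        "sync"
    )

    const (
        chunkSize = 4 * 1024 * 1024 // bytes per upload call, tunable
        workers   = 8               // parallel uploads, tunable
    )

    type chunk struct {
        offset int64
        data   []byte
    }

    // uploadChunk is a placeholder for the real page-blob upload call.
    func uploadChunk(c chunk) error {
        fmt.Printf("uploading %d bytes at offset %d\n", len(c.data), c.offset)
        return nil
    }

    func uploadImage(image []byte) {
        chunks := make(chan chunk)

        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for c := range chunks {
                    if err := uploadChunk(c); err != nil {
                        fmt.Println("chunk failed:", err)
                    }
                }
            }()
        }

        for offset := 0; offset < len(image); offset += chunkSize {
            end := offset + chunkSize
            if end > len(image) {
                end = len(image)
            }
            chunks <- chunk{offset: int64(offset), data: image[offset:end]}
        }
        close(chunks)
        wg.Wait()
    }

    func main() {
        uploadImage(make([]byte, 10*1024*1024))
    }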
The package is prepared to be refactored into a common module within the
internal package as soon as we agree on how these common packages for
image upload should look.
Add the azure-blob-storage RPM package as a dependency
It didn't work for me using the `golang(package)` syntax. Using the
package name explicitly works.