Using a metalink, dnf resolves to a specific mirror at runtime and
downloads each rpm from that repository.
We want to move to using the org.osbuild.files source, which means
that we must save the url to each rpm in the source definition, which
will be determined by which mirror is used to generate the config.
If we use metalinks to generate the source configuration, the mirror
used will be arbitrary. Instead, we want to pick the best mirror
explicitly, ideally in a way that is independent of the location
depsolving happens in (which will be different from the location
the rpms are downloaded to).
We can choose explicitly by passing a baseurl rather than a metalink
to dnf, so move in that direction now by replacing all metalinks
with baseurls in our dnf configuration.
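To illustrate the difference, a repo entry in our dnf configuration changes roughly along these lines (field names and URLs are illustrative, not the exact schema):
```
# Illustrative only: a repository selected via a metalink (dnf picks an
# arbitrary mirror at runtime) versus one pinned to an explicit baseurl.
repo_with_metalink = {
    "metalink": "https://mirrors.fedoraproject.org/metalink?repo=fedora-30&arch=x86_64",
}

repo_with_baseurl = {
    "baseurl": "https://example-mirror.org/fedora/releases/30/Everything/x86_64/os/",
}
```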
Signed-off-by: Tom Gundersen <teg@jklm.no>
We now support sources and pipelines being passed to osbuild as one.
This will make the transformation from dnf to rpm stage simpler, as
the source object will then be different for each stage, so having
a shared one as now would be cumbersome.
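For reference, the combined input has roughly this shape (a sketch only, not the exact schema):
```
# Sketch: the pipeline and its sources travel together as one object.
manifest = {
    "pipeline": {
        "stages": [],
    },
    "sources": {
        "org.osbuild.files": {"urls": {}},
    },
}
```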
Signed-off-by: Tom Gundersen <teg@jklm.no>
As long as this matches the build environment, this does not make
a difference, but let us not depend on this.
This will be useful when automatically transforming dnf to rpm
pipelines, as the module_platform_id is needed as input to
osbuild-composer's dnf-json tool.
Performed using this script:
```
cat $1 | jq '(.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' | sponge $1
cat $1 | jq '(.build.pipeline.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' | sponge $1
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
This adds two more os-release tests. One contains an empty os-release
file. We verify it is correctly parsed and ends up with the default
value.
The second one is the official Arch Linux os-release file. We verify
that we correctly end up with the rolling-release name.
Rename the function to `describe_os()`. We do no actual detection, nor
verification here. That is, the return value of this function is in no
way guaranteed to be a valid runner, so error handling needs to be done
in the caller. Make this clear by renaming the function.
Note: Currently, in case no runner exists for the OS, we end up with:
execv(...) failed: No such file or directory
This needs to be fixed in the future.
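For reference, what `describe_os()` amounts to is roughly the following sketch, assuming it simply combines the ID and VERSION_ID fields from os-release and makes no attempt to validate the result:
```
def describe_os(path="/etc/os-release"):
    # Sketch only: build a name like "fedora30" or "arch"; an empty file
    # falls back to the defaults. No check is made that a runner of that
    # name actually exists.
    info = {"ID": "linux", "VERSION_ID": ""}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    info[key] = value.strip('"')
    except FileNotFoundError:
        pass
    return info["ID"] + info["VERSION_ID"].replace(".", "")
```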
Reading from an `Object` via `read` already uses a context manager
to manage the read-only bind mount and also maintain a count of
currently active readers. With this, an attempt to start a new
`write` operation while readers are active can be detected and
an exception is thrown. Since `write` did not introduce a context,
the inverted situation, i.e. reads while a write is ongoing, could
not be detected.
This commit therefore introduces a context also for `.write` so
that we can enforce the policy to have either many readers but no
writers, or just one writer and no readers.
A bind mount is also used for write (in read-write mode) to hide
the internal path of the tree.
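The policy is the classic "many readers or one writer" rule. A minimal sketch of the idea, without the actual bind-mount handling:
```
import contextlib

class Object:
    # Sketch only, not the real objectstore.Object.
    def __init__(self):
        self._readers = 0
        self._writer = False

    @contextlib.contextmanager
    def read(self):
        if self._writer:
            raise ValueError("read while a write is ongoing")
        self._readers += 1
        try:
            yield "/path/to/read-only/bind-mount"   # placeholder
        finally:
            self._readers -= 1

    @contextlib.contextmanager
    def write(self):
        if self._readers:
            raise ValueError("write while readers are active")
        self._writer = True
        try:
            yield "/path/to/writable/tree"          # placeholder
        finally:
            self._writer = False
```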
Verify that `Object` can have multiple readers, but that as long as at
least one reader is active, no one can write to it. Also check
that after `Object` has left the context it is not writable anymore.
Verify the copy on write semantics of `objectstore.Object`, i.e.
content will only be copied at the moment a client wants to write
to `Object`. This also checks that `Object.base` works.
Modify the CI to execute the unit tests in a privileged container
because `Object.read()` works internally by bind mounting a path.
The mount operation needs at least CAP_SYS_ADMIN, and overriding
the file permissions needs CAP_DAC_OVERRIDE.
Do not automatically commit the last stage of the pipeline to the
store. The last stage is most likely not what should be cached,
because it will contain all the individual customization and thus
be very likely different for different users. Instead, the dnf or
rpm stages have a higher chance of being the same and thus are
better candidates for caching.
Technically this is done via two big changes that build
upon new features introduced in the previous commits, most notably
the copy on write semantics of Object and that input/output is
now done via `objectstore.Object` instead of plain paths. The
first of the two big changes is to create one new `Object` at
the beginning of `pipeline.run` and use it, in write mode via
`Object.write`, across invocations of `stage.run`, with
checkpoints being created after each stage on demand.
The very same `Object` is then used in read mode via `Object.read`
as the input tree for the Assembler. After the assembler is done
the resulting image/tree is manually committed to the store.
The other big change is to remove the `ObjectStore.commit` call
from the `ObjectStore.new` method and thus the automatic commit
after the last stage is gone.
NB: since the build tree is being retrieved in `get_buildtree`
from the store, a checkpoint for the last stage of the build
pipeline is forced for now. Future commits will refactor this
and do away with that forced commit as well.
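In rough outline, the new flow in `pipeline.run` looks like the following sketch (names and signatures are approximations, not the real code):
```
def run_pipeline(stages, assembler, store, checkpoints):
    obj = store.new()                      # one Object for the whole run
    for stage in stages:
        stage.run(obj.write())             # write mode across stage runs
        if stage.id in checkpoints:        # checkpoints on demand, not automatic
            store.commit(obj, stage.id)
    with obj.read() as tree:               # the same Object, now read-only
        result = assembler.run(tree)
    store.commit(obj, assembler.id)        # manually commit the final tree
    return result
```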
Change osbuildtest.TestCase to always create a checkpoint at
the final tree (the last stage of the pipeline), since tests
need it to check the tree contents.
As a result of the previous commits that implement copy on write
semantics, `commit` can now be used to create snapshots. Whenever
an Object is committed, its tree is moved to the store and it is
being reset, i.e. a new clean workdir is created and the old one
discarded. The moved tree is then set as the base of the reset
Object. On the next call to `write` the moved tree will be copied
over and forms the basis of the Object again. Should nobody want
to write to Object after the snapshot, i.e. the `commit`, no copy
will be made.
NB: snapshots/commits will now act as synchronization points:
if an object with the same treesum, i.e. the very same content,
already exists, the move (i.e. `store_tree`) will gracefully fail
and the existing content will be set as the base for the Object.
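Conceptually, a commit now behaves like this sketch (the helper names on store and obj are illustrative):
```
def commit(store, obj, object_id):
    treesum = obj.treesum()                # content hash of the current tree
    store.store_tree(treesum, obj.path)    # gracefully a no-op if it already exists
    store.reference(object_id, treesum)    # record object_id -> content
    obj.reset(base=treesum)                # fresh workdir; the stored tree
                                           # becomes the new base of the Object
```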
Since Object knows its base now, the initialization of the tree
with the content of its base can be delayed until the moment
someone wants to actually modify the tree, thus implementing
copy on write semantics. For this a new `write` method is added
that will initialize the base and return the writable tree. It
should be used instead of `path` whenever a client wants to
write to the tree of the Object.
Adapt the pipeline and the tests to use the new `write` method
in all the appropriate places.
NB: since the intention cannot be inferred when using `path`
directly, the Object is still being initialized there.
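The copy-on-write behaviour boils down to something like this sketch (attribute and helper names are illustrative):
```
import shutil
import tempfile

class Object:
    # Sketch of the copy-on-write idea only, not the real implementation.
    def __init__(self, store, base=None):
        self.store = store                 # assumed to resolve treesums to paths
        self.base = base                   # treesum of the tree to start from
        self.path = tempfile.mkdtemp()     # the (still empty) writable tree
        self._initialized = False

    def write(self):
        # The base is only copied in when a client actually wants to
        # modify the tree; pure readers of the base never pay for a copy.
        if not self._initialized and self.base:
            shutil.copytree(self.store.resolve(self.base), self.path,
                            dirs_exist_ok=True)
        self._initialized = True
        return self.path
```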
Refactor the `ObjectStore.snapshot` method to take an `Object` not
a plain filesystem tree, so the latter is more encapsulated from
the ObjectStore user (e.g. the pipeline) and prepares a unified
code-path for `snapshot` and `commit` in the future.
Instead of just returning the path of the temporary object that is
created in `.new()`, the actual instance of the new `Object` is
returned, which can then provide a richer interface for clients
than a plain directory path.
Make the sources options a static property of the pipeline, in
particular of each stage, rather than being passed in on `run()`.
This more closely matches the intended semantics of sources and
pipeline having similar lifetimes and being fairly coupled together.
The difference between the pipeline and the sources is that the
sources do not contribute to identifying the pipeline (they are not
part of the hash for the pipeline id), and they could be swapped
out without changing the output image (as long as they are valid).
However, a pipeline without a sources object would not be useful,
and typically the pipeline and the sources are generated, passed
around and used together.
This is different from the build environment and the secrets object,
which both are specific to either the host or the caller, unlike
the pipeline which should be universal.
This changes the `load()` function to take a `manifest`, which is
a map containing both the pipeline and the sources.
Note that the semantics of the build-env parameter remains unchanged:
It shares the sources with the rest of the pipeline. We may want to
reconsider this in future commits, as the build-env is specific to
the host, whereas the regular pipeline is not.
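In shape, the entry point changes roughly like this (a sketch; what happens with the two parts afterwards is omitted):
```
def load(manifest):
    # Sketch: one map carries both the pipeline and the sources.
    pipeline = manifest.get("pipeline", {})
    sources = manifest.get("sources", {})
    return pipeline, sources
```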
Signed-off-by: Tom Gundersen <teg@jklm.no>
Drop the rpm downloading and instead use the files source. This gives
us caching for free, and is the last missing step before we can
deprecate the dnf stage.
The main benefit of the rpm over the dnf stage is that we pin the package
versions rather than the repo metadata version. This will allow us to
support continuously changing repositories as individual packages are much
less likely to change than the repos themselves, and old packages are meant
to stay around for some time, unlike the repo metadata which is instantly
swapped out.
Depsolving is also slow on the first run, which we were always hitting as
the depsolving was always happening in a fresh container.
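Put together, a manifest using the rpm stage with the files source looks roughly like this (checksums and URLs are shortened and purely illustrative):
```
# Sketch: packages are pinned by content hash; the files source maps each
# hash to a URL it can be fetched from.
manifest = {
    "pipeline": {
        "stages": [
            {
                "name": "org.osbuild.rpm",
                "options": {
                    "packages": ["sha256:aaaa"],   # hash of an rpm, not a version
                },
            },
        ],
    },
    "sources": {
        "org.osbuild.files": {
            "urls": {
                "sha256:aaaa": "https://example-mirror.org/Packages/b/bash-5.0.7-1.fc30.x86_64.rpm",
            },
        },
    },
}
```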
Based on a patch by Lars Karlitski.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This source adds support for downloaded files. The files are
indexed by their content hash, and the only option is their URL.
The main use case for this will be downloading rpms, allowing depsolving
to be done outside of osbuild, network access to be restricted, and
downloaded rpms to be reused between runs.
Each source is now passed two additional arguments, a cache directory
and an output directory. Both are in the source's namespace, and
the source is responsible for managing them. Each directory may
contain contents from previous runs, but neither is ever guaranteed
to do so.
Downloaded contents may be saved to the cache and reused between
runs, and the requested content should be written to the output dir.
If secrets are used, the source must only ever write contents to
the output that correspond to the available secrets (rather than
contents from the cache from previous runs).
Each stage is passed an additional argument, a sources directory.
The directory is read-only, and contains a subdirectory named after
each used source, which will contain the requested contents when
the `Get()` call returns (if the source uses this functionality).
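A minimal sketch of a source honoring this contract (the helper and the lack of verification are purely illustrative):
```
import os
import shutil
import urllib.request

def fetch(checksum, url, cache, output):
    # Reuse a previous download from the cache when possible; otherwise
    # download it, then place the requested file into the output directory.
    cached = os.path.join(cache, checksum)
    if not os.path.exists(cached):
        urllib.request.urlretrieve(url, cached)   # illustrative; no checksum verification shown
    shutil.copy(cached, os.path.join(output, checksum))
```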
Based on a patch by Lars Karlitski.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Check the very basic operations of the object store, i.e. creating
multiple trees with different object_ids works and results in the
correct number of objects and references.
Add a stage test to check that the new kopts stage is creating the
target file /etc/kernel/cmdline with the right content. Since tree
diff currently seems to lack support for content hashing of new files,
we work around this by first creating an empty /etc/kernel/cmdline
file and then getting a content diff with the desired options set.
The zipl stage is a fairly simple stage that just creates a file
in /etc called zipl.conf with a single configurable option, which
is called `timeout`. Check the file gets properly created with
the desired hash and verify that setting the timeout works.
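For reference, both the stage input and the expected output are tiny; roughly (values illustrative):
```
# Sketch of the test fixture: the stage options and the file they produce.
stage = {"name": "org.osbuild.zipl", "options": {"timeout": 1}}

expected_zipl_conf = """\
[defaultboot]
timeout=1
"""
```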
Explain the concept and reason behind the grub2 core as well as the
details behind the selection of the core modules that get included.
Also elaborate a bit on the MBR gap. For more details about this see
https://en.wikipedia.org/wiki/GNU_GRUB#Version_2_(GRUB_2)
NB: This commit also changes the order of the grub modules, which in
turn changes the layout of the core.img and thus the hash value used
in the test; adapt those values to reflect the changed core.img.
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.
In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.
Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.
The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.
Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.
This API is only available to stages right now.
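The request and reply have roughly this shape (a sketch of the idea; the actual transport and field names may differ):
```
# Sketch: a stage asks for source content by checksum and gets back a way
# to fetch it, here repository configuration from the dnf source.
request = {
    "source": "org.osbuild.dnf",
    "checksums": ["sha256:aaaa"],          # metadata checksums to resolve
}

reply = {
    "sha256:aaaa": {
        "baseurl": "https://example-mirror.org/fedora/releases/30/Everything/x86_64/os/",
    },
}
```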
The nbd device might not be ready after `qemu-nbd --connect` returns,
leading to access errors such as this further down:
sfdisk: cannot open /dev/nbd12: Inappropriate ioctl for device
Fix this by polling the device with `nbd-client --check <device>`.
Also, the nbd device might not be released after `qemu-nbd --disconnect`
returns. Fix this by using `nbd-client --disconnect`, which waits.
This introduces a new test dependency on nbd-client (in the nbd package
on Fedora).
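In the test, the readiness check amounts to a small polling loop; a sketch:
```
import subprocess
import time

def wait_for_nbd(device, timeout=10.0):
    # nbd-client --check returns non-zero until the device is connected.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if subprocess.run(["nbd-client", "--check", device],
                          stdout=subprocess.DEVNULL).returncode == 0:
            return
        time.sleep(0.1)
    raise TimeoutError(f"{device} did not become ready")
```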
A pipeline run only returned logs in the `StageFailed` and
`AssemblerFailed` exceptions. Remove those and always return structured
data instead.
It only returns data for stages that actually ran (i.e., didn't come
from the cache). This is similar to the output in interactive mode.
Also change osbuildtest to be able to deal with output that is larger
than the pipe buffer by using subprocess.communicate().
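The pipe-buffer issue is the usual `subprocess` pitfall: reading output only after waiting can block once the child fills the pipe. `communicate()` drains stdout and stderr while waiting:
```
import subprocess

# The command here is only a stand-in for running osbuild from the tests.
proc = subprocess.Popen(["echo", "hello"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()   # no deadlock, however large the output
```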
Commit 283281f broke compression by appending the argument last to the
tar command line. It needs to appear before the file.
Fix that and add a test.
[teg: add minor fix]
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
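A build object then looks roughly like this (the runner name is just an example):
```
# Sketch: the build key is no longer a bare pipeline.
build = {
    "runner": "org.osbuild.fedora30",   # a program in the runners/ subdirectory
    "pipeline": {
        "stages": [],                   # the build pipeline, as before
    },
}
```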
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
This commit adds semi-structured documentation to all osbuild stages and
assemblers. The variables added work like this:
* STAGE_DESC: Short description of the stage.
* STAGE_INFO: Longer documentation of the stage, including expected
behavior, required binaries, etc.
* STAGE_OPTS: A JSON Schema describing the stage's expected/allowed
options. (see https://json-schema.org/ for details)
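A stage carrying this metadata might start like the following sketch (the description and schema are made up for illustration):
```
#!/usr/bin/python3

STAGE_DESC = "Configure the kernel command line"

STAGE_INFO = """
Writes the given options to /etc/kernel/cmdline.
Requires nothing beyond a writable tree.
"""

STAGE_OPTS = """
"properties": {
    "root_fs_uuid": { "type": "string" },
    "kernel_opts": { "type": "string" }
},
"required": ["root_fs_uuid"]
"""
```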
It also has a little unittest to check stageinfo - specifically:
1. All (executable) stages in stages/* and assemblers/ must define strings named
STAGE_DESC, STAGE_INFO, and STAGE_OPTS
2. The contents of STAGE_OPTS must be valid JSON (if you put '{' '}'
around it)
3. STAGE_OPTS, if non-empty, should have a "properties" object
4. if STAGE_OPTS lists "required" properties, those need to be present
in the "properties" object.
The test is *not* included in .travis.yml because I'm not sure we want
to fail the build for this, but it's still helpful as a lint-style
check.
lorax-composer supports modifying timeservers, this stage implements it.
I was unsure whether to name this stage timeservers or chrony, but
I've decided to go with chrony. If some day in the future Fedora/RHEL
changes the ntp client, we can easily introduce a new stage named after
the new ntp client. Additionally, this solution enables us to create
systemd-timesyncd stage, which can change timeservers when chrony is not
installed (in that case systemd-timesyncd takes over the ntp
synchronization).
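Its use in a pipeline is then as small as (timeserver values illustrative):
```
# Sketch of a stage entry using the new chrony stage.
stage = {
    "name": "org.osbuild.chrony",
    "options": {
        "timeservers": ["0.fedora.pool.ntp.org", "1.fedora.pool.ntp.org"],
    },
}
```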
The tests from the integration_tests directory were superseded
by the new stage tests.
The Vagrant integration seems not to have been working since
ea68bb0c26, as the test-setup.py it relied on was
dropped there. Remove it for now. If we want
that back, we should consider that in a separate PR.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Downloading the gpg key is fragile and kept causing our tests to fail.
In general, we want to limit the network access, so let's just embed
the gpg keys directly in the pipeline.
Fixes #133.
Signed-off-by: Tom Gundersen <teg@jklm.no>
When the test runner receives SIGINT, osbuild's mounts stay around.
osbuild handles SIGINT correctly, but it doesn't have time to do so
before being killed because its parent went away.
Fix this by waiting on it explicitly in the test runner.
Fixes#119
Similar to the existing test, but uses qemu-nbd to mount the generated
image.
Using unittest.TestCase.subTest() for now, which means that the tests
aren't very independent. I think this is fine in this case, because
we're testing images independently from each other, reusing the base
tree in the store.
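The structure is the standard `unittest` idiom (the formats and the check are placeholders):
```
import unittest

class TestImages(unittest.TestCase):
    def test_boot(self):
        # Each format is reported as its own subtest, while the expensive
        # base tree is only built (and cached) once.
        for fmt in ["qcow2", "raw"]:
            with self.subTest(image_format=fmt):
                self.assertTrue(fmt)   # placeholder for the real boot check
```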