Instead of storing the (tree) data directly at the root of the
object specific directory, move it into a `data/tree` subfolder.
This prepares for two things:
1) the `tree` folder will allow us to add another folder next to
it to store metadata.
2) storing both `tree` and the future metadata folder in a
common subfolder prepares for the future integration
with the new caching layer (`FsCache`), as sketched below.
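A rough sketch of the resulting per-object layout (the entry name
is illustrative):

    uuid-1234/          (object-specific directory)
        data/
            tree/       (the tree data, moved from the root)
            ...         (future sibling folder for metadata)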
This commit introduces a new utility module called `fscache`. It
implements a cache that stores data on the file system. It
supports parallel access and protects data with file-system locks. It
provides three basic functions:
FsCache.load("<name>"):
Loads the cache entry with the specified name, acquires a
read-lock and yields control to the caller to use the entry.
Once control returns, the entry is unlocked again.
If the entry cannot be found, a cache miss is signalled via
FsCache.MissError.
FsCache.store("<name>"):
Creates a new anonymous cache entry and yields control to the
caller to fill it in. Once control returns, the entry is renamed
to the specified name, thus committing it to the object store.
FsCache.stage():
Creates a new anonymous staging entry and yields control to the
caller. Once control returns, the entry is completely
discarded.
This is primarily used to create a working directory for osbuild
pipeline operations. The entries are volatile and automatic
cleanup is provided.
To commit a staging entry, you would eventually use
FsCache.store() and rename the entire data directory into the
non-volatile entry. If the staging area and store are on
different file-systems, or if the data is to be retained for
further operations, then the data directory needs to be copied.
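A minimal, self-contained sketch of these semantics in Python
(assuming the yield-based entries above are exposed as context
managers; the real module additionally takes the file-system locks
described above and handles races):

    import contextlib
    import os
    import shutil
    import tempfile

    class FsCache:
        class MissError(Exception):
            pass

        def __init__(self, path):
            self._path = path
            os.makedirs(path, exist_ok=True)

        @contextlib.contextmanager
        def load(self, name):
            entry = os.path.join(self._path, name)
            if not os.path.isdir(entry):
                raise FsCache.MissError(name)
            yield entry  # real code: read-lock held while yielded

        @contextlib.contextmanager
        def store(self, name):
            tmp = tempfile.mkdtemp(dir=self._path)
            try:
                yield tmp
                # commit the anonymous entry by renaming it
                os.rename(tmp, os.path.join(self._path, name))
            except BaseException:
                shutil.rmtree(tmp)
                raise

        @contextlib.contextmanager
        def stage(self):
            tmp = tempfile.mkdtemp(dir=self._path)
            try:
                yield tmp  # volatile working directory
            finally:
                shutil.rmtree(tmp)  # always discarded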
Additionally, the cache maintains a size limit and discards any entries
if the limit is exceeded. Future extensions will implement cache pruning
if a configured watermark is reached, based on
least-recently-used logic.
Many more cache extensions are possible. This module introduces a first
draft of the most basic cache and hopefully lays the groundwork for
a new cache infrastructure.
Lastly, note that this only introduces the utility helper. Further work
is required to hook it up with osbuild/objectstore.py.
Add a new utility that wraps ctypes.CDLL() for the self-embedded
libc.so. Initially, it only exposes renameat2(2), but more can be added
when needed in the future.
The Libc class is very similar to the existing LibCap class, with
similar instantiation logic and singleton access.
In the future, the Libc class will allow access to other system calls
and libc.so functionality, when needed.
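A rough sketch of the shape of this wrapper (the exact loading and
error handling in the real class may differ; glibc exports
renameat2() as a symbol since version 2.28):

    import ctypes

    class Libc:
        _default = None

        def __init__(self, lib):
            # renameat2(2): renameat(2) plus flags such as
            # RENAME_NOREPLACE or RENAME_EXCHANGE
            lib.renameat2.argtypes = (
                ctypes.c_int, ctypes.c_char_p,
                ctypes.c_int, ctypes.c_char_p,
                ctypes.c_uint,
            )
            lib.renameat2.restype = ctypes.c_int
            self._lib = lib

        @classmethod
        def default(cls):
            # singleton access, mirroring the LibCap class;
            # CDLL(None) resolves symbols of the running process,
            # including its embedded libc
            if cls._default is None:
                cls._default = cls(ctypes.CDLL(None, use_errno=True))
            return cls._default

        def renameat2(self, olddirfd, oldpath, newdirfd, newpath, flags=0):
            # paths are passed as `bytes`
            r = self._lib.renameat2(olddirfd, oldpath,
                                    newdirfd, newpath, flags)
            if r < 0:
                raise OSError(ctypes.get_errno(), "renameat2() failed")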
A new helper for the util.linux module which exposes the Linux boot-id.
For security reasons, the boot-id is never exposed directly, but
instead only through an application-id combined with the boot-id
via HMAC-SHA256.
Note that a raw kernel boot-id is always considered confidential, since
we never want an outside entity to deduce any information when they see
a boot-id used in protocol A and one in protocol B. It should not be
possible to tell whether both are from the same user and boot or not.
Hence, both should use their own boot-id namespace.
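A hypothetical sketch of the derivation (the helper name and exact
encoding are illustrative, not the final API):

    import hashlib
    import hmac

    def proc_boot_id(appid: str) -> bytes:
        # the raw boot-id is confidential and must never leave
        # this function
        with open("/proc/sys/kernel/random/boot_id", encoding="ascii") as f:
            boot_id = f.read().strip()
        # scope the boot-id to the application via HMAC-SHA256, so
        # IDs derived for different applications cannot be correlated
        return hmac.new(boot_id.encode(), appid.encode(),
                        hashlib.sha256).digest()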
This adds a new accessor-function for the file-locking operations
through `fcntl(2)`. In particular, it adds the new function
`fcntl_flock()`, which wraps the `F_OFD_SETLK` command on `fcntl(2)`.
There were a few design considerations:
* The name `fcntl_flock` comes from the `struct flock` structure that
is the argument type of all file-locking syscalls. Furthermore, it
mirrors what the `fcntl` module already provides as a wrapper for
the classic file-locking syscall.
* The wrapper only exposes very limited access to the file-locking
commands. There already are `fcntl.flock()` and `fcntl.lockf()`
in the standard library, which expose the classic file-locks.
However, those are implemented in C, which gives much more freedom
and access to architecture-dependent types and functions.
We do not have that freedom (see the in-code comments for the
things to consider when exposing more fcntl-locking features).
Hence, this only exposes a very limited set of functionality,
exactly the parts we need in the objectstore rework.
* We cannot use `fcntl.lockf()` from the standard library,
because we really want the `OFD` version. OFD stands for
`open-file-description`. These locks were introduced into the
Linux kernel in 2014 and mirror what the non-OFD locks do, but bind the
locks to the file-description, rather than to a process. Therefore,
closing a file-description will release all held locks on that
file-description.
This is so much more convenient to work with, and much less
error-prone than the old-style locks. Hence, we really want these,
even if it means that we have to introduce this new helper.
* There is an open bug to add this to the python standard library:
https://bugs.python.org/issue22367
This is unresolved since 2014.
The implementation of the `fcntl_flock()` helper is straightforward and
should be easy to understand. However, the reasoning behind the design
decisions is not. Hence, the code contains a rather elaborate comment
explaining why it is done this way.
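A short usage sketch (the helper's exact signature and module path
are assumptions here):

    import fcntl
    import os

    from osbuild.util.linux import fcntl_flock  # path assumed

    fd = os.open("/tmp/example.lock", os.O_RDWR | os.O_CREAT, 0o600)
    try:
        # take an exclusive OFD write-lock on this file-description
        fcntl_flock(fd, fcntl.F_WRLCK)
        ...  # critical section
    finally:
        # closing the file-description releases all its OFD locks
        os.close(fd)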
Lastly, this adds a small, but I think sufficient, unit-test suite
which makes sure the API works as expected. It does not test the full
functionality of the underlying locking features, but that is not the
job of a wrapping layer, I think. More tests can always be added.
This modification will allow a user to ask to mount the system as
read-only, for instance. This would be super useful for image-info,
which is progressively using more of osbuild's internals to mount
partitions.
The `Object.{read,write}` methods were introduced to implement
copy on write support. Calling `write` would trigger the copy,
if the object had a `base`. Additionally, a level of indirection
was introduced via bind mounts, which made it possible to hide the
actual path of the object in the store and to make sure that `read`
really returned a read-only path.
Support for copy-on-write was recently removed[1], and with it
the need for the `read` and `write` methods. We lose the benefits
of the indirection, but they are not really needed: the path to
the object is not really hidden since one can always use the
`resolve_ref` method to obtain the actual store object path.
The read only property of build trees is ensured via read only
bind mounts in the build root.
Instead of using `read` and `write`, `Object` has gained a new
`tree` property, which is the path to the object's tree. It also
implements `__fspath__`, and thus behaves like an `os.PathLike`
object that can transparently be used in many places, like
e.g. `os.path.join` or `pathlib.Path`.
[1] 5346025031
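A minimal sketch of the described `os.PathLike` behavior (the real
`Object` carries more state than shown here):

    import os

    class Object:
        def __init__(self, workdir):
            self._workdir = workdir

        @property
        def tree(self):
            # path to the object's file-system tree
            return os.path.join(self._workdir, "tree")

        def __fspath__(self):
            # makes the object os.PathLike, so it can be passed
            # to os.path.join, open, pathlib.Path, ...
            return self.tree

    obj = Object("/var/cache/osbuild/uuid-1234")
    print(os.path.join(obj, "etc"))  # .../uuid-1234/tree/etc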
When creating the JSON data, call `os.fspath` on all paths, like
`root` and `devices.tree`, to ensure they are strings; this allows
`tree` to be an object that conforms to `os.PathLike`.
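For illustration, `os.fspath` turns any `os.PathLike` into a plain
string, which the JSON encoder can serialize:

    import json
    import os
    import pathlib

    tree = pathlib.Path("/run/osbuild/tree")  # any os.PathLike
    data = json.dumps({"tree": os.fspath(tree)})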
If the object's id does not match the one supplied for the
commit, we create a clone. Otherwise we store the tree.
The code path is arranged in a way that we always go through
`Object.store_tree`, so we always call `Object.finalize` as a
preparation for the future, where we might actually do something
meaningful in the finalizer, like resetting the *times or counting
the tree size.
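A rough sketch of that arrangement (method names follow the text
above, the bodies and signatures are assumptions):

    def commit(self, obj, object_id):
        if obj.id != object_id:
            # the id does not match: commit a clone instead
            obj = obj.clone(object_id)
        # common path: store_tree() always runs the finalizer
        obj.store_tree()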
Remove copy-on-write support from `objectstore.Object`. The main
reason for introducing copy-on-write was to save an additional
copy in the non-DAG pipeline model[1]. With the introduction of
the DAG model and the explicit `--export` option, we can achieve the
same result without the complexity of copy-on-write semantics.
[1] See commit 39213b7, part of 3b7c87d5..42a365d1 changeset.
When committing an object to the store, clone it if the current
stage is not the latest stage, i.e. `todo` still has entries.
This is the second step of the removal of copy-on-write support
in `Object`.
Add a new `clone` parameter to the `commit` method on `ObjectStore`
that, when used, will clone the object to the store instead of using
the `store_tree` method, which moves the object and resets it. This
is the first step of removing copy-on-write support from `Object`.
Add a new module with utility functions to inspect PE32+ files,
mainly listing the sections and their addresses and sizes.
Include a simple test to check that we can successfully parse the
EFI stub contained in systemd (systemd-udev package).
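A rough sketch of the parsing involved, using only the standard
library (offsets follow the PE/COFF spec; the real module is more
thorough):

    import struct

    def pe_sections(path):
        with open(path, "rb") as f:
            data = f.read()
        # offset of the PE header is stored at 0x3c in the DOS header
        pe_off = struct.unpack_from("<I", data, 0x3C)[0]
        assert data[pe_off:pe_off + 4] == b"PE\0\0"
        # COFF header: NumberOfSections at +6, SizeOfOptionalHeader at +20
        nsections, = struct.unpack_from("<H", data, pe_off + 6)
        opt_size, = struct.unpack_from("<H", data, pe_off + 20)
        sec_off = pe_off + 24 + opt_size
        for i in range(nsections):
            off = sec_off + i * 40  # each section header is 40 bytes
            name = data[off:off + 8].rstrip(b"\0").decode()
            vsize, vaddr = struct.unpack_from("<II", data, off + 8)
            yield name, vaddr, vsize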
The consumer certs are used to uniquely identify a system against
Candlepin. These consumer certs can be used to identify the system when
pulling from RH-controlled ostree repositories.
Use the new `Index.detect_runner` method that will give us the best
available runner for a requested one. To do so, a new `pipeline.Runner`
class is introduced that stores the `meta.RunnerInfo` class for the
specific runner and the original name that was requested.
In the manifest loading and describing functions of the formats, use
`Index.detect_runner` to get the `RunnerInfo` for a requested runner
and then wrap it in a `pipeline.Runner` object, which is then passed
to the `Manifest.add_pipeline` method.
See also commit "meta: ability to auto-detect runner".
Adjust all tests.
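A hypothetical shape of the new wrapper class (attribute names
follow the commit text, the rest is an assumption):

    class Runner:
        def __init__(self, info, name=None):
            self.info = info                 # the meta.RunnerInfo
            self.name = name or info.name    # originally requested name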
Instead of relying on the assumption that the specific runner will
be in `/run/osbuild/lib/runners/`, we now bind-mount the runner at a
specific, well-known path and execute it from there.
The way that runners were designed is the following: For each distro
we have a specific runner. In case a new version of the distro can
use the previous runner, we just create a symlink. In case a new
distro version needs adjustments, the runner is copied and adjusted.
This is a very clean and obvious design. There is one big drawback:
For each new distribution a symlink must be created before it can be
used. For Fedora that should ideally happen when it is branched; and
this will, ipso facto, always be a symlink since at the time of the
branching the new distro is the old distro. But at this very moment
osbuild will be broken since it does not contain the new runner; the
only way to prevent this is to create the corresponding new runner
before the distro is branched, where it then must be a symlink too.
This very much suggests that, instead of the explicit symlink, which
does not add /that/ much clarity, the existing "old" runner should
just work for the new distribution. This commit implements the logic
to do just that: all existing runners are parsed into a distro and
version tuple and then, given a specific requested distro, the best
matching one is returned.
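A sketch of that matching logic (assuming runner names of the form
"org.osbuild.<distro><version>"; the helper names are illustrative):

    import re

    def parse_runner(name):
        # split trailing digits off: "...fedora38" -> ("...fedora", 38)
        m = re.fullmatch(r"(.*?)(\d*)", name)
        return m.group(1), int(m.group(2) or 0)

    def detect_runner(available, wanted):
        distro, version = parse_runner(wanted)
        best = None
        for name in available:
            d, v = parse_runner(name)
            # same distro, but not newer than the requested version
            if d == distro and v <= version:
                if best is None or v > best[1]:
                    best = (name, v)
        if best is None:
            raise ValueError(f"no suitable runner for '{wanted}'")
        return best[0]

    # detect_runner(["org.osbuild.fedora36"], "org.osbuild.fedora38")
    #   -> "org.osbuild.fedora36"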
When calculating the checksum of the stage, the mount options were
not included. This was maybe deliberate, because if the mounts of
a stage change, it is very likely that previous stages change too.
But the introduction of non-device mounts, like ostree.deployment,
has changed the situation, since the content of the tree will be
different depending on whether that mount is applied or not. And even
for the device-based mounts it will change the tree if e.g. a device
is mounted at a different path but otherwise is formatted with the
very same options. In the worst case we miss a few cache hits due to changes
in the mount setup that don't lead to tree changes, but that will
rarely happen in practice.