Add a conditional skip to tests that depend on rpm-ostree
availability but were not checking for its presence. These tests
previously failed if rpm-ostree was not available; now they are
skipped instead.
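A minimal sketch of such a guard (test name and marker placement
assumed), checking for the binary before running:

    import shutil

    import pytest

    # skip unless the rpm-ostree binary is present on the host
    @pytest.mark.skipif(not shutil.which("rpm-ostree"),
                        reason="rpm-ostree is not available")
    def test_deployment_tree():  # hypothetical test
        ...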
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Instead of using `Path.stat`, use `os.stat`, since the former only
gained the `follow_symlinks` argument in Python 3.10 but we still
need to support Python 3.6 for RHEL 7 and 8.
Additionally, reduce the precision by converting timestamps to
integers to avoid false negatives due to floating-point arithmetic.
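For illustration, a minimal sketch of that comparison (helper name
invented):

    import os

    def same_mtime(a: str, b: str) -> bool:
        # os.stat, unlike Path.stat on 3.6, accepts follow_symlinks;
        # int() drops sub-second precision to avoid float rounding
        st_a = os.stat(a, follow_symlinks=False)
        st_b = os.stat(b, follow_symlinks=False)
        return int(st_a.st_mtime) == int(st_b.st_mtime)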
The cachedir-tag specification defines how to mark directories as
cache-directories. This allows tools like `tar` to ignore those
directories if desired (e.g., see `tar --exclude-caches`). This is
very useful to avoid huge cache-directories in backups and remote
synchronizations.
The spec simply defines a file called `CACHEDIR.TAG` whose first 43
bytes must be: "Signature: 8a477f597d28d172789f06886806bc55" (which
happens to be the MD5 checksum of ".IsCacheDirectory"). Any further
content is to be ignored. Any such file marks the directory in
question as a cache-directory.
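For illustration, a minimal sketch of writing such a tag (helper
name invented):

    import os

    # the 43-byte signature mandated by the spec
    CACHEDIR_TAG = b"Signature: 8a477f597d28d172789f06886806bc55"

    def mark_cachedir(path: str) -> None:
        # any content after the signature is ignored by readers
        with open(os.path.join(path, "CACHEDIR.TAG"), "wb") as f:
            f.write(CACHEDIR_TAG)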
The cachedir-tag has been successfully deployed in tools like
`cargo` and `VLC`, and is currently being discussed for
implementation in Firefox. More information is available at:
https://bford.info/cachedir/
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Add an extension to the FsCache tests which verifies cache coherency
and atomicity of the FsCache implementation. Additionally, if
available, it utilizes a cache on NFS storage to test network
support.
Unfortunately, the stress-tests keep triggering kernel-oopses in the
NFS client driver, so they are disabled for now. Once the issue is
investigated, we can re-enable them.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Add a helper that copies an entire directory tree, including all
metadata, into the cache. Use it in the ObjectStore to commit
entries.
Unlike FsCache.store(), this does not require entering the context
from the call-site. Instead, all data is passed directly to the
cache and the operation is under the full control of the cache.
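A hypothetical sketch of the shape of such a helper (name and
signature assumed; the real implementation is more careful about
metadata):

    import os
    import shutil

    def store_tree(cache, name: str, source: str) -> None:
        # enter the store-context on behalf of the caller and copy
        # the tree in, preserving metadata (copy2) and symlinks
        with cache.store(name) as entry:
            shutil.copytree(source, os.path.join(entry, "tree"),
                            symlinks=True, copy_function=shutil.copy2)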
The ObjectStore is adjusted to make use of this. This requires
making the root-path (rather than the tree-path) accessible for
individual objects, hence a `path`-@property is added alongside the
`tree`-@property. Note that `__fspath__` still refers to the tree-path,
since this is the only path really required for outside access other
than from the object-manager itself.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Add a new `source_epoch` attribute: if set, all mtimes newer than
or equal to the creation date will be clamped to the specified
`source_epoch` time when the object is finalized.
Add a new utility function to clamp all mtimes of a given path to a
certain timestamp. Clamp here means that any timestamp later than
the specified upper bound will be set to the upper bound.
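A minimal sketch of such a clamp helper (name and exact semantics
assumed):

    import os

    def clamp_mtime(path: str, upper: int) -> None:
        entries = [path]
        for dirpath, dirnames, filenames in os.walk(path):
            entries += [os.path.join(dirpath, n)
                        for n in dirnames + filenames]
        for entry in entries:
            st = os.stat(entry, follow_symlinks=False)
            if st.st_mtime > upper:
                # keep atime, clamp mtime, do not follow symlinks
                os.utime(entry, (st.st_atime, upper),
                         follow_symlinks=False)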
Add a new field to the cache-information called `version`, which is a
simple integer that is incremented on any backward-incompatible change.
The cache-implementation is modified to avoid any access to the
cache except for `<cache>/staging/`. This means changes to the
staging area must be backwards compatible at all costs. Furthermore,
it means we can always run osbuild successfully, even on possibly
incompatible caches, because we can always just ignore the cache
and fully rely on the staging area being accessible.
The `load()` method will always return cache-misses. The `store()`
method simply discards the entry instead of storing it. Note that
`store()` needs to provide a context to the caller, hence this
implementation simply creates another staging-context to provide to
the caller and then discards it. This is non-optimal, but keeps the
API simple and avoids raising an exception to the caller (this can
be changed if it turns out to be problematic or unwanted).
Lastly, the `cache.info` field behaves as usual, since this is also
the field used to read the cache-version. However, this file is
never written, to improve resiliency and to allow blacklisting
buggy versions from the past.
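A condensed sketch of that degraded mode (class shape and names are
assumptions, not the real implementation):

    import contextlib

    class FsCacheSketch:
        VERSION = 1  # bumped on any backward-incompatible change

        class MissError(Exception):
            pass

        def __init__(self, info_version: int):
            self.active = info_version == self.VERSION

        @contextlib.contextmanager
        def load(self, name: str):
            if not self.active:
                raise self.MissError()  # incompatible: always a miss
            yield "cache/objects/" + name

        @contextlib.contextmanager
        def store(self, name: str):
            if not self.active:
                # hand out a throwaway staging entry, then discard it
                yield "cache/staging/anonymous"
                return
            yield "cache/staging/" + name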
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Port the existing object store tests from `unittest` to `pytest`.
Allow all tests that can run without root privileges to do so. No
functional changes to the tests themselves.
Integrate the recently added file system cache `FsCache` into our
object store `ObjectStore`. NB: this changes its semantics:
previously, a call to `ObjectStore.commit` resulted in the object
being in the cache (I/O errors aside). But `FsCache.store`, which
is now the backing store for objects, will only commit objects if
there is enough space left. Thus we cannot rely on objects being
present for reading after a call to `FsCache.store`. To cope with
this, we now always copy the object into the cache, even in cases
where we previously moved it: when commit is called with an
`object_id` matching `Object.id`, which happens when `commit` is
called for the last stage in the pipeline. We could keep this
optimization, but then we would have to special-case it and not
call `commit` in these cases, but only after we have exported all
objects; in other words, after we are sure we will never read from
any committed object again. The extra complexity does not seem
worth the little gain from the optimization.
Convert all the tests to the new semantics and remove the many that
make no sense under this new paradigm.
Add a new command line option, `--cache-max-size`, which, if
specified, sets the maximum size of the cache.
The code is based on `common.DataSizeToUint64` in Composer, with a
modification to allow `unlimited` so that the result is compatible
with `fscache.MaximumSizeType`.
[1] f4aed3e6e2/internal/common/helpers.go (L46)
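A hypothetical Python rendering of that parsing logic (the original
helper is Go code in Composer; the names and the representation of
`unlimited` are invented here):

    import re

    UNITS = {"kB": 1000, "kiB": 1024,
             "MB": 1000**2, "MiB": 1024**2,
             "GB": 1000**3, "GiB": 1024**3,
             "TB": 1000**4, "TiB": 1024**4}

    def parse_size(size: str):
        if size == "unlimited":
            return float("inf")  # stands in for an unlimited maximum
        m = re.fullmatch(r"(\d+)\s*([kMGT]i?B)?", size.strip())
        if not m:
            raise ValueError(f"invalid size: {size!r}")
        return int(m.group(1)) * (UNITS[m.group(2)] if m.group(2) else 1)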
In the `store_server` test, pass the store to `enter_context`,
instead of the `stack`; the latter is an interesting form of
recursion, and totally not what we want.
Instead of transmitting stage metadata over a socket and then
writing it via `Object.meta.write`, use the latter directly and
bind-mount the corresponding file into the stage so it can be
written to directly from the stage. Change `api.metadata` to do so,
which means this change is transparent to the stages.
Integrate the new `Metadata` object as `meta` property on `Object`.
Use it to actually store metadata after a successful stage run.
A new class `PathAdapter` is introduced, which is in turn used to
expose the base path of `Object` as `os.PathLike` so it can be
passed as a path to `Metadata`. The advantage is that any changes
to the base path in `Object` will automatically be picked up by
`Metadata`; the prominent, and currently only, case where this is
happening in `Object` is `store_tree`.
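A minimal sketch of the PathAdapter idea (the exact API is
assumed): defer to the wrapped object's current path on every
access, so later changes to the base path are picked up
automatically:

    class PathAdapter:
        def __init__(self, obj, attr: str):
            self._obj, self._attr = obj, attr

        def __fspath__(self) -> str:
            # resolved lazily, so it tracks e.g. `store_tree`
            return getattr(self._obj, self._attr)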
Implement a new class, nested inside `Object`, to read and write
metadata. It is indexed by a key, and individual pieces of metadata
are stored in separate files. Empty files are not created.
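A sketch of what such a class might look like (shapes and the file
format assumed):

    import json
    import os

    class Metadata:
        def __init__(self, path):
            self._path = path  # e.g. a PathAdapter

        def write(self, key: str, data) -> None:
            if not data:
                return  # empty files are not created
            os.makedirs(os.fspath(self._path), exist_ok=True)
            with open(os.path.join(self._path, key), "w") as f:
                json.dump(data, f)

        def read(self, key: str):
            try:
                with open(os.path.join(self._path, key)) as f:
                    return json.load(f)
            except FileNotFoundError:
                return None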
Create a proper `ObjectStore.Object` to use as the tree for the
`run` method of `Stage`, since that is also what is normally passed
to it from `Pipeline.run`. This prepares for a future where
`Object` is not just used as `os.PathLike`.
Instead of storing the (tree) data directly at the root of the
object-specific directory, move it into a `data/tree` subfolder.
This prepares for two things:
1) the `tree` folder will allow us to add another folder next to
it to store metadata.
2) storing both `tree` and the future metadata folder in a
common subfolder prepares for the future integration
with the new caching layer (`FsCache`).
This commit introduces a new utility module called `fscache`. It
implements a cache module that stores data on the file system. It
supports parallel access and protects data with file-system locks. It
provides three basic functions:
FsCache.load("<name>"):
    Loads the cache entry with the specified name, acquires a
    read-lock and yields control to the caller to use the entry.
    Once control returns, the entry is unlocked again.
    If the entry cannot be found, a cache miss is signalled via
    FsCache.MissError.
FsCache.store("<name>"):
    Creates a new anonymous cache entry and yields control to the
    caller to fill it in. Once control returns, the entry is
    renamed to the specified name, thus committing it to the object
    store.
FsCache.stage():
    Creates a new anonymous staging entry and yields control to the
    caller. Once control returns, the entry is completely
    discarded.
    This is primarily used to create a working directory for
    osbuild pipeline operations. The entries are volatile and
    automatic cleanup is provided.
    To commit a staging entry, one would eventually use
    FsCache.store() and rename the entire data directory into the
    non-volatile entry. If the staging area and store are on
    different file-systems, or if the data is to be retained for
    further operations, then the data directory needs to be copied.
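A usage sketch tying the three primitives together (module path,
construction, and the shape of the yielded entries are
assumptions):

    import shutil

    from osbuild.util.fscache import FsCache  # path assumed

    cache = FsCache("/var/cache/osbuild")

    with cache.stage() as workdir:
        # ... run pipeline operations in the volatile working dir ...
        with cache.store("object-id") as entry:
            # staging entries are discarded, so commit by copying
            shutil.copytree(workdir, entry + "/tree", symlinks=True)

    try:
        with cache.load("object-id") as path:
            print("cache hit:", path)  # read-locked inside the block
    except FsCache.MissError:
        print("cache miss")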
Additionally, the cache maintains a size limit and discards any
entries if the limit is exceeded. Future extensions will implement
cache pruning when a configured watermark is reached, based on
least-recently-used logic.
Many more cache extensions are possible. This module introduces a
first draft of the most basic cache and hopefully lays the
groundwork for a new cache infrastructure.
Lastly, note that this only introduces the utility helper. Further work
is required to hook it up with osbuild/objectstore.py.
Add a new utility that wraps ctypes.CDLL() for the self-embedded
libc.so. Initially, it only exposes renameat2(2), but more can be added
when needed in the future.
The Libc class is very similar to the existing LibCap class, with
similar instantiation logic and singleton access.
In the future, the Libc class will allow access to other system calls
and libc.so functionality, when needed.
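A sketch of the idea (not the actual implementation), assuming a
glibc new enough (2.28+) to provide the renameat2 wrapper:

    import ctypes
    import os

    AT_FDCWD = -100            # resolve paths relative to the CWD
    RENAME_NOREPLACE = 1 << 0  # fail with EEXIST instead of replacing

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    def renameat2(old: str, new: str, flags: int = 0) -> None:
        r = libc.renameat2(AT_FDCWD, old.encode(), AT_FDCWD,
                           new.encode(), ctypes.c_uint(flags))
        if r != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))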
Add a new helper for the util.linux module which exposes the Linux
boot-id.
For security reasons, the boot-id is never exposed directly, but
instead only through an application-id combined with the boot-id
via HMAC-SHA256.
Note that a raw kernel boot-id is always considered confidential,
since we never want an outside entity to deduce any information
when they see a boot-id used in protocol A and one in protocol B.
It should not be possible to tell whether both are from the same
user and boot or not. Hence, each should use its own boot-id
namespace.
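A minimal sketch of such a derivation (helper name invented):

    import hashlib
    import hmac

    def app_boot_id(appid: str) -> str:
        # read the raw, confidential kernel boot-id ...
        with open("/proc/sys/kernel/random/boot_id") as f:
            boot_id = f.read().strip()
        # ... and only hand out an HMAC keyed per application-id
        return hmac.new(boot_id.encode(), appid.encode(),
                        hashlib.sha256).hexdigest()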
This adds a new accessor-function for the file-locking operations
through `fcntl(2)`. In particular, it adds the new function
`fcntl_flock()`, which wraps the `F_OFD_SETLK` command on `fcntl(2)`.
There were a few design considerations:
* The name `fcntl_flock` comes from the `struct flock` structure that
is the argument type of all file-locking syscalls. Furthermore, it
mirrors what the `fcntl` module already provides as a wrapper for
the classic file-locking syscall.
* The wrapper only exposes very limited access to the file-locking
commands. There already are `fcntl.fcntl()` and `fcntl.lockf()`
in the standard library, which expose the classic file-locks.
However, those are implemented in C, which gives much more freedom
and access to architecture-dependent types and functions.
We do not have that freedom (see the in-code comments for the
things to consider when exposing more fcntl-locking features).
Hence, this only exposes a very limited set of functionality,
exactly the parts we need in the objectstore rework.
* We cannot use `fcntl.lockf()` from the standard library, because
we really want the `OFD` version. OFD stands for
`open-file-description`. These locks were introduced into the
Linux kernel in 2014 and mirror what the non-OFD locks do, but
bind the locks to the file-description rather than to a process.
Therefore, closing a file-description will release all held locks
on that file-description.
This is so much more convenient to work with, and much less
error-prone, than the old-style locks. Hence, we really want
these, even if it means that we have to introduce this new helper.
* There is an open bug to add this to the Python standard library:
https://bugs.python.org/issue22367
It has been unresolved since 2014.
The implementation of the `fcntl_flock()` helper is straightforward
and should be easy to understand. However, the reasoning behind the
design decisions is not. Hence, the code contains a rather
elaborate comment explaining why it is done this way.
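A hedged sketch of what such a wrapper can look like; the
F_OFD_SETLK value (37) and the struct-flock layout below are Linux
x86-64 assumptions -- exactly the kind of architecture dependence
discussed above:

    import fcntl
    import struct

    F_OFD_SETLK = 37
    F_OFD_SETLKW = 38

    def fcntl_flock(fd: int, lock_type: int, wait: bool = False):
        # struct flock: l_type, l_whence, l_start, l_len, l_pid;
        # l_pid must be 0 for OFD locks, start/len 0 locks the file
        flock = struct.pack("hhqqi", lock_type, 0, 0, 0, 0)
        cmd = F_OFD_SETLKW if wait else F_OFD_SETLK
        fcntl.fcntl(fd, cmd, flock)

Calling `fcntl_flock(f.fileno(), fcntl.F_WRLCK, wait=True)` would
then block until an exclusive lock is acquired, and closing `f`
releases it.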
Lastly, this adds a small, but I think sufficient, unit-test suite
which makes sure the API works as expected. It does not test the
full functionality of the underlying locking features, but that is
not the job of a wrapping layer, I think. More tests can always be
added.
The `Object.{read,write}` methods were introduced to implement
copy-on-write support. Calling `write` would trigger the copy if
the object had a `base`. Additionally, a level of indirection was
introduced via bind mounts, which allowed hiding the actual path of
the object in the store and made sure that `read` really returned a
read-only path.
Support for copy-on-write was recently removed[1], and with it the
need for the `read` and `write` methods. We lose the benefits of
the indirection, but they are not really needed: the path to the
object is not really hidden, since one can always use the
`resolve_ref` method to obtain the actual store object path. The
read-only property of build trees is ensured via read-only bind
mounts in the build root.
Instead of using `read` and `write`, `Object` now gains a new
`tree` property that is the path to the object's tree. It also
implements `__fspath__`, so it behaves like an `os.PathLike` object
and can thus be used transparently in many places, e.g.
`os.path.join` or `pathlib.Path`.
[1] 5346025031
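A minimal sketch of the `tree`/`__fspath__` mechanics (names
assumed):

    import os

    class Object:
        def __init__(self, workdir: str):
            self._workdir = workdir

        @property
        def tree(self) -> str:
            return os.path.join(self._workdir, "tree")

        def __fspath__(self) -> str:
            return self.tree

    # obj = Object("/var/cache/osbuild/objects/1234")
    # os.path.join(obj, "etc") then works via os.fspath()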
If the object's id does not match the one supplied for the commit,
we create a clone. Otherwise, we store the tree.
The code path is arranged in a way that we always go through
`Object.store_tree`, so we always call `Object.finalize` as a
preparation for the future, where we might actually do something
meaningful in the finalizer, like resetting the *times or counting
the tree size.
Remove copy-on-write support from `objectstore.Object`. The main
reason for introducing copy-on-write was to save an additional copy
in the non-DAG pipeline model[1]. With the introduction of the DAG
model and the explicit `--export` option, we can achieve the same
result without the complexity of copy-on-write semantics.
[1] See commit 39213b7, part of 3b7c87d5..42a365d1 changeset.
There is little use in sharing the store between tests; quite the
opposite: all tests expect a clean store, and some currently set
that up themselves. Create a fresh store for each test.
Add a new module with utility functions to inspect PE32+ files,
mainly listing the sections and their addresses and sizes.
Include a simple test to check that we can successfully parse the
EFI stub contained in systemd (systemd-udev package).
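As a rough illustration of what such an inspection involves, a
hand-rolled sketch of listing PE32+ sections (function name
invented; offsets follow the PE specification):

    import struct

    def list_sections(path: str):
        with open(path, "rb") as f:
            data = f.read()
        pe_off, = struct.unpack_from("<I", data, 0x3C)   # e_lfanew
        assert data[pe_off:pe_off + 4] == b"PE\0\0"
        nsections, = struct.unpack_from("<H", data, pe_off + 6)
        opt_size, = struct.unpack_from("<H", data, pe_off + 20)
        secs = pe_off + 24 + opt_size                    # section table
        for i in range(nsections):
            off = secs + i * 40
            name = data[off:off + 8].rstrip(b"\0").decode()
            vsize, vaddr = struct.unpack_from("<II", data, off + 8)
            yield name, vaddr, vsize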
Use the new `Index.detect_runner` method that will give us the best
available runner for a requested one. To do so, a new
`pipeline.Runner` class is introduced that stores the
`meta.RunnerInfo` class for the specific runner and the original
name that was requested.
In the manifest loading and describing functions of the formats, use
`Index.detect_runner` to get the `RunnerInfo` for a requested runner
and then wrap it in a `pipeline.Runner` object, which is then passed
to the `Manifest.add_pipeline` method.
See also commit "meta: ability to auto-detect runner".
Adjust all tests.
Instead of using a non-existing runner, `org.osbuild.test`, use an
existing one, `org.osbuild.linux`. This prepares the switch to
using runner auto-detection, which will rely on existing runners.
For some reason I forgot to fix those in the previous runs. Fix a
linter and a pep8 warning.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Newer warning from pylint, also consistent with how we do things
elsewhere. Note that this only applies to one file in the tests,
but disabling it would be very weird for such a small fix.
The idea of this test case was to check that two identical trees
are only stored once, via their treesum, in the object store; but
this functionality was removed in commit e97f6ef34 and, instead of
treesums, random UUIDs are now used. As a result, there is no
de-duplication anymore -- the subject of the test. So remove the
test.
Add a new class `SubIdsDB` as a database of subordinate IDs, like
the ones in `/etc/subuid` and `/etc/subgid`. Methods to read and
write data from these two files are provided.
Add corresponding unit tests.
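A hedged sketch of the underlying file format handling (the real
`SubIdsDB` API may differ): each line in `/etc/subuid` and
`/etc/subgid` reads "name:start:count".

    def read_subids(path: str) -> dict:
        db = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                name, start, count = line.split(":")
                db[name] = (int(start), int(count))
        return db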
Add a new member variable `caps` that, if not `None`, indicates
the capabilities to retain; all other capabilities not specified
will be dropped via `bubblewrap` (`--cap-drop`).
Add corresponding tests.
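A hypothetical sketch of the mapping from a retain-set to
`bubblewrap` arguments (names invented):

    def cap_drop_args(caps, all_caps):
        if caps is None:
            return []  # None: leave capabilities untouched
        argv = []
        for cap in sorted(set(all_caps) - set(caps)):
            argv += ["--cap-drop", cap]
        return argv

    # cap_drop_args({"CAP_CHOWN"}, {"CAP_CHOWN", "CAP_SYS_ADMIN"})
    #   -> ["--cap-drop", "CAP_SYS_ADMIN"]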
This extends the possible ways of passing references to inputs.
The currently possible ways are:
1) "plain references", an array of strings:
   ["ref1", "ref2", ...]
2) "object references", a mapping of keys to objects:
   {"ref1": { <options> }, "ref2": { <options> }, ...}
This patch adds a new way:
3) "array of object references":
   [{"id": "ref1", "options": { ... }}, {"id": ... }, ]
While osbuild promises to preserve the order for "object
references", not all JSON serialization libraries preserve order,
since the JSON specification leaves this up to the implementation.
The new "array of object references" thus allows specifying the
references together with reference-specific options, and in a
specific order.
Additionally, this paves the way for specifying the same input
twice, e.g. in the case of the `org.osbuild.files` input, where a
pipeline could then be specified twice with different files. This
needs core rework though, since internally we use dictionaries
right now.
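To illustrate the ordering guarantee, a minimal sketch (ids and
options invented) of the new array form after JSON parsing:

    import json

    # arrays are ordered by the JSON specification; object key order
    # is left to the implementation
    refs = json.loads("""
    [
      {"id": "ref1", "options": {"path": "/a.tar"}},
      {"id": "ref2", "options": {"path": "/b.tar"}}
    ]
    """)

    for ref in refs:
        print(ref["id"], ref["options"]["path"])  # order preserved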
Specifically, this test checks that the order given in the manifest
is preserved when loaded, i.e. the internal dict has its keys
ordered in the same way, independent of how they were specified --
list or object.