The `directory` argument was only added in Python 3.7, which breaks
the unit test on Python 3.6.
Reimplement the intended behavior by overriding the `translate_path()`
method, which takes the `directory` value into account on newer Python
versions.
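This presumably concerns `http.server.SimpleHTTPRequestHandler`; a
minimal sketch of such an override, assuming the Python 3.6 behaviour
where `translate_path()` resolves against the current working directory
(the actual handler in the test may differ):
    import http.server
    import os


    class Handler(http.server.SimpleHTTPRequestHandler):
        # Directory to serve from; set by the test (illustrative value).
        directory = "/tmp/served"

        def translate_path(self, path):
            # On Python 3.6 the base class resolves the URL path against
            # the current working directory; rebase the result onto
            # `directory` instead.
            translated = super().translate_path(path)
            relative = os.path.relpath(translated, os.getcwd())
            return os.path.join(self.directory, relative)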
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Do not specify a default value for the 'expected_size' argument in the
assertImageFile() function declaration. Previously, it was set to
`None`, which was never taken into account. Moreover, all callers of the
function always provide an explicit value, so the default was never
really used.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a new optional pytest CLI argument `--unsupported-fs` that allows
specifying file-systems which should be treated as unsupported on the
platform where the tests are run. Any test cases dependent on support
for such a file-system will be skipped.
This makes it possible to run the unit tests while selectively skipping
test cases for unsupported file-systems.
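A rough sketch of how such an option can be wired up in conftest.py
(option and fixture names here are illustrative, not necessarily the
actual implementation):
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--unsupported-fs", action="append", default=[],
            help="treat the given file-system as unsupported on this platform")


    @pytest.fixture
    def unsupported_filesystems(request):
        return request.config.getoption("--unsupported-fs")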
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Port assembler tests from unittest to pytest. In addition, use
parametrized tests when testing various filesystems and various
combinations.
This is important to be able to selectively skip a test if a specific
filesystem is not supported by the kernel (e.g. btrfs is not supported
on RHEL). Skipping a unittest subtest is not possible, which is the
motivation to move away from it and use only pytest.
Test output is now also much nicer for parametrized test cases.
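For illustration, a parametrized test case could then skip unsupported
file-systems roughly like this (test and fixture names are assumptions):
    import pytest


    @pytest.mark.parametrize("fs_type", ["ext4", "xfs", "btrfs"])
    def test_assembler_fs(fs_type, unsupported_filesystems):
        if fs_type in unsupported_filesystems:
            pytest.skip(f"{fs_type} is treated as unsupported on this platform")
        ...  # build and verify the file-system image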
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Previously, the unit test depended on osbuild modules being installed
on the system. As a result, the test did not work in CI, where we do
not install osbuild when running unit tests. In addition, the stage
executed by the unit test would use a different version of the osbuild
internals than the version being tested, which could result in issues
or in not testing the intended code.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The directory does not exist when the unit test is run in CI. Handle
this case by ensuring that parent directories are created as needed.
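Presumably something along these lines (the actual path is specific to
the test):
    import os

    target_dir = "/path/used/by/the/test"  # illustrative only
    os.makedirs(target_dir, exist_ok=True)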
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The right way to enable services is to use a preset file instead of
writing directly into /etc. This adds a new stage called
`org.osbuild.systemd.preset` to do so.
Added another skopeo stage to skopeo/a.mpp.json with a skopeo source for
a container hosted on the osbuild-composer gitlab registry. The name
points to a manifest list, which refers to two containers (amd64 and
arm64) that contain a single text file (README.md). The `index` field
is enabled to include the manifest-list as an extra input to the stage.
The diff is updated with the new expected file list.
The containers were created with buildah:
amd=$(buildah from --arch=amd64 scratch)
arm=$(buildah from --arch=arm64 scratch)
buildah config --created-by "Achilleas Koutsou" "${amd}"
buildah config --created-by "Achilleas Koutsou" "${arm}"
buildah copy "${amd}" README.md
buildah copy "${arm}" README.md
amdid=$(buildah commit --format=docker --rm "${amd}")
armid=$(buildah commit --format=docker --rm "${arm}")
name="registry.gitlab.com/redhat/services/products/image-builder/ci/osbuild-composer/manifest-list-test"
buildah manifest create "${name}" "${amdid}" "${armid}"
podman manifest push --all "${name}" dir:container
Extend the copy stage to optionally allow removing the destination
before copying. This makes it possible not to follow symlinks when the
destination is a symlink to a file. By default, `cp` would change the
file pointed to by the destination if it is a symlink.
Extend the stage doc text to cover the behavior with regard to
destination being a symlink.
Add unit tests for the copy stage to also test the newly added option.
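A sketch of the behaviour in terms of GNU `cp` (the real stage
constructs its invocation differently; the option name here is
illustrative):
    import subprocess


    def copy(source, destination, remove_destination=False):
        cmd = ["cp", "-a", "--reflink=auto"]
        if remove_destination:
            # Remove an existing destination first instead of following a
            # symlink and overwriting the file it points to.
            cmd.append("--remove-destination")
        subprocess.run(cmd + [source, destination], check=True)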
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a top level property "files" to the schema and move the rest of the
existing schema one level down. This way we can support adding global
properties in the future if we ever need to expand the scope of the
stage.
Pattern for valid environment variable names as defined in
The Open Group Base Specifications Issue 7, 2018 edition
IEEE Std 1003.1-2017 (Revision of IEEE Std 1003.1-2008)
https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html
Updated tests to match UPPERCASE ONLY var names.
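A sketch of such a pattern (the exact regular expression used in the
stage schema may differ):
    import re

    # Uppercase letters, digits and underscore only, not starting with a digit.
    ENV_NAME = re.compile(r"^[A-Z_][A-Z0-9_]*$")

    assert ENV_NAME.match("LANG")
    assert not ENV_NAME.match("1_INVALID")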
Add a new `org.osbuild.chown` stage for setting user and group
ownership of files. The stage runs `chown` from the image using
`chroot`, enabling it to use users and groups that exist only in the
(image) tree.
Add unit tests covering the stage in various scenarios.
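Conceptually the stage does something like the following (a simplified
sketch; the real stage handles options, recursion and error reporting
differently):
    import subprocess


    def chown_in_tree(tree, path, owner):
        # Run chown from inside the image tree so that user/group names
        # that only exist in the image can be resolved.
        subprocess.run(["chroot", tree, "chown", owner, path], check=True)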
Co-authored-by: Janine Olear <pninak@web.de>
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Previously, we could only ask OSBuild to mount a device as read-only,
but devices can take more mount options than that. Supporting more
options is necessary for the new version of image-info, which uses
OSBuild's internals to mount the image it wants to work on. Otherwise,
for instance, some umasks aren't applied properly and we can get
differences in rpm-verify results, thus corrupting the DB.
Mount now accepts:
* readonly
* uid
* gid
* umask
* shortname
This reverts commit a988aacf99.
After some discussion, the original behavior was intentional. With the
added support for gracefully handling existing directories, the stage
originally would not set the mode of an existing directory, while now
it would. Another issue is that `mkdir` applies the provided mode
filtered by the umask (mode & ~umask), which was intentional. Setting
the same mode without taking the umask value into account is not
desired.
Add a new optional stage option to not fail if the specified directory
already exists. This will make it easier to support creation of custom
repositories via customizations in osbuild-composer. The reason is that
if a specified directory exists in an image, because it was created by
an RPM, then creating it would fail. However, the user may have
specified a different mode for the directory than the one it already
has. Since there is no way to know for sure whether the directory
already exists in the image without building the image itself, it is
desirable to handle this case gracefully as valid in specific use
cases.
The default behavior stays the same - specifying an existing directory
path will lead to an error.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Instead of using `Path.stat` use `os.stat` since the former only
gained the `follow_symlinks` argument in 3.10 but we still need
to support Python 3.6 for RHEL 7 and 8.
Additionally, reduce the precision by converting timestamps to an
integer to avoid false negatives due to floating point arithmetic.
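In essence the comparison becomes something like this (illustrative
names):
    import os


    def mtimes_equal(path_a, path_b):
        st_a = os.stat(path_a, follow_symlinks=False)
        st_b = os.stat(path_b, follow_symlinks=False)
        # Compare integer seconds to avoid floating point false negatives.
        return int(st_a.st_mtime) == int(st_b.st_mtime)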
The cachedir-tag specification defines how to mark directories as
cache-directories. This allows tools like `tar` to ignore those
directories if desired (e.g., see `tar --ignore-caches`). This is very
useful to avoid huge cache-directories in backups and remote
synchronizations.
The spec simply defines a file called `CACHEDIR.TAG` whose first 43
bytes are: "Signature: 8a477f597d28d172789f06886806bc55" (which
happens to be the MD5 checksum of ".IsCacheDirectory"). Further
content is to be ignored. Any such file marks the directory in question
as a cache-directory.
The cachedir-tag has been successfully deployed in tools like `cargo`
and `VLC`, and is currently being discussed for implementation in
Firefox. More information is available here: https://bford.info/cachedir/
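Creating such a tag is trivial; a minimal sketch (the helper name is
illustrative):
    import os

    CACHEDIR_TAG = b"Signature: 8a477f597d28d172789f06886806bc55\n"


    def tag_cache_directory(path):
        # Mark `path` as a cache-directory per the cachedir-tag spec.
        with open(os.path.join(path, "CACHEDIR.TAG"), "xb") as f:
            f.write(CACHEDIR_TAG)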
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Add an extension to the FsCache tests which verifies cache coherency and
atomicity of the FsCache implementation. Additionally, if available, it
utilizes a cache on NFS storage to test network-support.
Unfortunately, the stress-tests keep triggering kernel-oopses in the NFS
client driver, so they are disabled for now. However, once investigated,
we can re-enable them.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Add a helper that copies an entire directory tree including all metadata
into the cache. Use it in the ObjectStore to commit entries.
Unlike FsCache.store() this does not require entering the context from
the call-site. Instead, all data is directly passed to the cache and the
operation is under full control of the cache.
The ObjectStore is adjusted to make use of this. This requires making
the root-path (rather than the tree-path) accessible for individual
objects, hence a `path` property is added alongside the `tree`
property. Note that `__fspath__` still refers to the tree-path, since
this is the only path really required for outside access other than
from the object-manager itself.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Add a new `source_epoch` attribute that, if set, will lead to all
mtimes that are newer than or equal to the creation date being clamped
to the specified `source_epoch` time when the object is finalized.
New utility function to clamp all mtimes of a given path to a
certain timestamp. Clamp here means that any timestamp later
than the specified upper bound will be set to the upper bound.
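A rough sketch of the clamping (the actual utility differs in name and
in how it handles symlinks and directories):
    import os


    def clamp_mtime(root, upper):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                st = os.stat(path, follow_symlinks=False)
                if st.st_mtime > upper:
                    os.utime(path, (st.st_atime, upper), follow_symlinks=False)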
Add a new field to the cache-information called `version`, which is a
simple integer that is incremented on any backward-incompatible change.
The cache-implementation is modified to avoid any access to the cache
except for `<cache>/staging/`. This means, changes to the staging area
must be backwards compatible at all cost. Furthermore, it means we can
always successfully run osbuild even on possibly incompatible caches,
because we can always just ignore the cache and fully rely on the
staging area being accessible.
The `load()` method will always return cache-misses. The `store()`
method simply discards the entry instead of storing it. Note that
`store()` needs to provide a context to the caller, hence this
implementation simply creates another staging-context to provide to the
caller and then discards it. This is non-optimal, but keeps the API simple
and avoids raising an exception to the caller (but this can be changed
if it turns out to be problematic or unwanted).
Lastly, the `cache.info` field behaves as usual, since this is also the
field used to read the cache-version. However, this file is never
written, to improve resiliency and to allow blacklisting buggy versions
from the past.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Port the existing object store tests from `unittest` to `pytest`.
Allow all tests that can run without root privileges to do so. No
functional change of the test itself.
Integrate the recently added file system cache `FsCache` into our
object store `ObjectStore`. NB: This changes the semantics of it:
previously a call to `ObjectStore.commit` resulted in the object
being in the cache (i/o errors aside). But `FsCache.store`, which
is now the backing store for objects, will only commit objects if
there is enough space left. Thus we cannot rely on objects being
present for reading after a call to `FsCache.store`. To cope with
this we now always copy the object into the cache, even for cases
where we previously moved it: for the case where commit is called
with `object_id` matching `Object.id`, which is the case when
`commit` is called for the last stage in the pipeline. We could keep
this optimization but then we would have to special case it and
not call `commit` for these cases but only after we exported all
objects; or in other words, after we are sure we will never read
from any committed object again. The extra complexity seems not
worth it for the little gain of the optimization.
Convert all the tests to the new semantics and also remove many of
them that make no sense under this new paradigm.
Add a new command line option `--cache-max-size` which will set
the maximum size of the cache, if specified.
Code is based on `common.DataSizeToUint64` in Composer, with a
modification to allow `unlimited` so that the result is compatible
with `fscache.MaximumSizeType`.
[1] f4aed3e6e2/internal/common/helpers.go (L46)
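Roughly the kind of parsing involved (an illustrative sketch; the
accepted suffixes and the returned "unlimited" value are assumptions):
    import re

    UNITS = {
        "B": 1,
        "kB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4,
        "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4,
    }


    def parse_size(arg):
        if arg == "unlimited":
            return None  # stand-in for the unlimited maximum-size value
        match = re.fullmatch(r"(\d+)\s*([A-Za-z]+)?", arg.strip())
        if not match:
            raise ValueError(f"invalid size: {arg!r}")
        unit = match.group(2) or "B"
        if unit not in UNITS:
            raise ValueError(f"unknown unit: {unit!r}")
        return int(match.group(1)) * UNITS[unit]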
In the `store_server` test, pass the store to `enter_context`,
instead of the `stack`; the latter is an interesting form of
recursion, and totally not what we want.
Instead of transmitting stage metadata over a socket and then
writing it via `Object.meta.write`, use the latter and bind
mount the corresponding file into the stage so it can directly
be written to from the stage. Change `api.metadata` to do so,
which means that this change is transparent for the stages.
Integrate the new `Metadata` object as `meta` property on `Object`.
Use it to actually store metadata after a successful stage run.
A new class `PathAdapter` is introduced, which is in turn used to
expose the base path of `Object` as `os.PathLike` so it can be
passed as path to `Metadata`. The advantage is that any changes
to the base path in `Object` will automatically be picked up by
`Metadata`; the prominent, and currently only, case where this is
happening in `Object` is `store_tree`.
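The idea is roughly the following (a sketch; the real class may look
different):
    import os


    class PathAdapter(os.PathLike):
        """Lazily resolve a path attribute of another object."""

        def __init__(self, obj, attribute):
            self.obj = obj
            self.attribute = attribute

        def __fspath__(self):
            # Always read the current value, so changes to the underlying
            # object (e.g. after `store_tree`) are picked up automatically.
            return os.fspath(getattr(self.obj, self.attribute))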
Implement a new class, nested inside `Object`, to read and write
metadata. It is indexed by a key, and individual pieces of metadata
are stored in separate files. Empty files are not created.
Create a proper `ObjectStore.Object` to use as tree for the `run`
method of `Stage`, since that is what is also normally passed to
it from `Pipeline.run`. It prepares for a future where `Object`
is not just used as `os.PathLike`.
Instead of storing the (tree) data directly at the root of the
object specific directory, move it into a `data/tree` subfolder.
This prepares for two things:
1) the `tree` folder will allow us to add another folder next to
it to store metadata.
2) storing both, `tree` and the future metadata folder in a
common subfolder, prepares for the future integration
with the new caching layer (`FsCache`).
If a home directory is specified for an existing user that does
not have one, `usermod` does not create one. This case is now
detected and `mkhomedir_helper(8)` is run inside the chroot to
create the home dir. In Fedora this utility is provided by the
`pam` package so this is now installed in the corresponding
tests together with a new user that simulates the aforementioned
scenario.
Enhance the stage description: drop a superfluous line and add
a description for the home-dir scenario.
This commit introduces a new utility module called `fscache`. It
implements a cache module that stores data on the file system. It
supports parallel access and protects data with file-system locks. It
provides three basic functions:
FsCache.load("<name>"):
Loads the cache entry with the specified name, acquires a
read-lock and yields control to the caller to use the entry.
Once control returns, the entry is unlocked again.
If the entry cannot be found, a cache miss is signalled via
FsCache.MissError.
FsCache.store("<name>"):
Creates a new anonymous cache entry and yields control to the
caller to fill in. Once control returns, the entry is renamed
to the specified name, thus committing it to the object store.
FsCache.stage():
Creates a new anonymous staging entry and yields control to the
caller. Once control returns, the entry is completely
discarded.
This is primarily used to create a working directory for osbuild
pipeline operations. The entries are volatile and automatic
cleanup is provided.
To commit a staging entry, you would eventually use
FsCache.store() and rename the entire data directory into the
non-volatile entry. If the staging area and store are on
different file-systems, or if the data is to be retained for
further operations, then the data directory needs to be copied.
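Hypothetical usage of these three entry points (the import path,
constructor arguments and entry names are assumptions, not the actual
API):
    from osbuild.util.fscache import FsCache  # import path assumed

    cache = FsCache("org.osbuild.test", "/var/cache/osbuild")

    with cache.stage() as path:
        ...  # volatile working directory, discarded on exit

    with cache.store("some-entry") as path:
        ...  # fill in the entry; it is committed when the context exits

    try:
        with cache.load("some-entry") as path:
            ...  # read-only use while the read-lock is held
    except FsCache.MissError:
        ...  # cache miss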
Additionally, the cache maintains a size limit and discards any entries
if the limit is exceeded. Future extensions will implement cache pruning
if a configured watermark is reached, based on least-recently-used
logic.
Many more cache extensions are possible. This module introduces a first
draft of the most basic cache and hopefully lays ground for a new cache
infrastructure.
Lastly, note that this only introduces the utility helper. Further work
is required to hook it up with osbuild/objectstore.py.
Add a new utility that wraps ctypes.CDLL() for the self-embedded
libc.so. Initially, it only exposes renameat2(2), but more can be added
when needed in the future.
The Libc class is very similar to the existing LibCap class, with a
similar instantiation logic with singleton access.
In the future, the Libc class will allow access to other system calls
and libc.so functionality, when needed.
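The shape of the wrapper is roughly as follows (a sketch that loads the
system libc and assumes a glibc new enough, 2.28+, to export
`renameat2`; the real class resolves the self-embedded libc.so and uses
singleton access as described above):
    import ctypes
    import os


    class Libc:
        RENAME_NOREPLACE = 1 << 0
        RENAME_EXCHANGE = 1 << 1

        def __init__(self, path="libc.so.6"):
            self._lib = ctypes.CDLL(path, use_errno=True)

        def renameat2(self, oldpath, newpath, flags=0):
            # Thin wrapper around renameat2(2), relative to the cwd.
            r = self._lib.renameat2(
                os.AT_FDCWD, oldpath.encode(),
                os.AT_FDCWD, newpath.encode(), flags)
            if r != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))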
A new helper for the util.linux module which exposes the linux boot-id.
For security reasons, the boot-id is never exposed directly; instead it
is only exposed combined with an application-id via HMAC-SHA256.
Note that a raw kernel boot-id is always considered confidential, since
we never want an outside entity to deduce any information when they see
a boot-id used in protocol A and one in protocol B. It should not be
possible to tell whether both are from the same user and boot or not.
Hence, both should use their own boot-id namespace.
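For illustration, the derivation might look something like this
(exactly which value serves as the HMAC key and how the result is
truncated are assumptions):
    import hashlib
    import hmac
    import uuid


    def app_specific_boot_id(app_id: uuid.UUID) -> uuid.UUID:
        with open("/proc/sys/kernel/random/boot_id", encoding="utf8") as f:
            boot_id = uuid.UUID(f.read().strip())
        # Never hand out the raw boot-id; derive a per-application value.
        digest = hmac.new(boot_id.bytes, app_id.bytes, hashlib.sha256).digest()
        return uuid.UUID(bytes=digest[:16])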
This adds a new accessor-function for the file-locking operations
through `fcntl(2)`. In particular, it adds the new function
`fcntl_flock()`, which wraps the `F_OFD_SETLK` command on `fcntl(2)`.
There were a few design considerations:
* The name `fcntl_flock` comes from the `struct flock` structure that
is the argument type of all file-locking syscalls. Furthermore, it
mirrors what the `fcntl` module already provides as a wrapper for
the classic file-locking syscall.
* The wrapper only exposes very limited access to the file-locking
commands. There already are `fcntl.fcntl()` and `fcntl.lockf()`
in the standard library, which expose the classic file-locks.
However, those are implemented in C, which gives much more freedom
and access to architecture dependent types and functions.
We do not have that freedom (see the in-code comments for the
things to consider when exposing more fcntl-locking features).
Hence, this only exposes a very limited set of functionality,
exactly the parts we need in the objectstore rework.
* We cannot use `fcntl.lockf()` from the standard library,
because we really want the `OFD` version. OFD stands for
`open-file-description`. These locks were introduced in 2014 to the
linux kernel and mirror what the non-OFD locks do, but bind the
locks to the file-description, rather than to a process. Therefore,
closing a file-description will release all held locks on that
file-description.
This is so much more convenient to work with, and much less
error-prone than the old-style locks. Hence, we really want these,
even if it means that we have to introduce this new helper.
* There is an open bug to add this to the python standard library:
https://bugs.python.org/issue22367
This is unresolved since 2014.
The implementation of the `fcntl_flock()` helper is straightforward and
should be easy to understand. However, the reasoning behind the design
decisions is not. Hence, the code contains a rather elaborate comment
explaining why it is done this way.
Lastly, this adds a small, but I think sufficient unit-test suite which
makes sure the API works as expected. It does not test for full
functionality of the underlying locking features, but that is not the
job of a wrapping layer, I think. But more tests can always be added.
This modification will allow a user to ask to mount the system as
read-only, for instance. This would be super useful for image-info,
which is progressively using more of OSBuild's internals to mount
partitions.
The `Object.{read,write}` methods were introduced to implement
copy on write support. Calling `write` would trigger the copy,
if the object had a `base`. Additionally, a level of indirection
was introduced via bind mounts, which allowed to hide the actual
path of the object in the store and make sure that `read` really
returned a read-only path.
Support for copy-on-write was recently removed[1], and with it the
need for the `read` and `write` methods. We lose the benefits
of the indirection, but they are not really needed: the path to
the object is not really hidden since one can always use the
`resolve_ref` method to obtain the actual store object path.
The read only property of build trees is ensured via read only
bind mounts in the build root.
Instead of using `read` and `write`, `Object` has gained a new
`tree` property that is the path to the object's tree. It also
implements `__fspath__`, so it behaves like an `os.PathLike` object
and can thus transparently be used in many places, e.g.
`os.path.join` or `pathlib.Path`.
[1] 5346025031
Include the new journald config stage to configure journald to
persist the journal. This is needed since we don't create the
`/var/log/journal` directory that journald uses to switch the
default to persistent storage. But instead of creating that
directory, we explicitly configure journald via the new stage.
This is also what Fedora CoreOS does.