A new host service that provides device functionality to stages.
Since stages run in a container and are restricted from creating
device nodes, all device handling is done in the main osbuild
process. Currently this is done with the help of APIs and RPC,
e.g. `LoopServer`. Device host services on the other hand allow
declaring devices in the manifest itself and then osbuild will
prepare all devices before running the stage. One desired effect
is that it makes device handling transparent to the stages, e.g.
they don't have to know about loopback devices, LVM or LUKS.
Another result is that specific device handling is now modular, like
Inputs and Sources, and is thus moved out of osbuild itself.
Create an `InputService` class with an abstract method called `map`,
meant to be implemented by all inputs. An `unmap` method may
optionally be overridden by inputs to clean up resources.
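A minimal sketch of what such a base class could look like, assuming
a `Service` base class in the `host` module; the method signatures are
illustrative, not the actual API:

    from abc import abstractmethod

    class InputService(host.Service):
        """Base class for input host services (sketch)."""

        @abstractmethod
        def map(self, args):
            """Make the requested resources available and return a reply."""

        def unmap(self):
            """Optional hook to clean up resources; no-op by default."""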
Instantiate a `host.ServiceManager` in the `Stage.run` section and
pass it to the host-side input code so it can be used to spawn the
input services.
Convert all existing inputs to the new service framework.
Instead of bind-mounting each individual input into the container,
create a temporary directory that is used by all inputs and bind-
mount this to the well known location ("/run/osbuild/inputs"). The
temporary directory is then passed to the input so that it can
make the requested resources available relative to that directory.
This is enforced by the common input handling code.
Additionally, pass the well-known input path to the stage via a new
"paths" key in its arguments dictionary.
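For illustration, the arguments passed to the stage might then carry
something like the following (the exact nesting is an assumption):

    args = {
        # ... existing stage arguments ...
        "paths": {
            "inputs": "/run/osbuild/inputs",  # well-known location in the container
        },
    }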
This helper property is misleading since it is not the name of the
input in the context of the manifest, but actually its "type". "Name"
is a left-over from the nomenclature of format v1, where the type of
stages and inputs was called `name`.
Resolve relative paths for items in the `api.arguments` call: since
paths differ between the host and the container, they can be
transmitted as relative paths. Resolve the items for all groups that
have paths registered.
Instead of using `check=True` for `subprocess.run`, which turns a
process failure (i.e. a non-zero return code) into a generic
`CalledProcessError` exception, use `check=False` and explicitly
handle mount errors, translating them into a `RuntimeError` with a
better error message.
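A rough sketch of the pattern (the exact mount invocation is
illustrative):

    import subprocess

    def mount(source, target, options):
        cmd = ["mount", "-o", ",".join(options), source, target]
        res = subprocess.run(cmd, stderr=subprocess.PIPE,
                             encoding="utf-8", check=False)
        if res.returncode != 0:
            msg = res.stderr.strip()
            raise RuntimeError(f"mounting {source} at {target} failed: {msg}")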
Implement a new `read_at` method that will bind-mount the tree of the
object to a specified location, instead of a temporary directory as is
done in the `read` method. Implement the latter via `read_at`.
Implement the corresponding methods for `Store{Client,Server}`. Since
the `ObjectStore.read_at` method will fail if the target directory
does not exist (or is of the wrong type), catch any exceptions in
the `StoreServer` and send those to the `StoreClient` via an `error`
entry.
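Roughly, assuming a hypothetical bind-mount helper and internal
attribute names:

    import tempfile

    def read_at(self, target: str) -> str:
        """Bind-mount the object's tree read-only at `target`."""
        bind_mount(self._tree, target, ro=True)    # hypothetical helper
        return target

    def read(self) -> str:
        tmp = tempfile.mkdtemp(dir=self._workdir)  # hypothetical temp location
        return self.read_at(tmp)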
This one is for David: also fix a missing blank line.
This disables buffering for the standard output stream for python
executables spawned within the build root. This should help with the
ordering of text output in stages: when stdout is buffered, debug
messages via `print` will end up in that buffer. When executables are
run in the stage via `subprocess.run`, their stdout has its own
buffering, which will be flushed at the end
of the run. If stdout was not manually flushed before invoking
the executable, the output of the tool will be emitted before
anything in the buffer. For example:
print("stage")
subprocess.run(["echo", "tool"])
This will result in the following ordering:
"tool"
"stage"
To avoid this, without having to manually flush the stdout
buffer before every `subprocess.run`, disable buffering for
python binaries run inside the build root.
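One way to achieve this, sketched here under the assumption that the
environment is used (rather than, say, the `-u` flag):

    import os
    import subprocess

    env = dict(os.environ)
    env["PYTHONUNBUFFERED"] = "1"   # disable stdout buffering for python
    # hypothetical stage path, for illustration only
    subprocess.run(["/run/osbuild/bin/org.osbuild.example"], env=env, check=True)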
Since low-level primitives (os.read) are used to read from the stdout
pipe, manual text decoding was necessary there anyway. The `encoding`
argument meant that we could forgo the manual decoding for the call
to `communicate`, but this meant that text handling was not uniform.
Therefore, remove the `encoding` argument from the `Popen` call and
manually decode all the text.
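The resulting pattern, sketched:

    import os
    import subprocess
    import sys

    cmd = ["echo", "tool"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)  # no encoding= argument

    # the low-level read path already required manual decoding ...
    data = os.read(proc.stdout.fileno(), 32768)
    if data:
        sys.stdout.write(data.decode("utf-8"))

    # ... and any remaining output from communicate() is decoded the same way
    out, _ = proc.communicate()
    if out:
        sys.stdout.write(out.decode("utf-8"))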
Host services are a way to provide functionality to stages that is
restricted to the host and not directly available in the container,
such as providing input to stages, device access, and mounting.
This commit introduces a `ServiceManager` class that can be used to
start and (automatically) stop host services, as well as a `Service`
base class together with a `ServiceClient` class that can be used to
implement host services and communicate with them. Refer to the doc
string of the module for more information.
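A hypothetical usage sketch; method and argument names here are
illustrative only, not the actual API:

    with host.ServiceManager() as mgr:
        # starting a service yields a client to talk to it
        client = mgr.start("org.osbuild.files",
                           "/usr/lib/osbuild/inputs/org.osbuild.files")
        reply = client.call("map", {"target": "/run/osbuild/inputs/tree"})
    # leaving the `with` block stops all services that were started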
Often, a message is sent and then followed by a call to `recv` to
wait for the reply. Create a simple helper, `send_and_recv`, that
does both in one method.
Add a simple check for that helper to the tests.
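In essence (sketch):

    def send_and_recv(self, *args, **kwargs):
        """Send a message and wait for the reply in a single call."""
        self.send(*args, **kwargs)
        return self.recv()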
Add a new constructor method that allows creating a `Socket` from
an existing file descriptor of a socket. This might be needed when
the socket has been passed to a child process.
Add a simple test for the new constructor method.
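Roughly, inside `jsoncomm.Socket` (the constructor name and internals
are assumptions):

    import socket

    @classmethod
    def new_from_fd(cls, fd: int):
        """Wrap an already existing socket file descriptor."""
        return cls(socket.socket(fileno=fd))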
Add a new constructor method, `Socket.new_pair`, to create a pair
of connected sockets (via `socketpair`) and wrap both sides via
`jsoncomm.Socket`.
Add a simple test to check it.
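Sketch (the socket type and wrapping constructor are assumptions):

    import socket

    @classmethod
    def new_pair(cls):
        """Return two connected sockets, each wrapped as a jsoncomm.Socket."""
        a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
        return cls(a), cls(b)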
For schema version 2 of modules, the `definitions` node, as defined in
the module itself, won't be at the `options` level but at the level of
the `properties` node. Look for a `definitions` node at that
`properties` level and, if found, move it to the top.
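In essence (sketch; variable names are illustrative):

    props = schema.get("properties", {})
    if "definitions" in props:
        # hoist the module's definitions to the top level of the schema
        schema["definitions"] = props.pop("definitions")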
Work around a bug on aarch64[1] where `qemu-img` would hang
about a third of the time when converting images. To be able
to activate the work-around based on the environment, i.e.
only on certain distributions, introduce an environment
variable, `OSBUILD_QEMU_IMG_COROUTINES`, that is set in the
runner and then picked up in the assembler.
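Sketched under the assumption that the variable carries the coroutine
count passed to `qemu-img convert` via its `-m` option:

    import os
    import subprocess

    cmd = ["qemu-img", "convert", "-O", "qcow2", "disk.raw", "disk.qcow2"]
    coroutines = os.environ.get("OSBUILD_QEMU_IMG_COROUTINES")
    if coroutines:
        cmd += ["-m", coroutines]
    subprocess.run(cmd, check=True)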
[1] https://bugs.launchpad.net/qemu/+bug/1805256
When new ostree related stages and the new ostree input were added,
they were included in the main package, since all the modules were
manually listed in the corresponding exclude/include sections. Change
that by using wildcards, since all ostree related modules should match
the org.osbuild.ostree* pattern.
When parsing the module file, parse the JSON directly from the AST
node, because the AST node contains the line number of the schema
in the module and thus we can resolve the correct line number for
errors within the JSON. Convert the `JSONDecodeError` to a
`SyntaxError`, which results in an overall better exception message:
Before:
Traceback (most recent call last):
File "/workspaces/osbuild/osbuild/meta.py", line 331, in get_schema
opts = self._make_options(version)
[...]
File "/usr/lib64/python3.9/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in
double quotes: line 2 column 1 (char 14)
After:
Traceback (most recent call last):
File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
[...]
raise SyntaxError(msg, detail) from None
File "stages/org.osbuild.ostree.init-fs", line 31
additionalProperties: False
^
SyntaxError: Invalid schema: Expecting property name enclosed in ...
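The conversion boils down to something like this (variable names are
illustrative):

    import json

    try:
        schema = json.loads(schema_source)      # text taken from the AST node
    except json.JSONDecodeError as err:
        lineno = node_lineno + err.lineno - 1   # offset by the node's line
        line = err.doc.splitlines()[err.lineno - 1]
        detail = (module_path, lineno, err.colno, line)
        raise SyntaxError(f"Invalid schema: {err.msg}", detail) from None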
Define the mapping of modules and their paths at the `ModuleInfo` class
level instead of having it inline in a function. This makes it possible
to use it from other places in the code.
Disable host-only mode when running dracut and generate reproducible
images by default.
Suggested-by: gicmo
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move from using 'zram' to 'zram-generator-defaults' in the ostree bootiso
testing manifest. More information is available in the Fedora 33
Change document [1].
Add org.osbuild.kernel-cmdline stage to fedora-boot.json manifest
because of change in how grub handles the kernel command line arguments
[2].
The GRUB2 Stage 2 checksums in the assemblers test are updated. The
changes have been verified by building the fedora-boot.json manifest
with each checked filesystem and booting the image in QEMU in legacy
mode.
[1] https://fedoraproject.org/wiki/Changes/SwapOnZRAM
[2] https://github.com/osbuild/osbuild-composer/pull/982#issuecomment-697356929
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The previous version covered too few use cases, specifically only a
single subscription. That is of course not the case for many hosts, so
osbuild needs to understand multiple subscriptions.
When running org.osbuild.curl source, read the
/etc/yum.repos.d/redhat.repo file and load the system subscriptions from
there. While processing each url, guess which subscription is tied to
the url and use the CA certificate, client certificate, and client key
associated with that subscription. It must be done this way because
the depsolving and fetching of RPMs may be performed on different
hosts, and the subscription credentials differ in that case.
More detailed description of why this approach was chosen is available
in osbuild-composer git: https://github.com/osbuild/osbuild-composer/pull/1405
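A rough sketch of the lookup; the option names follow the usual
redhat.repo layout, but the matching heuristic and details are
assumptions:

    import configparser

    def find_subscription(url):
        repos = configparser.ConfigParser()
        repos.read("/etc/yum.repos.d/redhat.repo")
        for section in repos.sections():
            baseurl = repos.get(section, "baseurl", fallback="")
            # crude guess: compare against the static prefix of the baseurl
            if baseurl and url.startswith(baseurl.split("$", 1)[0]):
                return {
                    "ca_cert": repos.get(section, "sslcacert", fallback=None),
                    "client_cert": repos.get(section, "sslclientcert", fallback=None),
                    "client_key": repos.get(section, "sslclientkey", fallback=None),
                }
        return None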
RHEL 8.4 is now GA, so we don't need any extra tests for it. This should also
make the CI more reliable because having two distros with the same DISTRO_CODE
caused some tests to fail randomly (they used the same intermediate
artifacts).
Define a set of pre-defined ostree related annotations that can
and should be used to indicate that a container image contains
an OSTree commit. This can be used by other tools to inspect and
extract the commit more easily.
Add support for arbitrary manifest annotations: allow anything
with the exception of the `org.osbuild` and `org.opencontainer`
prefixes. The former is reserved by us, the latter by the OCI
image specification. The latter specifies a set of pre-defined keys,
which are not yet supported by osbuild but will be in the future,
partly via more generic options (e.g. creation time).
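The reserved-prefix rule amounts to a check along these lines
(sketch):

    RESERVED_PREFIXES = ("org.osbuild", "org.opencontainer")

    def check_annotations(annotations: dict):
        for key in annotations:
            if key.startswith(RESERVED_PREFIXES):
                raise ValueError(f"reserved annotation prefix: {key}")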
Add a simple tool that will spit out a valid org.osbuild.inline
source entry that encodes the given file. It currently always uses
base64 for the encoding and sha256 for the hashing.
Add a new source for transporting binary data within the source
entry itself. The data is ascii encoded in the `data` property
of the inline source item, with the encoding that is used being
specified in the `encoding` property.
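In essence, the tool does something like the following (the exact
nesting of the source entry is an assumption):

    import base64
    import hashlib
    import json
    import sys

    def make_inline_item(path):
        with open(path, "rb") as f:
            data = f.read()
        checksum = "sha256:" + hashlib.sha256(data).hexdigest()
        item = {"encoding": "base64",
                "data": base64.b64encode(data).decode("ascii")}
        return checksum, item

    checksum, item = make_inline_item(sys.argv[1])
    print(json.dumps({"org.osbuild.inline": {"items": {checksum: item}}},
                     indent=2))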
Now that there is a common utility function to verify the checksum
of a file, use that.
Also fix the JSON schema entry for the property to have the correct
minimum and maximum digest lengths, given the supported algorithms,
which are 32 (md5) and 128 (sha512) characters.
Test that `checksum.verify_file` works correctly, which internally
uses the only other utility function `checksum.hexdigest_file`.
Check all algorithms currently supported by the `org.osbuild.curl`
source.
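For reference, the utilities boil down to something like this (sketch,
assuming checksums of the form "<algorithm>:<digest>"):

    import hashlib

    def hexdigest_file(path, algorithm):
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_file(path, checksum):
        algorithm, want = checksum.split(":", 1)
        return hexdigest_file(path, algorithm) == want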
Create a setup.cfg file and move the pylint settings from .pylintrc
there. This file can be used to configure other tooling as well, so it
is generally more useful as a central configuration place.
Add basic checks for the ostree source, which include a successful
pull of a commit, an empty source entry, and one where the specified
commit is non-existent. For this, a simple commit in an ostree repo is
checked in. The commit was created via:
mkdir "/tmp/data"
echo "Hello World" > /tmp/data/hello.txt
ostree init --repo test/data/sources/org.osbuild.ostree/data/repo \
--mode=archive
ostree commit --repo test/data/sources/org.osbuild.ostree/data/ \
--branch "test/ostree" /tmp/data \
--timestamp="1995-05-13 12:34:56 +0000"
This should give a commit with the following commit id:
d6243b0d0ca3dc2aaef2e0eb3e9f1f4836512c2921007f124b285f7c466464d8
Since the `sources.SourcesServer` has been removed, nothing is
using the export functionality anymore. Inputs are now used to
make content in the store available to stages. Remove all the
export logic from org.osbuild.ostree.