Make sure "/sys/fs/selinux" is read-only, otherwise libselinux and
tools will assume that SELinux is available and active and in turn
use /sys/fs/selinux to e.g. verify the file system labels; this
will then prevent setting unknown labels via `setfiles`.
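A minimal sketch of one way this could be enforced with bubblewrap
(the sandboxing tool used for the build root, see the switch described
further down); the `argv` list holding the bwrap arguments is
illustrative:

    argv += ["--ro-bind", "/sys/fs/selinux", "/sys/fs/selinux"]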
Make the output_directory argument in Pipeline.assemble
and Assembler.run required. The qemu assembler assumes
it is passed in args and will crash without it. Making
it mandatory prevents this.
Allow a user to see the duration of each step in the osbuild pipeline.
This makes it possible to optimize the build system for the best
performance and to identify performance bottlenecks.
Signed-off-by: Major Hayden <major@redhat.com>
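A hedged sketch of how such a duration could be measured around each
step; `run_step` and `step.name` are illustrative placeholders, not
the actual osbuild code:

    import time

    start = time.monotonic()
    run_step(step)                 # placeholder for the real stage/assembler call
    print(f"{step.name}: {time.monotonic() - start:.2f}s elapsed")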
Change the default of libdir to /usr/lib/osbuild and
remove redundant logic. Additionally, change how the
Python package is detected.
Instead of checking if libdir is None, check whether
/usr/lib/osbuild is empty, i.e. whether the user has specified
a directory other than the default.
Run the container in a new network namespace, to isolate the host's
network from that of the container. Stages, assemblers and the tools
they execute are not supposed to assume network access is available
and this isolation will make sure of that.
Now that jsoncomm.Socket is using a connection-oriented socket,
the destination in `socket.sendmsg` is ignored and thus can and
should be dropped from the `jsoncomm.Socket.send` method.
Adjust the tests accordingly.
Now that jsoncomm is using a connection oriented protocol, the
`addr` parameter is not needed[*] and can thus be removed from
the `BaseAPI._message` message dispatcher. Adapt all usages
of it, including the tests.
[*] sendmsg ignores the destination parameter for connection
oriented sockets.
Switch to a connection-oriented, datagram-based protocol, i.e.
`SOCK_SEQPACKET`, instead of `SOCK_DGRAM`. It still preserves
message boundaries, but since it is connection-oriented, neither
the client nor the server needs to specify the destination address
of the peer in sendmsg/recvmsg. Moreover, the host will be able
to send messages to the client, even if the latter is sandboxed
with a separate network namespace. In the `SOCK_DGRAM` case the
auto-bound address of the client would not be visible to the host
and thus sending messages to it would fail.
Adapt the jsoncomm tests as well as `BaseAPI`.
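A self-contained sketch of the difference, using plain `socket`
objects rather than the actual jsoncomm wrapper; the path is made up:

    import os
    import socket
    import tempfile

    path = os.path.join(tempfile.mkdtemp(), "api-socket")

    # server: bind and listen, then accept the connection from the client
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
    srv.bind(path)
    srv.listen()

    # client: connect instead of auto-binding its own address
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
    cli.connect(path)
    conn, _ = srv.accept()

    cli.sendmsg([b'{"method": "ping"}'])  # no destination address needed
    print(conn.recv(1024))                # message boundaries are preserved
    conn.sendmsg([b'{"reply": "pong"}'])  # the host can message the client, too
    print(cli.recv(1024))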
Implement `accept` and `listen`, which call the equivalent methods
on the underlying socket; this prepares the move to a
connection-oriented socket, i.e. `SOCK_SEQPACKET`.
Add a new `blocking` property to get and set the blocking state
of the underlying socket. In Python this is tied to the timeout
setting of the `socket.socket`, i.e. non-blocking means having
any timeout specified, including `0` for not waiting at all;
blocking means having a timeout value of `None`.
The getter emulates the logic of `socket.getblocking()`, which
was added in Python 3.7, and we need to stay compatible with 3.6.
The logic is implemented in `Modules/socketmodule.c` in CPython.
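A sketch of what the property could look like, following the
description above and assuming the wrapped socket is stored in
`self._socket`:

    @property
    def blocking(self) -> bool:
        # blocking means no timeout at all; any timeout, even 0, is non-blocking
        return self._socket.gettimeout() is None

    @blocking.setter
    def blocking(self, value: bool):
        self._socket.settimeout(None if value else 0)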
FdSet derives directly and only from `object`. Not specifying
any base classes is the same as specifying an empty list of base
classes; therefore get rid of the empty list.
Make sure file descriptors are never leaked by closing them after
the `_message` method invocation. Clients that want to hold on to
fds past the scope of the method should use `FdSet.steal` to
extract those.
Adapt the `LoopServer`'s `_message` implementation accordingly.
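A hypothetical `_message` handler illustrating the contract; the
message format and attribute names are made up:

    def _message(self, msg, fds, sock):
        if msg.get("method") == "add":
            self.devices.append(fds.steal(0))  # take ownership: not closed below
        # anything still left in `fds` is closed by BaseAPI after this returns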
The `BuildRoot` wants to create temporary directories in two
locations, `rundir` (supplied as `path`) and `vardir`. Make
sure these directories exist before trying to create temporary
directories in them.
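A minimal sketch of that preparation, with `path` (the rundir) and
`vardir` as in the message above:

    import os
    import tempfile

    os.makedirs(path, exist_ok=True)     # the supplied rundir
    os.makedirs(vardir, exist_ok=True)
    var = tempfile.TemporaryDirectory(prefix="osbuild-", dir=vardir)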
Now that all API providers are converted to use the high level
dispatcher, make the implementation of that mandatory by declaring
it an abstract method.
Availability of new incoming data is indicated to clients, i.e.
deriving classes, by invoking the `_dispatch` method, with the
`jsoncomm.Socket` as argument. All clients then need to call
`Socket.recv` to actually receive the data.
Provide a new high-level message dispatcher by adding a standard
implementation of `_dispatch` to `BaseAPI` that calls
`Socket.recv` and then invokes the new high-level `_message`
method with the data (`msg`), file descriptors (`fds`, if passed),
the socket (`sock`) and the peer address (`addr`).
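Roughly, the default implementation could look like this (a sketch,
assuming `recv` returns the message, the file descriptors and the
peer address, not the verbatim osbuild code):

    def _dispatch(self, sock):
        msg, fds, addr = sock.recv()   # jsoncomm.Socket.recv
        if msg is None:
            return
        self._message(msg, fds, sock, addr)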
Rely on the ability of `BaseAPI` to auto-generate socket addresses
when none is provided. The `BuildRoot` does not rely on the
sockets being created in the `BuildRoot.api` directory anymore and
will instead bind-mount each individual socket address to the well
known location via the `BaseAPI.endpoint` identifier.
Convert all API providers to take the `socket_address` as an
optional keyword argument.
Make the `socket_address` argument to `BaseAPI` optional, i.e.
allow it to be `None`. In that case, create a temporary directory
and place the socket, named with the value of `endpoint`, in that
directory. On context exit, the directory is cleaned up. As long
as the jsoncomm.Socket server is running, `socket_address` will
always be valid and indicate the address of the server.
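A sketch of that setup on context entry, with hypothetical attribute
names such as `_tmpdir`:

    def __enter__(self):
        if self.socket_address is None:
            self._tmpdir = tempfile.TemporaryDirectory(prefix="api-")
            self.socket_address = os.path.join(self._tmpdir.name, self.endpoint)
        # ... create and register the jsoncomm.Socket server as before ...
        return self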
Add support for `util.types.PathLike` paths for socket addresses,
instead of just plain strings. Test it by using pathlib.Path to
create the address in the corresponding test.
Add a simple new module meant to define types that are commonly
used throughout the code-base. For starters, define `PathLike`
meant to represent file system paths, i.e. strings, bytes, or
anything that provides the `os.PathLike` protocol, i.e. that
can be used with `os.fspath`.
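A sketch of such a type alias, essentially the union of everything
`os.fspath` accepts:

    import os
    from typing import Union

    PathLike = Union[str, bytes, os.PathLike]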
The current way API end points, i.e. sockets for API providers,
are provided to the sandbox is via a temporary directory that
is created by `BuildRoot`, which later gets bind-mounted to a
well-known path, i.e. /run/osbuild/api, inside the sandbox. API providers
are expected to create their socket in that temporary directory.
Now that `BuildRoot` has a `register_api` method and each API has
an `endpoint` property, the socket of each API provider, no matter
where it is located, will get bind-mounted individually inside
the sandbox at /run/osbuild/api using the `endpoint` identifier.
For backwards compatibility reasons the temporary api directory
will still be created by `BuildRoot`, but it is no longer bind
mounted inside the container. This paves the way to remove that
directory completely once all API providers are converted to not
use that directory anymore.
Add a new abstract class property to `BaseAPI` called `endpoint`,
meant to be implemented by deriving classes in order to identify
the end point name for the API provider.
Implement the new property in all existing API providers.
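A sketch of how such an abstract class property and one provider
implementation could look; the concrete return value is illustrative:

    import abc

    class BaseAPI(abc.ABC):
        @property
        @abc.abstractmethod
        def endpoint(self):
            """Name under which the socket is exposed inside the sandbox"""

    class API(BaseAPI):
        @property
        def endpoint(self):
            return "osbuild"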
Register all API end point providers with the `BuildRoot` via the
new `BuildRoot.register_api` call. The context management is now
done via the `BuildRoot` itself.
Add a new `register_api` method that is meant to be used by clients
to register API end point providers, i.e. instances of `api.BaseAPI`.
When the context of the `BuildRoot` is entered, all providers are
activated, i.e. their context is entered. In case `register_api` is
called with an already active context, the provider will immediately
be activated. In both cases their lifetime is thus bound to the
context of the `BuildRoot`. This also means that they are cleaned-up
with the `BuildRoot`, i.e. when its context is exited.
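A sketch of the registration logic, assuming the `BuildRoot` tracks
providers in a list and manages its own context with a
`contextlib.ExitStack` (illustrative names):

    def register_api(self, api):
        self._apis.append(api)
        if self._exitstack is not None:          # context already entered:
            self._exitstack.enter_context(api)   # activate the provider right away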
The `api.API` provides a `setup-stdio` method that is meant to
be used by clients to replace their stdio with the supplied fds
from the server. Provide a canonical `api.setup_stdio` method
that will do exactly that.
Split out the part of `api.API` that is responsible for providing
the server infrastructure for the API; i.e. setting up the server
and the corresponding context manager and asynchronous event
handling. This leaves `API` itself with just the implementation
of the high-level protocol and makes the API-server part re-usable.
NB: pylint, for some reason, confuses `API` and `BaseAPI`, like in
`test_monitor`. Annotate that accordingly.
Use `os.path.join` to build the path for the source cache, instead
of string interpolation. This makes it possible to use other Path
representations, like `pathlib.Path`, transparently.
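Illustrative before/after; the variable names are made up:

    cache = f"{store}/sources/{checksum}"              # before
    cache = os.path.join(store, "sources", checksum)   # after: works with pathlib.Path too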
Currently `objectstore.Object.{read, write}` directly return
strings but in the future they might return an Object that is
an `os.PathLike`, i.e. has a `__fspath__` method, instead.
Prepare for that by ensuring all `tree`s are converted to their
file system representation via `os.fspath` when needed, e.g.
when creating the bind-mount arguments for the `BuildRoot`.
Instead of using string interpolation, use `os.path.join` in all
places. This allows `os.PathLike` objects as well as bytes
(i.e. `objectstore.PathLike` types) to be used and is generally
cleaner.
Support `os.PathLike` arguments in `Object.export` by explicitly
converting the supplied argument via `os.fspath`. Additionally,
declare the support for those via the Python typing system with
a new general `PathLike` Union type, i.e. all valid types for
`os.fspath`, which are `str`, `bytes` and `os.PathLike`.
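A sketch of the conversion at the top of `export`; the signature is
simplified:

    def export(self, to_directory: PathLike):
        to_directory = os.fspath(to_directory)  # accept str, bytes or os.PathLike
        # ... copy the object's tree to `to_directory` as before ...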
Instead of having a duplication of the invocation of `cp`, once in
`init`, once in `export`, re-use the latter in the former: the
object to be copied is accessed in the normal way via the store, and
then "exported" to the new location. This gets rid of the call to
resolve_ref as a nice side effect, which means less poking into
the internals of the store.
This swaps the `systemd-nspawn` implementation for `bubblewrap` to
contain sub-processes. It also adjusts the `BuildRoot` implementation
to reduce the number of mounts required to keep locally.
This has the following advantages:
* We know exactly what the build-root looks like. Only the bits and
pieces we select will end up in the build-root. We can let RPM
authors know what environment their post-install scripts need to
run in, and we can reliably test this.
* We no longer need any D-Bus access or access to other PID1
facilities. Bubblewrap allows us to execute from any environment,
including containers and sandboxes.
* Bubblewrap setup is significantly faster than nspawn. This is a
minor point though, since nspawn is still fast enough compared to
the operations we perform in the container.
* Bubblewrap does not require root.
At the same time, we have a bunch of downsides which might increase the
workload in the future:
* We now control the build-root, which also means we have to make sure
it works on all our supported architectures, all quirks are
included, and all required resources are accessible from within the
build-root.
The good thing here is that we have lots of prior art we can
follow, and everyone else is playing the same game of whack-a-mole,
so we can join that fun.
The `bubblewrap` project is used by podman and flatpak, is packaged
for all major distributions, and looks like a stable dependency.
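For illustration, a bubblewrap invocation of roughly this shape could
set up such a sandbox; the paths and the exact set of flags are
assumptions, not the precise osbuild command line:

    import subprocess

    buildroot = "/path/to/buildroot"   # illustrative paths, not the real layout
    tree = "/path/to/tree"

    argv = [
        "bwrap",
        "--ro-bind", buildroot, "/",        # only what we put there ends up inside
        "--proc", "/proc",
        "--dev", "/dev",
        "--bind", tree, "/run/osbuild/tree",
        "--unshare-pid",
        "--die-with-parent",
    ]
    subprocess.run(argv + ["/bin/true"], check=True)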
Introduce the concept of pipeline monitoring: A new monitor class is
passed to the pipeline.run() function. The main idea is to separate
the monitoring from the code that builds the pipeline. During the
build, various methods will be called on that object, representing
the different steps and their targets. This
can be used to fully stream the output of the various stages or just
indicate the start and finish of the individual stages.
This replaces the 'interactive' argument throughout the pipeline
code. The old interactive behavior is replicated via the new
`LogMonitor` class that logs the beginning of stages/assembler,
but also streams all the output of them to stdout.
The non-interactive behavior of not reporting anything is provided
by the `NullMonitor` class, which simply outputs nothing.
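A hedged sketch of the idea; the hook names (`begin`, `log`,
`finish`) are hypothetical and only illustrate the shape of such a
monitor:

    import sys

    class NullMonitor:
        """Reports nothing (the old non-interactive behavior)."""
        def begin(self, module): pass
        def log(self, line): pass
        def finish(self, result): pass

    class LogMonitor(NullMonitor):
        """Logs the start of each stage/assembler and streams its output."""
        def begin(self, module):
            print(f"Starting {module}", file=sys.stderr)
        def log(self, line):
            sys.stdout.write(line)

    # pipeline.run(store, monitor=LogMonitor())   # illustrative call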
Instead of using plain Python strings and appending to them, use
'io.StringIO', an in-memory text stream designed for exactly this
kind of buffering. This should perform better than repeated string
concatenation.
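For example:

    import io

    buf = io.StringIO()
    buf.write("output line 1\n")   # append chunks as they arrive
    buf.write("output line 2\n")
    data = buf.getvalue()          # the full accumulated output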
Instead of either using a text file, in non-interactive mode, or
stdout directly otherwise, create a pipe and always use that
for stdout/stderr when preparing the output for 'setup_stdio'.
This streamlines the two cases (interactive, non-interactive) and
as a result 'API.output' will always contain the full output data.
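A rough sketch of the mechanism, with the monitor and error handling
left out:

    import io
    import os

    output = io.StringIO()
    read_fd, write_fd = os.pipe()   # one pipe serves as both stdout and stderr
    # `write_fd` is handed to the sandboxed process via setup_stdio; the host
    # drains `read_fd` into `output` (and also onto the console when streaming)
    for chunk in iter(lambda: os.read(read_fd, 4096), b""):
        output.write(chunk.decode())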
Close the event loop when the context is exited, which will clear
the internal queues and shut down the executor of the event loop.
Not doing this will create a warning when the object is garbage
collected.
Close the event loop when the context is exited, which will clear
the internal queues and shut down the executor of the event loop.
Not doing this will create a warning when the object is garbage
collected.
In three places we have more than 7 instance attributes, but fewer
than 10; instead of disabling the warning for all these cases,
increase the limit to a reasonable size of 10 and re-enable the
warnings in all those places.
If the user does not specify an output directory or checkpoints
to osbuild, exit successfully without building.
Previously, if a user did not include an output directory or
checkpoints, osbuild would build the manifest and throw away the
result. Returning early is clearer to the user and avoids wasted work.
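A sketch of the check at the top of the CLI; the argument names are
illustrative:

    if not args.output_directory and not args.checkpoints:
        print("Nothing to do: no output directory and no checkpoints given")
        return 0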
Instead of having another indirection via `main_cli`, directly
use `osbuild_cli` as the main function in `__main__.py`. Also use
it as the entry point for the generated `osbuild` executable.
Change `osbuild_cli` to be self-contained, i.e. it directly uses
`sys.argv` and `sys.exit`.
The way secrets work has been changed via commit 372b117: instead
of passing them in via the command line, the information on how to
obtain secrets is encoded alongside the sources themselves.
The only stage that still supports the old style is the
deprecated org.osbuild.dnf stage, which might be removed in the
near future.