Implement `accept` and `listen`, which call the equivalent methods
on the underlying socket; this prepares the move to a
connection-oriented socket type, i.e. `SOCK_SEQPACKET`.
Add a new `blocking` property to get and set the blocking state
of the underlying socket. In Python this is tied to the timeout
setting of the `socket.socket`: non-blocking means having any
timeout specified, including `0` for not waiting at all; blocking
means having a timeout of `None`.
The getter emulates the logic of `socket.getblocking`, which was
only added in Python 3.7, while we need to stay compatible with 3.6.
In CPython that logic is implemented in `Modules/socketmodule.c`.
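A minimal sketch of what that emulation boils down to, assuming the
wrapped socket is stored as `self._socket` (the attribute name is
illustrative):

```python
class Socket:
    # ...

    @property
    def blocking(self) -> bool:
        # Emulates socket.getblocking() (3.7+): blocking means no timeout
        return self._socket.gettimeout() is None

    @blocking.setter
    def blocking(self, value: bool) -> None:
        # setblocking(True) clears the timeout,
        # setblocking(False) sets it to 0.0
        self._socket.setblocking(value)
```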
This way the test can benefit from osbuild's internal cache:
the first subtest builds all the stages and runs the assembler;
the next subtests can reuse the built stages and just run the assembler.
Some data from my machine running the qemu test:
building the manifest takes about 120 seconds;
running just the assembler on the cache's content takes 30 seconds.
Before this change, the whole manifest was built 3 times:
3 * 120 = 360 seconds
After this change, the whole manifest is built once and the cache
is reused 2 times:
1 * 120 + 2 * 30 = 180 seconds
Let the caller decide which executor instance should be used to build
the manifest. This change allows us to use osbuild's built-in cache
in the following commit.
`FdSet` derives directly and only from `object`. Not specifying
any base classes is the same as specifying an empty list of base
classes; therefore get rid of the empty list.
Make sure file descriptors are never leaked by closing them after
the `_message` method invocation. Clients that want to hold on to
fds past the scope of the method should use `FdSet.steal` to
extract those.
Adapt the `LoopServer`'s `_message` implementation accordingly.
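Roughly, a `_message` implementation that needs to keep a descriptor
now looks like the following sketch; the message name and `steal`
taking an index are assumptions made for illustration:

```python
from osbuild import api   # module path as referenced elsewhere

class LoopServer(api.BaseAPI):
    def _message(self, msg, fds, sock):
        if msg["method"] == "add-device":   # hypothetical request
            # Keep this descriptor beyond _message: steal it from the
            # set so the dispatcher's clean-up does not close it.
            self.devices.append(fds.steal(0))
        # anything still left in `fds` is closed after _message returns
```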
The `BuildRoot` wants to create temporary directories in two
locations, `rundir` (supplied as `path`) and `vardir`. Make
sure these directories exist before trying to create temporary
directories in them.
Now that all API providers are converted to use the high level
dispatcher, make the implementation of that mandatory by declaring
it an abstract method.
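In code this amounts to roughly the following; whether `BaseAPI`
derives from `abc.ABC` directly or uses a metaclass is an
implementation detail:

```python
import abc

class BaseAPI(abc.ABC):
    @abc.abstractmethod
    def _message(self, msg, fds, sock):
        """Handle one request; every API provider must implement this."""
```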
Availability of new incoming data is indicated to clients, i.e.
deriving classes, by invoking the `_dispatch` method, with the
`jsoncomm.Socket` as argument. All clients then need to call
`Socket.recv` to actually receive the data.
Provide a new high-level message dispatcher by adding a standard
implementation of `_dispatch` to `BaseAPI` that calls `Socket.recv`
and then invokes the new high-level `_message` method with the data
(`msg`), the file descriptors (`fds`, if any were passed), the
socket (`sock`), and the peer address (`addr`).
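Condensed, that default implementation amounts to something like the
following sketch; that `recv` returns payload, descriptor set, and
peer address, and that the descriptor set offers a `close`, are
assumptions here:

```python
def _dispatch(self, sock):
    msg, fds, addr = sock.recv()
    if msg:
        # hand everything to the high-level handler; the peer
        # address is available to be forwarded as well
        self._message(msg, fds, sock)
    # close whatever _message did not steal from the set
    fds.close()
```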
Rely on the ability of `BaseAPI` to auto-generate socket addresses
when none is provided. The `BuildRoot` does not rely on the sockets
being created in the `BuildRoot.api` directory anymore and will
instead bind-mount each individual socket address to the well-known
location via the `BaseAPI.endpoint` identifier.
Convert all API providers to take the `socket_address` as an
optional keyword argument.
Make the `socket_address` argument to `BaseAPI` optional, i.e.
allow it to be `None`. In that case, create a temporary directory
and place the socket, named with the value of `endpoint`, in that
directory. On context exit, the directory is cleaned up. As long
as the `jsoncomm.Socket` server is running, `socket_address` will
always be valid and indicate the address of the server.
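A condensed sketch of the temporary-address handling; the helper and
its prefix are purely illustrative, the real logic lives in the
context management of `BaseAPI`:

```python
import os
import tempfile

def make_socket_address(endpoint: str):
    # Create a private directory and derive the socket address from
    # the endpoint name; the caller keeps the handle and calls
    # .cleanup() on it when the context exits.
    tmp = tempfile.TemporaryDirectory(prefix="api-")
    return tmp, os.path.join(tmp.name, endpoint)
```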
Add support for `util.types.PathLike` paths for socket addresses,
instead of just plain strings. Test it by using pathlib.Path to
create the address in the corresponding test.
Add a simple new module meant to define types that are commonly
used throughout the code-base. For starters, define `PathLike`,
meant to represent file system paths: strings, bytes, or anything
that implements the `os.PathLike` protocol, i.e. anything that can
be used with `os.fspath`.
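The module is essentially just the following definition (docstring
and exact layout aside):

```python
import os
from typing import Union

# everything that os.fspath() accepts
PathLike = Union[str, bytes, os.PathLike]
```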
The current way API end points, i.e. sockets for API providers,
are provided to the sandbox is via a temporary directory that
is created by `BuildRoot` which later gets bind-mounted to a well
known path, i.e. /run/osbuild/api inside the sandbox. API providers
are expected to create their socket in that temporary directory.
Now that `BuildRoot` has a `register_api` method and each API has
an `endpoint` property, the socket of each API provider, no matter
where it is located, will get bind-mounted individually inside
the sandbox at /run/osbuild/api using the `endpoint` identifier.
For backwards compatibility reasons the temporary api directory
will still be created by `BuildRoot`, but it is no longer bind
mounted inside the container. This paves the way to remove that
directory completely once all API providers are converted to not
use that directory anymore.
Add a new abstract class property to `BaseAPI` called `endpoint`,
meant to be implemented by deriving classes in order to identify
the end point name for the API provider.
Implement the new property in all existing API providers.
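For an individual provider this looks roughly like the sketch below;
the abstract declaration and the concrete endpoint name are
illustrative:

```python
import abc

class BaseAPI(abc.ABC):
    @property
    @abc.abstractmethod
    def endpoint(self):
        """Name under which this API's socket appears in the sandbox."""

class API(BaseAPI):
    @property
    def endpoint(self):
        return "osbuild"   # each provider returns its own endpoint name
```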
Register all API end point providers with the `BuildRoot` via the
new `BuildRoot.register_api` call. The context management is now
done via the `BuildRoot` itself.
Add a new `register_api` method that is meant to be used by clients
to register API end point providers, i.e. instances of `api.BaseAPI`.
When the context of the `BuildRoot` is entered, all providers are
activated, i.e. their context is entered. In case `register_api` is
called with an already active context, the provider will immediately
be activated. In both cases their lifetime is thus bound to the
context of the `BuildRoot`. This also means that they are cleaned-up
with the `BuildRoot`, i.e. when its context is exited.
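A compact sketch of that registration logic; that a
`contextlib.ExitStack` is used and the attribute names are
assumptions of this sketch:

```python
import contextlib

class BuildRoot:
    def __init__(self):
        self._apis = []
        self._stack = contextlib.ExitStack()
        self._active = False

    def register_api(self, api):
        self._apis.append(api)
        if self._active:
            # context already entered: activate the provider right away
            self._stack.enter_context(api)

    def __enter__(self):
        for api in self._apis:
            self._stack.enter_context(api)
        self._active = True
        return self

    def __exit__(self, *exc):
        # tears down all registered providers together with the BuildRoot
        self._active = False
        self._stack.close()
```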
The `api.API` provides a `setup-stdio` method that is meant to
be used by clients to replace their stdio with the supplied fds
from the server. Provide a canonical `api.setup_stdio` method
that will do exactly that.
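The client side presumably boils down to dup2'ing the received
descriptors over 0/1/2; a minimal sketch, where the helper name and
how the descriptors arrive from the server are assumptions:

```python
import os

def _replace_stdio(stdin_fd, stdout_fd, stderr_fd):
    # Duplicate the received descriptors over the standard ones and
    # close the originals afterwards.
    for fd, target in ((stdin_fd, 0), (stdout_fd, 1), (stderr_fd, 2)):
        if fd != target:
            os.dup2(fd, target)
            os.close(fd)
```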
Split out the part of `api.API` that is responsible for providing
the server infrastructure for the API; i.e. setting up the server
and the corresponding context manager and asynchronous event
handling. This leaves `API` itself with just the implementation of
the high-level protocol and makes the API-server part re-usable.
NB: pylint, for some reason, confuses `API` and `BaseAPI`, like in
`test_monitor`. Annotate that accordingly.
The `script` and `test` stages should not be used in production, and
their use should be discouraged in general. They may make sense for
debugging, but should not be shipped.
The test stage is still used by the boot tests, so leave that for now,
and only drop the scripts stage.
Signed-off-by: Tom Gundersen <teg@jlkm.no>
Create small test cases that check the execution of Stages and
Assembler. This ensures that path handling, the sandbox, as well
as basic result reporting works as expected.
Use `os.path.join` to build the path for the source cache, instead
of string interpolation. This makes it possible to use other Path
representations, like `pathlib.Path`, transparently.
Currently `objectstore.Object.{read, write}` directly return
strings but in the future they might return an Object that is
an `os.PathLike`, i.e. has a `__fspath__` method, instead.
Prepare for that by ensuring all `tree`s are converted to their
file system representation via `os.fspath` when needed, e.g.
when creating the bind-mount arguments for the `BuildRoot`.
Instead of using string interpolation and concatenation to build
file system paths, use `os.path.join` or directly the constructor
for `pathlib.Path`, which can take path segments.
Instead of using string interpolation, use `os.path.join` in all
places. This allows `os.PathLike` objects as well as bytes (i.e.
`objectstore.PathLike` types) to be used and is generally cleaner.
Support `os.PathLike` arguments in `Object.export` by explicitly
converting the supplied argument via `os.fspath`. Additionally,
declare the support for those via the Python typing system with a
new `PathLike` Union type covering all valid argument types for
`os.fspath`: `str`, `bytes`, and `os.PathLike`.
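In essence (shown here as a free function; in the code it is a
method on `Object`):

```python
import os
from typing import Union

# all valid argument types for os.fspath()
PathLike = Union[str, bytes, os.PathLike]

def export(to_directory: PathLike) -> None:
    # normalize early so the rest of the code works with a plain path
    dest = os.fspath(to_directory)
    # ... copy the object's tree into `dest` ...
```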
Instead of having a duplication of the invocation of `cp`, once in
`init`, once in `export`, re-use the latter in the former: the
object to be copied is accessed in the normal way via the store and
then "exported" to the new location. As a nice side effect, this
gets rid of the call to `resolve_ref`, which means less poking into
the internals of the store.
This swaps the `systemd-nspawn` implementation for `bubblewrap` to
contain sub-processes. It also adjusts the `BuildRoot` implementation
to reduce the number of mounts required to keep locally.
This has the following advantages:
* We know exactly what the build-root looks like. Only the bits and
pieces we select will end up in the build-root. We can let RPM
authors know what environment their post-install scripts need to
run in, and we can reliably test this.
* We no longer need any D-Bus access or access to other PID1
facilities. Bubblewrap allows us to execute from any environment,
including containers and sandboxes.
* Bubblewrap setup is significantly faster than nspawn. This is a
minor point though, since nspawn is still fast enough compared to
the operations we perform in the container.
* Bubblewrap does not require root.
At the same time, we have a bunch of downsides which might increase the
workload in the future:
* We now control the build-root, which also means we have to make sure
it works on all our supported architectures, all quirks are
included, and all required resources are accessible from within the
build-root.
The good thing here is that we have lots of prior art we can
follow, and everyone else is playing the same whack-a-mole, so we
can join that fun.
The `bubblewrap` project is used by podman and flatpak, is packaged
for all major distributions, and looks like a stable dependency.
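For illustration only, the kind of invocation this amounts to; the
paths and the exact option set are assumptions, not the actual
argument list `BuildRoot` assembles:

```python
import subprocess

def run_in_buildroot(root, command):
    # minimal, mostly read-only environment built with bubblewrap
    argv = [
        "bwrap",
        "--ro-bind", root, "/",     # the selected build-root, read-only
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--unshare-pid",
        "--die-with-parent",
        "--chdir", "/",
    ]
    return subprocess.run(argv + command, check=True)
```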
Create a new monitor that records all the invocations of the
monitoring (virtual) functions and use that to check that when
running (i.e. building) a pipeline all of them are executed
the expected number of times (and with the correct arguments).
Add a basic test that will set up an 'API' endpoint, then spawn a
child process that uses that 'API' endpoint to set up its stdio in
very much the same way as runners do. This is used to verify that
the API itself works properly as well as the new LogMonitor class
by comparing the inputs and outputs.
Introduce the concept of pipeline monitoring: A new monitor class is
passed to the pipeline.run() function. The main idea is to separate
the monitoring from the code that builds the pipeline. Through the build
process various methods will be called on that object, representing
the different steps and their targets during the build process. This
can be used to fully stream the output of the various stages or just
indicate the start and finish of the individual stages.
This replaces the 'interactive' argument throughout the pipeline
code. The old interactive behavior is replicated via the new
`LogMonitor` class that logs the beginning of stages/assembler,
but also streams all the output of them to stdout.
The non-interactive behavior of not reporting anything is done by
using the `NullMonitor` class, which in turn outputs nothing.
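Conceptually, a monitor is just an object with a handful of
callbacks; a trimmed-down sketch where the method names are meant to
convey the idea rather than the exact interface:

```python
import sys

class NullMonitor:
    """Report nothing: the old non-interactive behaviour."""
    def begin(self, stage): pass
    def finish(self, result): pass
    def log(self, line): pass

class LogMonitor(NullMonitor):
    """Log the start of each stage/assembler and stream its output."""
    def begin(self, stage):
        print(f"Starting {stage}", flush=True)
    def log(self, line):
        sys.stdout.write(line)
```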
Instead of using plain Python strings and appending to them, use
'io.StringIO', which is a data structure meant for in-memory text
I/O. This should increase performance compared to repeatedly
concatenating plain strings.
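For example:

```python
import io

chunks = ["line 1\n", "line 2\n"]   # e.g. data read from the output pipe

buf = io.StringIO()
for chunk in chunks:
    buf.write(chunk)                # cheap append, no quadratic copying
output = buf.getvalue()
```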
Instead of using either a text file (in non-interactive mode) or
stdout directly (otherwise), create a pipe and always use that for
stdout/stderr when preparing the output for 'setup_stdio'.
This streamlines the two cases (interactive, non-interactive) and
as a result 'API.output' will always contain the full output data.
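Schematically (the surrounding event-loop plumbing is left out):

```python
import os

# one pipe serves both the interactive and the non-interactive case
read_fd, write_fd = os.pipe()
# write_fd is what 'setup_stdio' hands to the sandboxed process as
# stdout/stderr; read_fd is drained into API.output by the event loop
```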
Close the event loop when the context is exited, which will clear
the internal queues and shut down the executor of the event loop.
Not doing this will create a warning when the object is garbage
collected.
In three places we have more than 7 instance attributes, but fewer
than 10; instead of disabling the warning for all these cases,
increase the limit to a reasonable 10 and re-enable the warning in
all those places.
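Concretely, this means bumping pylint's design-checker limit, e.g.
in a pylintrc; where exactly the setting lives in this repository is
not spelled out here:

```ini
[DESIGN]
# pylint's default is 7; three classes legitimately need up to 10
max-attributes=10
```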
#471 extends the assembler test suite to also test xfs and btrfs
filesystems in the raw and qemu assemblers. However, this change
leads to long running times for this suite.
The running time of these tests consists of 3 main steps:
1) Building the build pipeline
2) Building the stages
3) Running the assembler
There are two optimization approaches:
1) Caching
OSBuild supports caching, so it's possible to cache the results of
the first two steps.
2) Minimizing the operating system tree
Assemblers don't care about the image contents. Therefore, it's
possible to create just a small tree to test the assemblers with.
This should lead to a speed-up in step 2 (a smaller tree builds
more quickly) and in step 3 (a big part of assembling is just
copying files over to the image).
This commit implements the second approach. A new test manifest is
added, which just installs the filesystem package and its
dependencies; the resulting tree is then labeled. This solution was
chosen so that the assemblers get something that looks like a
proper filesystem tree but can also be built quickly.
Before this change, the test_rawfs method with #471 merged ran for 842 seconds.
After this change, it ran for 391 seconds.