Wrap all calls to the various setup functions in the new exception
handler provided by `osbuild.api`. This will make sure that any
exception is properly printed to stderr, as well as communicated
to osbuild in a structured and machine readable way.
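Roughly, the intent is the following (a sketch, assuming the handler
is exposed as a context manager named `exception_handler`; the setup
functions are illustrative placeholders):

```python
from osbuild import api

def setup_stdio():
    ...  # placeholder for one of the runner's setup steps

def setup_mounts():
    ...  # placeholder for another setup step

# any exception raised inside the block is printed to stderr and
# reported back to osbuild in a structured, machine readable form
with api.exception_handler():
    setup_stdio()
    setup_mounts()
```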
Use `traceback.print_tb()` to serialize the exception's backtrace.
The previously used expression `str(e.__traceback__)` will just
give `<traceback object at 0x…>`, which is not very helpful.
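For illustration, one generic way to turn a traceback object into
text (not necessarily the exact code used):

```python
import io
import traceback

try:
    raise ValueError("boom")
except ValueError as e:
    # str(e.__traceback__) would only yield the object's repr
    buf = io.StringIO()
    traceback.print_tb(e.__traceback__, file=buf)
    backtrace = buf.getvalue()  # readable frames, one per line
```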
Add a test to check that the name of the method that raises the
exception, which is itself called `exception`, appears in the
traceback.
When using `str(type(exception))` this ends up being something like
`<class 'ValueError'>` for a `ValueError` exception. Get the vanilla
name of the exception type via `type(exception).__name__`.
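For reference, the difference in plain Python:

```python
exception = ValueError("boom")

str(type(exception))      # "<class 'ValueError'>"
type(exception).__name__  # "ValueError"
```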
Add a test to ensure that we encode this properly.
Rename the `API.exception` member to `API.error`, to make it more
generic, so it can also be used for other sorts of errors in the
future. Also add a layer of additional structure with `type` and
`data` members so that different types of errors can be told apart.
Currently only
`exception` is used.
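A sketch of the resulting structure (the field names inside `data`
are assumptions):

```python
# hypothetical shape of API.error after an exception was reported
error = {
    "type": "exception",       # discriminates the kind of error
    "data": {                  # payload specific to that kind
        "type": "ValueError",  # type(exception).__name__
        "value": "boom",       # str(exception)
        "traceback": "...",    # serialized via traceback.print_tb()
    },
}
```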
Adapt the tests in test/mod/test_api.py to check for the new
structure and its content.
The `api.exception` method, which is also used internally by the
`api.exception_handler` helper, calls `sys.exit`, and thus no
statement after a call to either method should be reached.
Add an assertion to make sure of that.
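The idea in miniature, with a simplified stand-in for the real
method:

```python
import sys

def exception(e):
    print(repr(e), file=sys.stderr)  # simplified reporting
    sys.exit(1)

exception(ValueError("boom"))
assert False, "exception() must not return"  # unreachable by design
```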
The osbuild-composer-tests package recently started to list its actual
dependencies, which include packages from EPEL. Enable EPEL in
deploy.sh.
Based on this patch by Ondřej Budai <obudai@redhat.com>:
https://github.com/osbuild/osbuild-composer/pull/1022
Create a new API endpoint called `exception` that communicates
exception backtraces separately back to osbuild, as opposed to
dumping them into the normal log. Additionally, add a corresponding
test to check that a call to api.exception correctly sets
API.exception.
Access `api.metadata` only after `api` has exited the context and
thus the event loop has stopped and all incoming messages, like
the one setting the metadata, have been processed.
See commit 803433fb62 for a lecture
about the internals and all the details involved.
Now that the `org.osbuild.linux` runner does not use `api.setup_stdio`
anymore, the output of the binary run from the BuildRoot must end up
in `BuildRoot.output`. Check for that.
Now that the BuildRoot is capable of capturing the output of the
runner and modules (stages, assemblers), there is no need for
using `api.setup_stdio`. Therefore, drop it from all runners and
replace `api.output` with `BuildRoot.output`, which will contain
the output if `api.setup_stdio` is not called from the runners.
Create a new CompletedBuild object that wraps and is very similar
to the subprocess.CompletedProcess, i.e. it has a process member
but also has shortcuts for returncode. Additionally, the output
of the process is not only forwarded to the monitor, but also
captured and then handed to CompletedBuild, so its output member
will actually contain the full build output. To be compatible
with the previously returned CompletedProcess, `stdout` and `stderr`
members exist on CompletedBuild that also return `output`.
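A minimal sketch of the described object, using the member names
given above (the real class may differ in detail):

```python
import subprocess

class CompletedBuild:
    def __init__(self, process: subprocess.CompletedProcess, output: str):
        self.process = process  # the underlying CompletedProcess
        self.output = output    # full captured build output

    @property
    def returncode(self) -> int:
        return self.process.returncode

    # compatibility shims for code expecting CompletedProcess
    @property
    def stdout(self) -> str:
        return self.output

    @property
    def stderr(self) -> str:
        return self.output
```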
Instead of reading the arguments from sys.stdin, which requires
that stdin is set up properly for that in the runner, use the new
api.arguments() method to directly fetch the arguments.
Also fix missing newlines between imports and methods to be more
PEP-8 compliant, where needed.
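In a runner or module this roughly looks like the following sketch
(assuming `arguments()` returns the decoded input as a dictionary):

```python
from osbuild import api

args = api.arguments()             # fetched via the API, not sys.stdin
options = args.get("options", {})  # e.g. the stage options
```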
In case bubblewrap fails, e.g. because it cannot execute the
runner, it will print an error message to stderr. Currently,
this output is not captured and thus not logged. To fix that, the
`BuildRoot.run` method now takes a monitor object and will stream
stdout/stderr to the log via the monitor.
Instead of reading the arguments from sys.stdin, which requires
that stdin is set up properly for that in the runner, use the new
api.arguments() method to directly fetch the arguments.
Also fix missing newlines between imports and methods to be more
PEP-8 compliant, where needed.
Instead of reading the arguments from sys.stdin, which requires
that stdin is set up properly for that in the runner, use the new
api.arguments() method to directly fetch the arguments.
Also fix missing newlines between imports and methods to be more
PEP-8 compliant, where needed.
Simple check for the new server-side method, `get-arguments`, and
its client-side counterpart, `api.arguments`, verifying that using
the latter we get back the input (arguments) supplied to the API.
Add a new `get-arguments` API call to fetch the input/arguments.
To avoid running into any limits on the maximum packet size on
the socket, the actual data is written to a temp file and a file
descriptor to it is passed to the client - very much as in
`setup_stdio`. Additionally, a new `arguments` method is provided
as a client
counterpart for the new API call.
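The server-side idea, sketched (helper name and details are
assumptions; in practice the returned fd would be sent to the client
over the UNIX socket, e.g. via SCM_RIGHTS):

```python
import json
import os
import tempfile

def arguments_fd(arguments: dict) -> int:
    """Write the arguments to a temp file and return a fd to it."""
    with tempfile.TemporaryFile() as f:
        f.write(json.dumps(arguments).encode("utf-8"))
        f.seek(0)
        return os.dup(f.fileno())  # duplicate so the fd outlives the close
```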
Change the API endpoint to prevent retrieving monitor-output from a
running instance. Instead, we require the caller to exit the API context
before querying the monitor-output. This guarantees that the api-thread
was synchronously taken down and has dispatched any outstanding events.
This fixes an issue where a side-channel notifies us of a buildroot
exit, but the api-thread has not yet returned from epoll, and thus might
not have dispatched pending I/O events yet. If we instead wait for the
thread to exit, we have a synchronous shutdown and know that all
*ordered* kernel events must have been handled.
In particular, imagine a build-root program running (like `echo` in the
test_monitor unittest) which writes data to the stdout-pipe and then
immediately exits. The syscall-order guarantees that the data is written
to the pipe before the SIGCHLD is sent (or wait(2) returns). However, we
retrieve the SIGCHLD from our main-thread usually (p.join() in our test,
and BuildRoot() in our main code), while the pipe-reading is done from
an API thread. Therefore, we might end up handling the SIGCHLD first
(just imagine a single-threaded CPU that schedules the main task before
the thread). To avoid this race, we can simply synchronize with the
api-thread. Since we already have this synchronization as part of the
api-thread takedown, it is as simple as stopping the api-thread before
continuing with operations.
Lastly, if a write operation to a pipe was issued before the process
exited, we are guaranteed that it is ordered correctly with respect
to the SIGCHLD synchronization across processes.
Furthermore, the python event-loop also guarantees that stopping an
event-loop will necessarily dispatch all outstanding events. A read is
guaranteed to be outstanding in our race-scenario, so the read will be
dispatched. The only possible problem is `_output_ready()` only
dispatching a maximum of 4096 bytes. This might need to be fixed
separately. A comment is left in place.
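The race and its resolution, reduced to a plain-Python analogy (not
osbuild code): the child's pipe write is ordered before its exit, yet
the parent may observe the exit first; joining the reader thread is
the synchronization point that makes the captured output complete.

```python
import os
import subprocess
import threading

r, w = os.pipe()
p = subprocess.Popen(["/bin/echo", "hello"], stdout=w)
os.close(w)  # child keeps its own copy of the write end

buf = bytearray()

def reader():
    # read until EOF, i.e. until all write ends are closed
    while chunk := os.read(r, 4096):
        buf.extend(chunk)

t = threading.Thread(target=reader)
t.start()

p.wait()   # the exit may be observed before the reader ever ran
t.join()   # joining the reader thread is the synchronization point
os.close(r)
assert b"hello" in buf
```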
If the stage test folder contains a `metadata.json` file, that file
contains a dictionary where the keys are stage ids and the values
are dictionaries containing the metadata to verify. For each of
those, the stage will be looked up in the pipeline result of 'b'
and it will be verified that the metadata matches.
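A hypothetical example of such a file, shown as the parsed
dictionary (stage id and metadata keys are purely illustrative):

```python
expected = {
    "some-stage-id": {
        "packages": [
            {"name": "bash", "sigmd5": "abc123"},
        ],
    },
}
```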
Generate and report metadata about all the packages that were
installed. This information will be needed by composer, especially
the 'sigmd5' bit, for integration with koji[1].
[1] https://docs.pagure.org/koji/content_generator_metadata/
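One way such data can be gathered, sketched via rpm's query format
(the exact invocation is an assumption, not the stage's actual code):

```python
import subprocess

QUERY_FORMAT = "%{NAME}|%{EPOCH}|%{VERSION}|%{RELEASE}|%{ARCH}|%{SIGMD5}\n"

def installed_packages(root: str) -> list:
    out = subprocess.check_output(
        ["rpm", "--root", root, "-qa", "--qf", QUERY_FORMAT],
        encoding="utf-8")
    packages = []
    for line in out.splitlines():
        name, epoch, version, release, arch, sigmd5 = line.split("|")
        packages.append({
            "name": name, "epoch": epoch, "version": version,
            "release": release, "arch": arch, "sigmd5": sigmd5,
        })
    return packages
```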
Add support for setting metadata via `osbuild.API`. It is meant
to be used by modules (stages, assemblers) to pass additional data
that belongs to the result back to osbuild. For this, a new api
method `set-metadata` can be used to set and update a metadata
dictionary on the `osbuild.API` class. A client-side method
`metadata` is provided to do so.
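From a module's point of view this roughly looks like the following
sketch (the exact payload is up to the caller):

```python
from osbuild import api

# report data that belongs to the stage's result back to osbuild;
# it ends up in the metadata dictionary on osbuild.API
api.metadata({"packages": [{"name": "bash", "sigmd5": "abc123"}]})
```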
This is a crucial pre-condition for the org.osbuild.selinux stage
to work properly, especially so that it can set labels that are not
present in the policy on the host. If /sys/fs/selinux is writable,
setfiles will try to verify the labels via /sys/fs/selinux/context
and fail for unknown labels.
Add a new helper script to check if a mount / file-system was
mounted with specific flags. Currently only "ro", "nosuid",
"nodev" and "noexec" flags are supported. This script is in
test/data since it will be used from other tests and is itself
not a test per se.
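A sketch of what such a helper could look like, using statvfs(3)
(the flag mapping and command line are assumptions about the script,
not its actual contents):

```python
#!/usr/bin/python3
"""Check that a mount point was mounted with the given flags."""

import os
import sys

FLAGS = {
    "ro": os.ST_RDONLY,
    "nosuid": os.ST_NOSUID,
    "nodev": os.ST_NODEV,
    "noexec": os.ST_NOEXEC,
}

def main():
    path, *flags = sys.argv[1:]
    st = os.statvfs(path)
    missing = [f for f in flags if not st.f_flag & FLAGS[f]]
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(main())
```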