Since we support Python 3.6 we cannot assume that dicts are ordered
in any way. To ensure the `id` is still always valid we pass
`sort_keys=True` to `json.dump()`.
Thanks to Simon!
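For illustration, the effect is that a content-derived id stays
stable (a minimal sketch; the function name and hash choice are
illustrative, not the actual osbuild code):

```python
import hashlib
import json

def content_id(description: dict) -> str:
    # sort_keys=True makes the serialization deterministic even on
    # Python 3.6, so the derived id is stable across runs.
    data = json.dumps(description, sort_keys=True).encode("utf-8")
    return hashlib.sha256(data).hexdigest()
```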
Generate log messages with origin "org.osbuild.main" when
pipelines/stages start and finish. This way a higher-level
frontend can display high-level progress coming from this
origin and filter out e.g. stage-based log messages (which
are usually quite technical, as they are just the stdout/stderr
of the stages).
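A frontend could then filter the stream like this (a hypothetical
helper; that the origin is carried in the message context is an
assumption):

```python
import json

def high_level_progress(lines):
    # Keep only messages emitted by osbuild itself and drop the noisy
    # stdout/stderr forwarded from individual stages.
    for line in lines:
        entry = json.loads(line)
        if entry.get("context", {}).get("origin") == "org.osbuild.main":
            yield entry
```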
Tweak the Progress class to be simpler. Given that progress does
not need to support arbitrary depth but only a single level,
the class now just exposes "sub_progress" to the caller.
When the main progress is advanced, the sub_progress is now fully
deleted instead of just reset. The rationale is that when the main
progress finishes a step and advances, it is very likely that a
new sub_progress is required and it is most likely an error if
the same sub_progress were re-used.
This means that `reset()` can be removed as it is no longer used
(and YAGNI). We can add it back when we have a use-case.
It also changes the code so that "total" starts at 0 instead
of `None` (principle of least surprise). This means that
`progress.incr()` is now called in the `JSONSeqMonitor`
`finish()` and `result()` methods to indicate that the
pipeline/stage is finished.
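Roughly, the simplified class now behaves like this (member names
approximate):

```python
class Progress:
    def __init__(self, name: str, total: int = 0):
        self.name = name
        self.total = total        # starts at 0 now, not None
        self.done = 0
        self.sub_progress = None  # single level only, no arbitrary depth

    def incr(self):
        self.done += 1
        # Advancing the main progress deletes the sub progress instead
        # of resetting it: re-using a stale one is almost certainly a bug.
        self.sub_progress = None
```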
This commit tweaks Context a bit so that any write automatically
resets the `_id`. This ensures that we do not forget to reset `_id`
when the code changes.
It also tweaks the naming a bit: before, there was a "setter" for
origin and functions to set "pipeline" and "stage". They are all
functions now with a `set_` prefix, mostly for symmetry.
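A simplified sketch of the pattern:

```python
class Context:
    def __init__(self):
        self._origin = None
        self._pipeline = None
        self._stage = None
        self._id = None

    def _set(self, attr, value):
        setattr(self, attr, value)
        self._id = None  # every write invalidates the cached id

    def set_origin(self, origin):
        self._set("_origin", origin)

    def set_pipeline(self, pipeline):
        self._set("_pipeline", pipeline)

    def set_stage(self, stage):
        self._set("_stage", stage)
```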
The class LogLine() is purely used as a dataclass with no state,
and the only function on it is `as_dict()`. This got refactored
into a new function `log_entry()` because there is no need for
this to be a class; the function takes the same inputs.
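The shape of the new function, roughly (field names illustrative):

```python
import time

def log_entry(message=None, context=None, progress=None):
    # Same inputs the old LogLine class took; returns the dict that
    # LogLine.as_dict() used to produce.
    entry = {"timestamp": time.time()}
    if message is not None:
        entry["message"] = message
    if context is not None:
        entry["context"] = context
    if progress is not None:
        entry["progress"] = progress
    return entry
```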
Use the new `Index.detect_runner` method that gives us the best
available runner for a requested one. To do so, a new
`pipeline.Runner` class is introduced that stores the
`meta.RunnerInfo` for the specific runner and the original name
that was requested.
In the manifest loading and describing functions of the formats, use
`Index.detect_runner` to get the `RunnerInfo` for a requested runner
and then wrap it in a `pipeline.Runner` object, which is then passed
to the `Manifest.add_pipeline` method.
See also commit "meta: ability to auto-detect runner".
Adjust all tests.
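In the format code the wrapping looks roughly like this (variable
names and exact signatures are illustrative):

```python
name = description.get("runner") or host_runner_name
info = index.detect_runner(name)      # best available meta.RunnerInfo
runner = pipeline.Runner(info, name)  # remembers the requested name
manifest.add_pipeline(build_pipeline, runner, build_id)
```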
When building a version 1 manifest, the assembler would always be
exported, even when not requested via the `--export` command line
option. This was done for backwards compatibility, so as not to
break tools relying on that behavior. The problem is that support
for this uses a completely different code path and the behavior
might by now also be confusing. Thus remove the implicit export
and only ever export what was explicitly requested by the caller.
Convert the assembler phase of the main pipeline in the old format
into a new Pipeline that has the assembler as a stage, where the
input of that stage is the main pipeline. This removes the need
for "assemblers" as a special concept and thus the corresponding
code in `Pipeline` is removed. The new assembler pipeline is marked
as exported, but the pipeline that builds the tree no longer is.
Adapt the `describe` and `output` functions of the `v1` format to
handle the assembler pipeline. Also change the tests accordingly.
NB: The id reported for the assembler via `--inspect` and in the
result will change, since the assembler stage is now the first and
only stage of a new pipeline and thus has no base anymore.
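Schematically, the v1 conversion now does something like this
(names and signatures are illustrative, not the actual code):

```python
tree = manifest["tree"]                 # the main pipeline
asm = manifest.add_pipeline("assembler", runner, tree.build)
stage = asm.add_stage(info, options)
stage.inputs["tree"] = tree.id          # the built tree is the input
asm.export = True                       # only the assembler is exported
```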
Every pipeline that gets added to the `Manifest` now needs to have
a unique name by which it can be identified. The version 1 format
loader is changed so that the main pipeline that builds the tree
is always called `tree`. The build pipeline for it will be called
`build`, and further recursive build pipelines `build-build`, where
the number of repetitions of `build` corresponds to their level of
nesting. An assembler, if it exists, will be added as `assembler`.
The `Manifest.__getitem__` helper is changed so that it will first
try to access a pipeline via its name and then fall back to an
id-based search. NB: in the degenerate case of multiple pipelines
that have exactly the same `id`, i.e. the same stages with the same
options and the same build pipeline, only the first one will be
returned; but only the first one will be built as well, so in
practice this is not a problem.
The formatter uses this helper to get the tree pipeline via its
name wherever it is needed.
This also adds an `__iter__` method to `Manifest` to ease iterating
over just the pipeline values, a la `for pipeline in manifest`.
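The helper could look roughly like this (simplified):

```python
import collections

class Manifest:
    def __init__(self):
        self.pipelines = collections.OrderedDict()  # name -> Pipeline

    def __getitem__(self, name_or_id):
        # Name lookup first, then fall back to an id based search.
        pipeline = self.pipelines.get(name_or_id)
        if pipeline:
            return pipeline
        for pipeline in self.pipelines.values():
            if pipeline.id == name_or_id:
                return pipeline  # first match wins, see the NB above
        raise KeyError(name_or_id)

    def __iter__(self):
        # `for pipeline in manifest` yields the values, not the names.
        return iter(self.pipelines.values())
```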
Now that assemblers are represented via the `Stage` class, the
Assembler class is not needed anymore. Adjust the monitor's
`assembler` method to take a `pipeline.Stage` as well.
Instead of using the `Assembler` class to represent assemblers,
use the `Stage` class: the `Pipeline.add_assembler` method will
now instantiate a `Stage` instead of an `Assembler`. The tree
that the pipeline built is converted to an Input (while loading
the manifest description in `format/v1.py`) and all existing
assemblers are converted to use that input as the tree input.
The assembler run test is removed as the Assembler class itself
is not used (i.e. run) anymore.
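A sketch of the new `add_assembler` (heavily simplified; the real
`Stage` constructor takes more arguments):

```python
class Pipeline:
    def add_assembler(self, info, options):
        # An assembler is now just another Stage; there is no
        # dedicated Assembler class to instantiate (or run) anymore.
        self.assembler = Stage(info, options)
        return self.assembler
```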
Add a new `info` property that holds the `meta.ModuleInfo` for
the stage. This gives each instance of a stage access to meta
(or class) information about it, i.e. its schema and docs but,
more importantly, also its name and the path to the executable.
Therefore the `name` property is converted into a transient
property which accesses the `name` member of `info`.
Change the `formats/v1` load mechanism to carry a new `index`
argument which is used to load the `ModuleInfo` for each stage.
Adapt all tests to load the info as well when creating stages.
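Sketch of the stage side (constructor reduced for brevity):

```python
class Stage:
    def __init__(self, info, options):
        self.info = info  # meta.ModuleInfo: schema, docs, name, path
        self.options = options

    @property
    def name(self):
        # `name` is now derived from the module info instead of being
        # stored on the stage itself.
        return self.info.name
```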
All tests and invocations of `add_stage` actually pass a valid
options dictionary. Therefore move the `options` arg before
the `sources` arg and remove the default value (`None`).
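For callers this means, illustratively:

```python
# Before (roughly): add_stage(info, sources, options=None)
# After:            add_stage(info, options, sources=None)
stage = pipeline.add_stage(info, options)
```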
Instead of passing the store directory to Pipeline.run, pass an
already initialized ObjectStore object. This binds the lifetime
of the store and its (temporary) objects to the run of osbuild,
not the run of the pipeline.
This prepares re-using the stores with multiple (non-nested)
pipelines.
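The calling convention, schematically (method signatures
illustrative):

```python
store = objectstore.ObjectStore(store_dir)

# The same store instance can now back several pipeline runs; its
# temporary objects live for this osbuild invocation.
for pipeline in manifest:
    pipeline.run(store, monitor, libdir)
```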
Instead of hard-coding the use of the "org.osbuild.linux" runner,
use the new `osbuild.pipeline.detect_host_runner` function to
dynamically detect the runner for the host system. That should fix
the tests on RHEL systems, where python3 is not present by default
and, even if it is manually installed, is an indirection via
alternatives (i.e. a link to /etc/alternatives), which must be
explicitly configured in the build root container for the host.
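In the tests this becomes simply (the returned name is an example):

```python
from osbuild.pipeline import detect_host_runner

runner = detect_host_runner()  # e.g. "org.osbuild.fedora32" on Fedora 32
```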
Change the API endpoint to prevent retrieving monitor-output from a
running instance. Instead, we require the caller to exit the API
context before querying the monitor-output. This guarantees that
the api-thread was synchronously taken down and has dispatched any
outstanding events.
This fixes an issue where a side-channel notifies us of a buildroot
exit, but the api-thread has not yet returned from epoll and thus
might not have dispatched pending I/O events yet. If we instead
wait for the thread to exit, we have a synchronous shutdown and
know that all *ordered* kernel events must have been handled.
In particular, imagine a build-root program running (like `echo` in the
test_monitor unittest) which writes data to the stdout-pipe and then
immediately exits. The syscall-order guarantees that the data is written
to the pipe before the SIGCHLD is sent (or wait(2) returns).
However, we usually retrieve the SIGCHLD from our main thread
(p.join() in our test, and BuildRoot() in our main code), while the
pipe-reading is done from an API thread. Therefore, we might end up
handling the SIGCHLD first
(just imagine a single-threaded CPU that schedules the main task before
the thread). To avoid this race, we can simply synchronize with the
api-thread. Since we already have this synchronization as part of the
api-thread takedown, it is as simple as stopping the api-thread before
continuing with operations.
Lastly, if a write operation to a pipe was issued, we are guaranteed
that a SIGCHLD synchronization across processes is ordered correctly.
Furthermore, the python event-loop also guarantees that stopping an
event-loop will necessarily dispatch all outstanding events. A read is
guaranteed to be outstanding in our race-scenario, so the read will be
dispatched. The only possible problem is `_output_ready()` only
dispatching a maximum of 4096 bytes. This might need to be fixed
separately. A comment is left in place.
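The enforced pattern, schematically (attribute and helper names are
illustrative):

```python
with API(socket_address, args, monitor) as api:
    buildroot.run(argv, monitor)  # child writes to stdout, then exits
# Leaving the context synchronously joins the api-thread; stopping
# the event loop dispatches all outstanding events, so the pipe data
# that was written before the SIGCHLD has been read by now.
output = api.output  # may only be queried after the context exited
```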
Rely on the ability of `BaseAPI` to auto-generate socket addresses
when none is provided. The `BuildRoot` does not rely on the
sockets being created in the `BuildRoot.api` directory anymore and
will instead bind-mount each individual socket address to the
well-known location via the `BaseAPI.endpoint` identifier.
Convert all API providers to take the `socket_address` as an
optional keyword argument.
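A provider following the new convention, schematically:

```python
class ExampleAPI(BaseAPI):
    """Hypothetical provider: with no socket_address given, BaseAPI
    auto-generates one; BuildRoot then bind-mounts it to the well
    known path derived from `endpoint`."""

    def __init__(self, socket_address=None):
        super().__init__(socket_address)

    @property
    def endpoint(self):
        return "example"
```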
Split out the part of `api.API` that is responsible for providing
the server infrastructure for the API, i.e. setting up the server
and the corresponding context manager and asynchronous event
handling. This leaves `API` itself as just the implementation
of the high-level protocol and makes the API-server part
re-usable.
NB: pylint, for some reason, confuses `API` and `BaseAPI`, like in
`test_monitor`. Annotate that accordingly.
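After the split the structure is roughly (heavily abridged sketch):

```python
import abc

class BaseAPI(abc.ABC):
    """Server plumbing: socket setup, event loop, context manager."""

    def __init__(self, socket_address=None):
        self.socket_address = socket_address  # auto-generated if None

    @property
    @abc.abstractmethod
    def endpoint(self):
        """Name under which the API is exposed."""

    def _message(self, msg, fds, sock):
        """Called for each incoming message; overridden by subclasses."""

    def __enter__(self):
        # create the socket and spawn the event-loop thread
        return self

    def __exit__(self, *exc):
        # stop the event loop and join the thread
        pass

class API(BaseAPI):
    """Only the high-level osbuild protocol remains here."""

    @property
    def endpoint(self):
        return "osbuild"

    def _message(self, msg, fds, sock):
        pass  # handle the actual protocol messages
```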
Create a new monitor that records all the invocations of the
monitoring (virtual) functions and use that to check that, when
running (i.e. building) a pipeline, all of them are executed
the expected number of times (and with the correct arguments).
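Conceptually the recording monitor is simple (a self-contained
sketch; the real test class may differ):

```python
import collections

class TapeMonitor:
    """Records each monitor callback so the test can assert on how
    often every one was invoked and with which arguments."""

    def __init__(self):
        self.counter = collections.Counter()
        self.logs = []

    def begin(self, pipeline):
        self.counter["begin"] += 1

    def stage(self, stage):
        self.counter["stages"] += 1

    def log(self, message):
        self.counter["log"] += 1
        self.logs.append(message)

    def finish(self, result):
        self.counter["finish"] += 1
```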
Add a basic test that will set up an 'API' endpoint, then spawn a
child process that uses that 'API' endpoint to set up its stdio in
very much the same way as runners do. This is used to verify that
the API itself works properly, as well as the new LogMonitor class,
by comparing the inputs and outputs.