Every pipeline that gets added to the `Manifest` now needs to have
a unique name by which it can be identified. The version 1 format
loader is changed so that the main pipeline that builds the tree
is always called `tree`. The build pipeline for it will be called
`build` and further recursive build pipelines `build-build`, where
the number of repetitions of `build` corresponds to their level of
nesting. An assembler, if it exists, will be added as `assembler`.
The `Manifest.__getitem__` helper is changed so that it first tries
to access the pipeline via its name and then falls back to an
id-based search. NB: in the degenerate case of multiple pipelines
that have exactly the same `id`, i.e. the same stages with the same
options and the same build pipeline, only the first one will be
returned; but only the first one will be built as well, so this is
in practice not a problem.
The formatter uses this helper to get the tree pipeline via its
name wherever it is needed.
This also adds an `__iter__` method to `Manifest` to ease iterating
over just the pipeline values, a la `for pipeline in manifest`.
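A minimal sketch of the resulting behavior (attribute names like
`self.pipelines` are assumptions for illustration, not the actual
implementation):

```python
import collections

class Manifest:
    def __init__(self):
        self.pipelines = collections.OrderedDict()  # name -> Pipeline

    def __getitem__(self, name_or_id):
        # first try the name, then fall back to an id based search
        pipeline = self.pipelines.get(name_or_id)
        if pipeline:
            return pipeline
        for pipeline in self.pipelines.values():
            if pipeline.id == name_or_id:
                return pipeline  # first match wins for duplicate ids
        raise KeyError(name_or_id)

    def __iter__(self):
        # iterate over just the pipeline values
        return iter(self.pipelines.values())
```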
Add a new `export` property to the `Pipeline` object that indicates
whether the result, i.e. the tree after the pipeline has been
built, should be exported, i.e. copied to the output directory.
In the current format (v1), the main pipeline gets marked as such
by the corresponding loader.
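Sketch of the idea (names as used elsewhere in this series; the
loader code itself may look different):

```python
class Pipeline:
    def __init__(self, name):
        self.name = name
        self.export = False  # copy the built tree to the output directory?

# in the v1 loader: the main pipeline is the one to export
def mark_export(manifest):
    manifest["tree"].export = True
```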
Instead of passing all pre-created pipelines to the Manifest
constructor, add an `add_pipeline` method, analogous to the
existing `Pipeline.add_{stage, assembler}` methods. Convert
the format loading code to use that and remove the constructor
parameter.
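Hypothetical usage, analogous to `add_stage` (the constructor
arguments shown here are illustrative):

```python
manifest = Manifest()

build = Pipeline(runner="org.osbuild.fedora32")
manifest.add_pipeline(build)

tree = Pipeline(runner="org.osbuild.fedora32", build=build.id)
manifest.add_pipeline(tree)
```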
Add support for developing osbuild via Visual Studio Code
Remote Containers[1]. VS Code, if the extension is installed,
will automatically build the specified container via the
supplied Dockerfile, start a VS Code server instance in
it and connect to it.
The container has the dependencies needed to develop osbuild,
which includes running the tests and building images. For this the
container is started with `--privileged` since osbuild needs
various capabilities.
NB: not all extensions that are installed on the host are
automatically installed; instead, a handful of basic ones needed
for Python development are pre-installed.
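A `.devcontainer/devcontainer.json` along these lines expresses
such a setup (shown for illustration; see the file in the
repository for the actual configuration):

```json
{
    "name": "osbuild",
    "build": { "dockerfile": "Dockerfile" },
    "runArgs": ["--privileged"],
    "extensions": ["ms-python.python"]
}
```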
[1] https://code.visualstudio.com/docs/remote/containers
The current description test used a hand-crafted Manifest. Since
the internal `Manifest` representation and the format (version 1)
representation are diverging more and more, it might soon not be
possible to go from `Manifest` to the format-specific description
unless the manifest was previously loaded, so that an internal
mapping table can aid the serialization. Thus the description
test is re-worked to load a manifest, then describe it, and
verify that the result is the same.
Also rename the test from `test_pipeline` to `test_describe`
as this is a better description of the test.
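A sketch of the reworked round-trip (paths and helper names here
are hypothetical):

```python
import json
from osbuild.formats import v1 as fmt

def test_describe(manifest_path, index):
    # load a manifest from its description, describe it again
    # and verify the round-trip is lossless
    with open(manifest_path) as f:
        description = json.load(f)
    manifest = fmt.load(description, index)
    assert fmt.describe(manifest) == description
```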
When the build fails, not all pipelines might have been built and
those pipelines will be missing from the results. Currently the
code assumes that all pipelines have a result, and it crashes when
trying to find an id for a pipeline that did not get built.
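The fix amounts to treating the result as optional, roughly (names
assumed for illustration):

```python
def collect_results(manifest, results):
    # skip pipelines that were never built, e.g. after an earlier failure
    collected = {}
    for pipeline in manifest:
        res = results.get(pipeline.id)
        if res is not None:
            collected[pipeline.id] = res
    return collected
```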
The netns() function sets up a new namespace for tests. The function is
also used to determine whether those tests can be run (using
unittest.skipUnless()). A bug in the function made the changes stick if
the function failed early. Specifically, when the "ip link" line fails,
the function exits without reverting to the old namespace.
Since the code is used in "skipUnless()", it's run during test
collection, which means that even if the relevant tests aren't selected,
they affect the environment for other tests.
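The shape of the fix, as a sketch (constants and names here are
illustrative, not the actual test code):

```python
import ctypes
import subprocess

CLONE_NEWNET = 0x40000000
libc = ctypes.CDLL(None, use_errno=True)

def can_create_netns() -> bool:
    with open("/proc/self/ns/net") as old:
        if libc.unshare(CLONE_NEWNET) != 0:
            return False  # nothing changed, nothing to revert
        try:
            subprocess.run(["ip", "link", "set", "up", "dev", "lo"],
                           check=True)
            return True
        except (OSError, subprocess.CalledProcessError):
            return False
        finally:
            # always restore the original namespace, even when
            # the "ip link" call above fails
            libc.setns(old.fileno(), CLONE_NEWNET)
```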
Now that assemblers are represented via the `Stage` class, the
Assembler class is not needed anymore. Adjust the monitor's
`assembler` method to take a `pipeline.Stage` as well.
Instead of using the `Assemblers` class to represent assemblers,
use the `Stage` class: The `Pipeline.add_assembler` method will
now instantiate a `Stage` instead of an `Assembler`. The tree
that the pipeline built is converted to an Input (while loading
the manifest description in `format/v1.py`) and all existing
assemblers are converted to use that input as the tree input.
The assembler run test is removed as the Assembler class itself
is not used (i.e. run) anymore.
Instead of creating the temporary directory for the BuildRoot and
the sources output directly at the store root, create them inside
the store's temporary directory.
Add support for inputs. Before the stage is executed, all inputs
of the stage are run. The returned path is mapped inside the
sandbox and passed, along with the returned data, as part of the
arguments to the stage via a new "inputs" dictionary. The keys
represent the input keys as given in the manifest.
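Conceptually, the host side does something like the following (all
names here are assumptions for illustration):

```python
def run_inputs(stage, buildroot):
    inputs = {}
    for key, ip in stage.inputs.items():
        path, data = ip.run()                 # obtain the content
        target = f"/run/osbuild/inputs/{key}"
        buildroot.bind_mount(path, target)    # map it into the sandbox
        inputs[key] = {"path": target, "data": data}
    return inputs  # passed to the stage as the "inputs" argument
```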
This represents the tree-type input, which is the simplest form
of input a stage can consume. For the stage it is just a file
system tree at a specific location, and the contents themselves
are mostly opaque. An obvious thing to do with such content is to
copy it verbatim to a different location.
Currently this input only supports pipelines as origins, which
are identified via their id.
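In a manifest, such an input could look roughly like this (the
schema and the stage name are purely illustrative):

```json
{
    "name": "org.osbuild.copy",
    "inputs": {
        "tree": {
            "type": "org.osbuild.tree",
            "origin": "org.osbuild.pipeline",
            "references": ["sha256:..."]
        }
    }
}
```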
Inputs are modules like Stages, Assemblers and Sources. Add them
as a new module klass to the various functions. Include them in
the schema test, so the schema of all inputs is validated.
Also sort the module classes alphabetically in the class mapping
and class list.
A pipeline input provides data in various forms to a `Stage`, like
files, OSTree commits or trees. The content can either be obtained
via a `Source` or have been built by a `Pipeline`. Thus an `Input`
is the bridge between various types of content that originate from
different types of sources.
The acceptable origin of the data is determined by the `Input`
itself. What types of input are allowed and required is determined
by the `Stage`.
To osbuild itself this is all transparent. The only data visible to
osbuild is the path. The input options are just passed to the
`Input` as is and the result is forwarded to the `Stage`.
The StoreServer and corresponding Client provide access to a small
subset of the store's methods to processes other than the main
osbuild one. Currently they can be used to read trees of objects
given their id and to create temporary directories within the
store's tmp path. The lifetime of the results of both operations
is bound to the Server.
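Usage could look roughly like this (method and attribute names are
assumptions for illustration):

```python
with StoreServer(store) as server:
    client = StoreClient(server.socket_address)
    tree = client.read_tree(object_id)     # path to the object's tree
    tmp = client.mkdtemp(prefix="stage-")  # inside the store's tmp path
    # both paths are only valid for the lifetime of the server
```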
Now that the `Stage` contains the `ModuleInfo`, which contains the
path to the executable, this can directly be used to execute the
stage. To do so, the path to the executable is bind-mounted to a
well known path inside the sandbox (`/run/osbuild/bin/$id`) and
this is then supplied to the build root as executable to run.
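As a sketch, using bubblewrap-style bind options for illustration:

```python
def stage_command(stage):
    # mount the module executable read-only at a well known path
    target = f"/run/osbuild/bin/{stage.id}"
    mounts = ["--ro-bind", stage.info.path, target]
    return mounts, target  # `target` is what the build root executes
```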
Add a new `info` property that holds the `meta.ModuleInfo` info
for the stage. This gives each instance of a stage access to
meta (or class) information about it, i.e. its schema and docs
but, more importantly, also its name and the path to the
executable. Therefore the `name` property is converted into a
transient property which accesses the `name` member of `info`.
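i.e. roughly:

```python
@property
def name(self):
    # transient: derived from the module info instead of stored
    return self.info.name
```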
Change the `formats/v1` load mechanism to carry a new `index`
argument which is used to load the `ModuleInfo` for each stage.
Adapt all tests to load the info as well when creating stages.
Add the path to the executable for that module to the ModuleInfo.
This can then later be used to actually execute said module. The
information is already readily available since we used the path
to load the information from the file in the first place.
Instead of carrying around the `sources_options` parameter
through the recursive `load` and `load_build` calls, set
the sources options after loading has completed by iterating
through all stages of all pipelines.
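Roughly (attribute names assumed for illustration):

```python
def set_sources_options(manifest, sources_options):
    # applied once, after all pipelines have been loaded
    for pipeline in manifest.pipelines:
        for stage in pipeline.stages:
            stage.sources = sources_options
```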
All tests and invocations of `add_stage` actually pass a valid
options dictionary. Therefore move the `options` argument before
the `sources` argument and remove the default value (`None`).
The `--inspect` option is documented in the osbuild usage, but not in
the man page. This makes it harder for new users to find it and use it
for figuring out stage IDs necessary for checkpoints.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the python3-iniparse package to f32-build.mpp.json, which is a dependency
of the new 'org.osbuild.rhsm' stage. This is needed in order to be able to
test the 'org.osbuild.rhsm' stage.
Regenerate affected manifests in test/data/manifest using 'make --always-make
test-data'.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add a new org.osbuild.rhsm stage to configure RHSM DNF plugins.
The stage currently supports only enabling / disabling the DNF
plugins. The stage's configuration schema allows extending it in the
future to configure other aspects of RHSM if needed.
The schema specifies each DNF plugin as an explicit object. The reason
is that although currently only one common option (enabled) can be
set, the 'subscription-manager' plugin's configuration actually
allows one additional plugin-specific option. The stage may support
setting it in the future, which will be easier with distinct objects for
each plugin.
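For illustration, the options could then look like this in a
manifest (the stage schema itself is authoritative):

```json
{
    "name": "org.osbuild.rhsm",
    "options": {
        "dnf-plugins": {
            "product-id": { "enabled": false },
            "subscription-manager": { "enabled": false }
        }
    }
}
```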
Signed-off-by: Tomas Hozza <thozza@redhat.com>
`make test-data` always regenerates test data, without the need to pass
the `--always-make` option to make.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Mark '$(TEST_MANIFESTS_GEN)' target as .PHONY.
Currently the `test-data` make target does not always trigger
regenerating of manifests used for testing various osbuild parts,
although it is marked as .PHONY. The reason is that its dependency
'$(TEST_MANIFESTS_GEN)' is not marked as .PHONY and therefore it is
not run if the files already exist in the repository.
Due to the above reason, CI is actually running `make --always-make
test-data` to always regenerate manifests.
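The fix thus amounts to something like:

```make
.PHONY: test-data $(TEST_MANIFESTS_GEN)
```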
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Instead of having build pipelines nested within the pipeline they
are the build pipeline for, the nested structure is transformed
into a flat list of pipelines. As a result the recursion is gone
and all the pipelines and trees are built one after the other.
This is now possible since floating objects are kept alive by the
store itself and all trees that are being built are transparently
accessible via them.
The immediate result dictionary changed accordingly. To keep the
JSON output of osbuild the same, the result is now routed through
a format specific converter.
Additionally, the v1 format module gained a function to retrieve
the global tree_id and output_id. With the new models those global
ids will go away eventually and thus need to go through the format
specific code.
This is a step towards generic pipelines, i.e. replacing assemblers
with pipelines, thus creating an acyclic graph of pipelines. There
the pipeline id will be what is now the tree_id. For now though the
generic id is either the output_id or the tree_id.
The objectstore always tracked all objects that were returned from
it, but it did so via weak references, which means it did not keep
the objects alive itself. With the introduction of identifiers for
temporary objects (floating objects), it makes sense to keep all
created objects alive so that they can in fact be used.
A "floating" object is a temporary object that is identified, i.e.
has an `id` and is thus also locked, but is not committed to the
store.
The `contains` and `get` methods of ObjectStore will now return
such floating objects as if they were committed ones, providing
transparent access to objects that have been built during the
execution of osbuild.
The current pipeline code used to set a base for a tree object
that might or might not exist. Depending on whether it existed, it
would either use that object or reset its base. Avoid doing that,
because it prohibits us from properly interpreting the `id` of an
object: the `id` is also set when `base_id` is assigned, but that
base might not exist, and thus the `id` would not actually
describe the contents of the tree associated with the object.
Therefore we use `ObjectStore.get` and return the result if it
is not None, or a fresh Object otherwise.
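i.e. roughly:

```python
def get_tree_object(object_store, tree_id):
    # a fresh object only if nothing, committed or floating, matches
    obj = object_store.get(tree_id)
    if obj is None:
        obj = object_store.new()
    return obj
```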
Every time a stage has been successfully built, the contents of
the tree correspond to that stage and can thus be identified via
the id of the stage.
When the tree is being written to, i.e. on consecutive attempts
of stage builds, the `id` of the tree object will automatically
be reset.
This adds a new `id` property to ObjectStore.Object, which is
meant to reflect the identifier of the Stage that built its
contents. This will help to transparently access objects that have
been built but not committed to the store.
Setting the `base_id` of an object will also set its `id`. When
the object is then modified via write(), the `id` will be set to
None, since the content and the id are now out of sync. In the
same way, resetting an object will reset its `id` to None.
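A sketch of the resulting semantics (illustrative, not the actual
class):

```python
class Object:
    def __init__(self):
        self._id = None
        self._base_id = None

    @property
    def base_id(self):
        return self._base_id

    @base_id.setter
    def base_id(self, value):
        self._base_id = value
        self._id = value   # contents now match the base

    def write(self):
        self._id = None    # content and id are out of sync
        # ... hand out a writable path

    def reset(self):
        self._id = None
```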
The object in question will be cleaned when the store goes out of
context, which happens soon after the manual cleanup anyway and
the eager cleanup does not gain us much.
More importantly, it removes the special case for the assembler
output object, since trees built by the stages are already not
cleaned up manually.
Instead of passing the store directory to Pipeline.run, pass an
already initialized ObjectStore object. This binds the lifetime
of the store and its (temporary) objects to the run of osbuild
not the run of the pipeline.
This prepares re-using the stores with multiple (non-nested)
pipelines.
Instead of a pipeline, describe now takes a Manifest instance.
The reason is that a manifest fully describes the build, which
includes the sources. Now that the describe function takes the
manifest, the sources can be included as well.
Adapt the tests to reflect that change.
The 'Manifest' class represents what to build and the necessary
sources to do so. For now it is thus just a combination of the
pipeline and the source options.
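As a sketch (the constructor shape is illustrative):

```python
class Manifest:
    """What to build and the sources necessary to do so"""

    def __init__(self, pipeline, sources):
        self.pipeline = pipeline
        self.sources = sources
```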