Add the ability to opt out of preserving ACLs, SELinux
contexts and extended attributes. It is opt-out instead of
opt-in since the assembler by default tries to preserve as
much as possible.
In the test case for the tar assembler, actually verify the
content by un-tar-ing the result again and comparing it to
the tree. This would have spotted missing SELinux labels.
All of these options, i.e. SELinux labels, ACLs and extended
attributes (xattrs), are opt-in and thus were previously ignored.
This led to trees that had their SELinux labels missing and
were thus incorrect.
Detect the case where one file has an SELinux label but the other
file does not have any label at all. This was previously impossible
to detect because both calls to get the labels were wrapped in one
try-except block, and any failure in either of the two calls, like
a missing label, would lead to an early return from the function
with a success value.
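A minimal sketch of the fixed comparison, assuming the labels are
read via the security.selinux extended attribute (the helper name
is illustrative):

    import os

    def selinux_labels_equal(path_a, path_b):
        """Compare SELinux labels, treating "no label" as a distinct value."""
        def get_label(path):
            # each call gets its own try-except, so a missing label on
            # one file no longer aborts the whole comparison
            try:
                return os.getxattr(path, "security.selinux",
                                   follow_symlinks=False)
            except OSError:
                return None  # file has no label at all

        return get_label(path_a) == get_label(path_b)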
This source was declared obsolete some time ago and is not
supported anymore. We won't support it in the upcoming new manifest
format, therefore drop it now.
Include all inputs of a stage in the calculation of its id,
since they determine, much like the options, the content the
stage produces; thus different inputs should lead to different
ids.
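A sketch of the idea, assuming ids are derived by hashing a
canonical JSON description (the field names are illustrative):

    import hashlib
    import json

    def calc_id(stage):
        # inputs are hashed alongside the options, so two stages that
        # differ only in their inputs get different ids
        desc = {
            "name": stage.name,
            "base": stage.base,    # id of the preceding stage, if any
            "options": stage.options,
            "inputs": {key: inp.id for key, inp in stage.inputs.items()},
        }
        blob = json.dumps(desc, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()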
Add an `id` property that, like `Stage.id`, can be used to uniquely
identify an input based on its name and options. Two inputs with
the same name and options will have the same `id`.
Now that `Pipeline`s no longer have assemblers and thus only one
identifier, i.e. the one corresponding to the tree (`tree_id`),
`id` and `tree_id` are now the same. Therefore replace the
usage of `tree_id` with `id` and drop the former. Add some extra
documentation, including some caveats about the uniqueness of `id`.
Convert the assembler phase of the main pipeline in the old format
into a new `Pipeline` that has the assembler as a stage, where the
input of that stage is the main pipeline. This removes the need for
"assemblers" as a special concept, and thus the corresponding
code in `Pipeline` is removed. The new assembler pipeline is marked
as exported, but the pipeline that builds the tree is not anymore.
Adapt the `describe` and `output` functions of the `v1` format to
handle the assembler pipeline. Also change the tests accordingly.
NB: The id reported for the assembler via `--inspect` and in the
result will change as a consequence, since the assembler stage is
now the first and only stage of a new pipeline and thus has no base
anymore.
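Roughly, the v1 loader now does something like this (the method
names and the input type are assumptions for illustration):

    # the main pipeline only builds the tree and is no longer exported
    tree = manifest.add_pipeline("tree", build, runner)

    # the assembler becomes the only stage of a new, exported pipeline,
    # consuming the tree pipeline as its input
    assembler = manifest.add_pipeline("assembler", build, runner)
    stage = assembler.add_stage(info, options)
    stage.add_input("tree", "org.osbuild.tree",
                    "org.osbuild.pipeline", {"id": tree.id})
    assembler.export = True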
Since `__iter__` returns an iterator over the `Pipeline` objects,
the `"name" in manifest` check would not work for names or ids. Thus
provide an implementation of `__contains__` that does exactly that.
Add a new helper, `Manifest.get`, that will return a
pipeline given a name or an id, or `None` if no pipeline could
be found with either. The implementation is taken from the
existing `__getitem__` method, and the latter is now based on
the new `get` method.
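A sketch of the resulting lookup helpers, assuming the pipelines are
kept in a mapping keyed by name (names here are illustrative):

    class Manifest:
        def __init__(self):
            self.pipelines = {}  # name -> Pipeline, in insertion order

        def get(self, name_or_id):
            # try the name first, then fall back to an id based search
            pipeline = self.pipelines.get(name_or_id)
            if pipeline:
                return pipeline
            for pipeline in self.pipelines.values():
                if pipeline.id == name_or_id:
                    return pipeline
            return None

        def __contains__(self, name_or_id):
            return self.get(name_or_id) is not None

        def __getitem__(self, name_or_id):
            pipeline = self.get(name_or_id)
            if pipeline:
                return pipeline
            raise KeyError(name_or_id)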
Every pipeline that gets added to the `Manifest` now needs to have
a unique name by which it can be identified. The version 1 format
loader is changed so that the main pipeline that builds the tree
is always called `tree`. The build pipeline for it will be called
`build`, and further recursive build pipelines `build-build`, where
the number of repetitions of `build` corresponds to their level of
nesting. An assembler, if it exists, will be added as `assembler`.
The `Manifest.__getitem__` helper is changed so it will first try
to access the pipeline via its name and then fall back to an id
based search. NB: in the degenerate case of multiple pipelines that
have exactly the same `id`, i.e. the same stages with the same
options and the same build pipeline, only the first one will be
returned; but only the first one will be built as well, so in
practice this is not a problem.
The formatter uses this helper to get the tree pipeline via its
name wherever it is needed.
This also adds an `__iter__` method to `Manifest` to ease iterating
over just the pipeline values, a la `for pipeline in manifest`.
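Taken together, hypothetical usage after loading a version 1
description:

    assert "tree" in manifest          # __contains__, by name
    tree = manifest["tree"]            # name first, id as fallback
    build = manifest.get("build")      # None if there is no build pipeline
    for pipeline in manifest:          # __iter__ over the Pipeline values
        print(pipeline.name, pipeline.id)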
Add a new `export` property to the `Pipeline` object that indicates
whether the result, i.e. the tree after the pipeline has been
built, should be exported, i.e. copied to the output directory.
In the current format (v1), the main pipeline gets marked as such
by the corresponding loader.
Instead of passing all pre-created pipelines to the `Manifest`
constructor, add an `add_pipeline` method, analogous to the
existing `Pipeline.add_{stage, assembler}` methods. Convert
the format loading code to use that and remove the constructor
parameter.
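A sketch of the new method; the exact signature is an assumption:

    class Manifest:
        def __init__(self):
            self.pipelines = {}

        def add_pipeline(self, name, build=None, runner=None):
            # construct, register under its unique name, and return,
            # mirroring the style of Pipeline.add_stage
            pipeline = Pipeline(name, build=build, runner=runner)
            self.pipelines[name] = pipeline
            return pipeline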
Add support for developing osbuild via Visual Studio Code
Remote Containers[1]. VS Code, if the extension is installed,
will automatically build the specified container via the
supplied Dockerfile, start a VS Code server instance in
it, and connect to it.
The container has the dependencies needed to develop osbuild,
which includes running the tests and building images. For this
the container is started with `--privileged`, since osbuild needs
various capabilities.
NB: not all extensions that are installed on the host are
automatically installed; instead, a handful of basic ones that
are needed for Python development are pre-installed.
[1] https://code.visualstudio.com/docs/remote/containers
The current description test took a hand-crafted `Manifest`. Since
the internal `Manifest` representation and the format (version 1)
representation are becoming more and more different, it might
soon not be possible to go from `Manifest` to the format specific
description unless it was previously loaded, so that an internal
mapping table can aid the serialization. Thus the description
test is re-worked to load a manifest, describe it again, and
verify that the result is the same as the original description.
Also rename the test from `test_pipeline` to `test_describe`
as this is a better description of the test.
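A sketch of the reworked round trip; the helper name and path are
hypothetical:

    import json
    import osbuild.formats.v1 as fmt

    def check_describe(index, path):
        # load a v1 description and verify that describing the loaded
        # manifest reproduces the original description
        with open(path) as f:
            description = json.load(f)
        manifest = fmt.load(description, index)
        assert fmt.describe(manifest) == description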
When the build fails, not all pipelines might have been built, and
those pipelines will be missing from the results. Currently the
code assumes that all pipelines have a result, and it will
crash when trying to find an id for a pipeline that did not get
built.
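A minimal sketch of the fix, assuming results are keyed by pipeline
id:

    for pipeline in manifest:
        result = results.get(pipeline.id)
        if not result:
            # the build failed before this pipeline was reached
            continue
        ...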
The netns() function sets up a new namespace for tests. The function is
also used to determine whether those tests can be run (using
unittest.skipUnless()). A bug in the function made the changes stick if
the function failed early. Specifically, when the "ip link" line fails,
the function exits without reverting to the old namespace.
Since the code is used in "skipUnless()", it's run during test
collection, which means that even if the relevant tests aren't selected,
they affect the environment for other tests.
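A sketch of the fixed flow, assuming the namespace is entered via
unshare(2) and restored via setns(2):

    import ctypes
    import os
    import subprocess

    CLONE_NEWNET = 0x40000000

    def netns():
        """Enter a fresh network namespace; revert if setup fails."""
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        old = os.open("/proc/self/ns/net", os.O_RDONLY)
        try:
            if libc.unshare(CLONE_NEWNET) != 0:
                return False
            try:
                # the "ip link" call that used to leave the new
                # namespace active when it failed
                subprocess.run(["ip", "link", "set", "up", "dev", "lo"],
                               check=True)
                return True
            except (OSError, subprocess.CalledProcessError):
                libc.setns(old, CLONE_NEWNET)  # revert to the old namespace
                return False
        finally:
            os.close(old)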
Now that assemblers are represented via the `Stage` class, the
`Assembler` class is not needed anymore. Adjust the monitor's
`assembler` method to take a `pipeline.Stage` as well.
Instead of using the `Assembler` class to represent assemblers,
use the `Stage` class: The `Pipeline.add_assembler` method will
now instantiate a `Stage` instead of an `Assembler`. The tree
that the pipeline built is converted to an `Input` (while loading
the manifest description in `format/v1.py`), and all existing
assemblers are converted to use that input as the tree input.
The assembler run test is removed as the `Assembler` class itself
is not used (i.e. run) anymore.
Instead of creating the temporary directories for the BuildRoot and
the sources output directly at the store root, create them inside
the store's temporary directory.
Add support for inputs. Before the stage is executed, all inputs of
the stage are run. The returned path is mapped inside the sandbox
and passed, along with the returned data, as part of the arguments
to the stage via a new "inputs" dictionary. The keys represent the
input keys as given in the manifest.
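Schematically, a stage might now receive arguments like this (the
exact layout is an assumption):

    args = {
        "tree": "/run/osbuild/tree",
        "options": {},       # the stage options, as before
        "inputs": {
            # one entry per input key from the manifest
            "commits": {
                "path": "/run/osbuild/inputs/commits",  # mapped into the sandbox
                "data": {},                             # data returned by the input
            },
        },
    }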
This represents the tree-type input, which is the simplest form
of input a stage can consume. For the stage it is just a file
system tree at a specific location, and the content itself is
mostly opaque. An obvious thing to do with such content is to
carbon-copy it to a different location.
Currently this input only supports pipelines as origins,
which are identified via their id.
Inputs are modules like Stages, Assemblers and Sources. Add them
as a new module klass to the various functions. Include them in
the schema test, so the schema of all inputs is validated.
Also sort the module classes alphabetically in the class mapping
and class list.
A pipeline input provides data in various forms to a `Stage`, like
files, OSTree commits or trees. The content can either be obtained
via a `Source` or have been built by a `Pipeline`. Thus an `Input`
is the bridge between various types of content that originate from
different types of sources.
The acceptable origin of the data is determined by the `Input`
itself. What types of input are allowed and required is determined
by the `Stage`.
To osbuild itself this is all transparent. The only data visible to
osbuild is the path. The input options are just passed to the
`Input` as is and the result is forwarded to the `Stage`.
The StoreServer and corresponding Client provide access to a small
subset of the store methods to processes other than the main osbuild
one. Currently it can be used to read trees of objects given their id
and to create temporary directories within the store's tmp path.
The lifetime of the results of both operations is bound to the Server.
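A hypothetical usage sketch; the method names are assumptions:

    client = StoreClient(connect_to=store_api_path)

    # read-only view of an object's tree, valid while the server lives
    tree = client.read_tree(object_id)

    # temporary directory inside the store's tmp path, same lifetime
    tmp = client.mkdtemp(prefix="source-")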
Now that the `Stage` contains the `ModuleInfo`, which contains the
path to the executable, this can directly be used to execute the
stage. To do so, the path to the executable is bind-mounted to a
well-known path inside the sandbox (`/run/osbuild/bin/$id`), which
is then supplied to the build root as the executable to run.
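In outline (the build-root API shown here is illustrative):

    # expose the stage executable at a well-known path in the sandbox
    location = f"/run/osbuild/bin/{stage.id}"
    binds = [f"{stage.info.path}:{location}"]  # read-only bind mount
    build_root.run([location], binds=binds)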
Add a new `info` property that holds the `meta.ModuleInfo`
for the stage. This gives each instance of a stage access to
meta (or class) information about it, i.e. its schema and docs but,
more importantly, also its name and the path to the executable.
Therefore the `name` property is converted into a transient property
which accesses the `name` member of `info`.
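A sketch of the change:

    class Stage:
        def __init__(self, info, options):
            self.info = info  # meta.ModuleInfo: schema, docs, name, path
            self.options = options

        @property
        def name(self):
            # derived from the module info instead of stored directly
            return self.info.name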
Change the `formats/v1` load mechanism to carry a new `index`
argument which is used to load the `ModuleInfo` for each stage.
Adapt all tests to load the info as well when creating stages.
Add the path to the executable for that module to the ModuleInfo.
This can then later be used to actually execute said module. The
information is already readily available since we used the path
to load the information from the file in the first place.
Instead of carrying around the `sources_options` parameter
through the recursive `load` and `load_build` calls, set
the sources options after loading has completed by iterating
through all stages of all pipelines.
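A sketch of the post-loading pass (the attribute name is an
assumption):

    # instead of threading sources_options through load()/load_build(),
    # apply it once after all pipelines have been loaded
    for pipeline in manifest:
        for stage in pipeline.stages:
            stage.sources = sources_options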
All tests and invocations of `add_stage` actually pass a valid
options dictionary. Therefore move the `options` argument before
the `sources` argument and remove the default value (`None`).
The `--inspect` option is documented in the osbuild usage, but not in
the man page. This makes it harder for new users to find it and use it
for figuring out stage IDs necessary for checkpoints.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the python3-iniparse package to f32-build.mpp.json, which is a dependency
of the new 'org.osbuild.rhsm' stage. This is needed in order to be able to
test the 'org.osbuild.rhsm' stage.
Regenerate affected manifests in test/data/manifest using 'make --always-make
test-data'.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add a new org.osbuild.rhsm stage to configure RHSM DNF
plugins. The stage currently supports only enabling / disabling the
DNF plugins. The stage's configuration schema allows extending it in
the future to configure other aspects of RHSM if needed.
The schema specifies each DNF plugin as an explicit object. The reason
is that although currently only setting one common option (enabled)
is allowed, the 'subscription-manager' plugin's configuration actually
allows one additional plugin-specific option. The stage may support
setting it in the future, which will be easier with distinct objects for
each plugin.
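For illustration, the stage options might look like this in a
manifest, shown as a Python dict (the plugin names are assumptions
based on the description above):

    stage = {
        "name": "org.osbuild.rhsm",
        "options": {
            "dnf-plugins": {
                # one explicit object per plugin, each with the common
                # "enabled" switch
                "product-id": {"enabled": True},
                "subscription-manager": {"enabled": False},
            }
        }
    }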
Signed-off-by: Tomas Hozza <thozza@redhat.com>