This adds an `osbuild --version` command that prints the osbuild version
currently in use. It allows users to confirm that their osbuild is recent
enough to use newer features.
Now that assemblers are represented via the `Stage` class, the
`Assembler` class is not needed anymore. Adjust the monitor's
`assembler` method to take a `pipeline.Stage` as well.
The 'Manifest' class represents what to build and the sources necessary
to do so. For now it is thus just a combination of the pipeline and the
source options.
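Conceptually, it is little more than this sketch (not the actual class
definition in osbuild):

    class Manifest:
        def __init__(self, pipeline, sources):
            self.pipeline = pipeline  # what to build
            self.sources = sources    # the source options needed to build it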
Instead of importing the `load` and `load_build` functions into the
osbuild namespace and using them from there, use them via the module
that provides them, i.e. the `formats.v1` module.
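As an illustration (the call site and the exact signature of load() are
assumptions):

    from osbuild.formats import v1

    description = {"pipeline": {}, "sources": {}}  # hypothetical manifest
    manifest = v1.load(description)                # rather than osbuild.load(...)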
Extract the code that loads a pipeline from a pipeline description,
i.e. a manifest, into a new module inside a new 'formats' package.
The idea is to have different descriptions, i.e. different formats,
for the same internal representation. This allows changing the
internal representation, i.e. the data structures, while still keeping
the same external description.
Later a new description might be added that better matches the new
internal representation.
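The resulting layout looks roughly like this (only the formats/v1 module
is taken from this change; the surrounding names are illustrative):

    osbuild/
        formats/
            __init__.py
            v1.py    # load()/load_build() for the current description format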
This extracts the CLI entrypoint into `main_cli.py` and prepares the
codebase for the introduction of additional entrypoints. This should
not contain any functional changes.
The idea behind this is to add `main_api.py` (and maybe more in the
future), which will be similar to `main_cli.py` but contain the
`osbuild-api` entrypoint. This will make all entrypoints nicely symmetric,
and the only difference will be `setup.py` selecting the right
entrypoint for each executable, as well as `__main__.py` selecting the
entrypoint for the module itself (which we will keep pointing to the
CLI for compatibility).
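A rough sketch of how the entrypoint selection in `setup.py` could look
(the module paths and function names are assumptions, not the actual
setup.py):

    from setuptools import setup, find_packages

    setup(
        name="osbuild",
        packages=find_packages(),
        entry_points={
            "console_scripts": [
                "osbuild = osbuild.main_cli:main",
                # later: "osbuild-api = osbuild.main_api:main",
            ],
        },
    )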
A pipeline run only returned logs in the `StageFailed` and
`AssemblerFailed` exceptions. Remove those and always return structured
data instead.
Data is only returned for stages that actually ran (i.e., that did not
come from the cache). This is similar to the output in interactive mode.
Also change osbuildtest to be able to deal with output that is larger
than the pipe buffer by using subprocess.communicate().
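The relevant pattern, sketched (this is not the actual osbuildtest code,
and the command line is illustrative):

    import subprocess

    proc = subprocess.Popen(["osbuild", "pipeline.json"],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            encoding="utf-8")
    # communicate() drains stdout and stderr concurrently, so output
    # larger than the pipe buffer cannot deadlock the child process
    stdout, stderr = proc.communicate()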
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
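Expressed as a Python literal, the new shape of the `build` key looks
like this (the runner name and the stage lists are placeholders):

    pipeline = {
        "build": {
            "runner": "org.osbuild.rhel82",  # a program in the runners subdirectory
            "pipeline": {"stages": []},      # the build pipeline, as before
        },
        "stages": [],
    }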
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when it is moved
to a separate file.
In __init__.py, only import what is meant to be the public API.
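For example (module, class and variable names are illustrative):

    # refer to a sibling module via its module name:
    from . import objectstore

    store = objectstore.ObjectStore(path)  # what used to be just ObjectStore(path)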
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make the order of arguments match how they are used (and also how
they map more closely, conceptually, to the pipeline JSON document).
This makes no practical difference, as both arguments were only
used for computing the hash.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Each pipeline is now self-contained without references to another.
However, as the tree after the final stage of a pipeline is saved to
the content store, we are able to reuse it if one pipeline is a prefix
of another, as described in the previous commit. This makes the
concept of a base redundant.
The ObjectStore must take a directory as its argument, never None, so
the conditional assertion for this in Pipeline.run() can safely be
removed.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Don't do this only for the base, but for any prefix of the current
pipeline.
Note that if two pipelines share a prefix, but neither is a prefix
of the other, no sharing is possible. Only a proper prefix can be
reused by another pipeline, as only the result of a pipeline's last
stage is saved to the object store (this restriction could be changed in
the future).
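A small illustration, with made-up stage sequences:

    # only the tree after the final stage of each pipeline is stored
    a = ["one", "two"]
    b = ["one", "two", "three"]
    c = ["one", "two", "four"]
    # a is a proper prefix of b and of c, so both can start from a's
    # stored tree; b and c merely share the prefix ["one", "two"] and,
    # since no intermediate tree after "two" is stored, cannot reuse
    # each other's work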
Signed-off-by: Tom Gundersen <teg@jklm.no>
Take this as an argument to __init__ in the same way that `base`
is.
This avoids us having to deal with the case of someone setting a
stage before the build, which does not work as the stage id will
be wrong.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Renaming a directory over an existing one is only an error if the
existing one is not empty, in which case ENOTEMPTY is thrown.
Tested with:
>>> import os, errno
>>> os.mkdir("foo")
>>> os.mkdir("bar")
>>> os.rename("foo", "bar")
# no error: "bar" was empty
>>> os.mkdir("foo")
>>> open("foo/a", "w").write("a")
1
>>> try: os.rename("bar", "foo")
... except OSError as e: e.errno == errno.ENOTEMPTY
...
True
The build pipeline is a sub-pipeline used to generate the build
tree to use, rather than using the current root directory. Build
pipelines can be nested arbitrarily deep, but ultimately we fall
back to the current logic when no build property is found.
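Expressed as a Python literal, the nesting looks like this (stage
contents omitted):

    pipeline = {
        "build": {           # pipeline that produces the build tree
            "build": {       # which may itself declare a build pipeline
                "stages": [],
            },
            "stages": [],
        },
        "stages": [],
    }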
Just like the tree after the last stage of a regular pipeline, each
build tree currently ends up in the object store (as the build
sub-pipeline really is just a regular pipeline in its own right).
We may want to avoid both instances of this implicit storing
semantic and rather make it something the caller opts in to.
However, for now that is left as a future optimization.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are
content-addressable filesystem trees. Currently we only ever use their
content hash internally, but the idea for this is basically Lars
Karlitski and Kay Sievers' `treesum()`. We may expose this in the
future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized from the
tree identified by that base_id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
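Roughly, the intended usage is the following (the exact signatures, as
well as the run_stages() and export() helpers, are assumptions):

    store = ObjectStore("/path/to/store")

    if not store.has_tree(tree_id):
        with store.new_tree(tree_id, base_id=base_id) as path:
            # populate `path`; it is only committed to the store under
            # tree_id if this block exits without an exception
            run_stages(path)

    with store.get_tree(tree_id) as path:
        # `path` is a read-only instance of the tree with tree_id
        export(path)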
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be manually flushed.
Update the Travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than hard-coding this to /, let the caller provide the
directory path to use.
In the past, we needed to give special treatment to /, as it had
to be bind-mounted before being used by nspawn, to work around a
check in nspawn that refuses to use the host root in the container.
We no longer pass the directory directly to nspawn, but rather
mount the subdirs we want ourselves, so that no longer applies.
The callers pass in /, so the behavior is unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We want the same functionality, but we now implement it ourselves.
In addition to bind-mounting /usr into the target container
(which is all nspawn does), we also add /bin, /sbin, /lib and
/lib64, if they exist and are not symlinks (presumably into
/usr).
This means we can work on distros that have not implemented the
usr-move, like Ubuntu Bionic (used by Travis).
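A rough sketch of that logic (not the actual implementation):

    import os
    import subprocess

    def bind_os_dirs(root, target):
        for name in ("usr", "bin", "sbin", "lib", "lib64"):
            src = os.path.join(root, name)
            dst = os.path.join(target, name)
            # /usr is always mounted; the others only if they are real
            # directories rather than symlinks (presumably into /usr)
            if name != "usr" and (not os.path.isdir(src) or os.path.islink(src)):
                continue
            os.makedirs(dst, exist_ok=True)
            subprocess.run(["mount", "--bind", src, dst], check=True)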
Signed-off-by: Tom Gundersen <teg@jklm.no>
The underlying filesystem was mounted in __init__ and unmounted in
__exit__/__del__. This meant that if the same object was reused in
several `with` clauses, only the first one would work as intended.
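The corrected pattern, sketched with a hypothetical class name:

    class MountedTree:
        def __init__(self, source, mountpoint):
            self.source = source
            self.mountpoint = mountpoint

        def __enter__(self):
            # mount here, not in __init__, so the same object can be
            # used in several consecutive `with` blocks
            self._mount()
            return self.mountpoint

        def __exit__(self, exc_type, exc_value, traceback):
            self._umount()

        def _mount(self):
            ...  # perform the actual mount

        def _umount(self):
            ...  # undo it again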
Signed-off-by: Tom Gundersen <teg@jklm.no>
Stop guessing whether we're in the source directory by checking if a
`stages` subdirectory exists. Instead, assume that osbuild is installed on the
host.
If `--libdir` is given, mount the libdir into `/run/osbuild/lib` (alas,
we can't overwrite `/usr/libexec/osbuild`) and run osbuild from there.
Thus, running from source must now be done like this:
# python3 -m osbuild --libdir . [other args]