This might (hopefully) fix a race when destroying the asyncio.EventLoop
that's used in all API classes, which leads to warnings about unhandled
exceptions on CI.
This also puts their creation closer to where the client-side sockets
are created.
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.
In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.
Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.
The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.
Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.
This API is only available to stages right now.
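A minimal sketch of what such a source program could look like; the invocation and payload layout here are assumptions (a JSON object with `options` and `checksums` on stdin, a JSON reply on stdout), not the actual protocol:

```python
#!/usr/bin/python3
# Hypothetical skeleton of a program in the `sources` directory.
import json
import sys


def main():
    args = json.load(sys.stdin)
    options = args["options"]      # from osbuild's new command line argument
    checksums = args["checksums"]  # the content the stage asked for
    # Map each checksum to a way of fetching it, e.g. a repository
    # configuration; "mirror" is an illustrative option name.
    reply = {c: {"url": options.get("mirror")} for c in checksums}
    json.dump(reply, sys.stdout)


if __name__ == "__main__":
    main()
```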
The recent changes removed the {Assembler,Stage}Failed exceptions,
which used to be thrown from Stage.run and Assembler.run. Instead,
result dictionaries are returned even on errors. But the object
store, used as a context manager, relies on exceptions to detect
the error case and thus needs them to clean up the temporary
objects. Without those exceptions, the temporary objects end up in
the store even when the stage or assembler failed.
Restore the old behavior by throwing a generic BuildError exception
from the Stage and Assembler, which will be caught directly in the
pipeline and converted to a result dict.
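A sketch of the restored control flow, with illustrative names for the object store API:

```python
class BuildError(Exception):
    """Raised by Stage.run() and Assembler.run() on failure."""


def run(pipeline, object_store):
    try:
        # Illustrative context manager: the temporary object is only
        # committed to the store if the block exits without an exception.
        with object_store.new(pipeline.tree_id) as tree:
            for stage in pipeline.stages:
                stage.run(tree)  # raises BuildError on failure
    except BuildError:
        # The exception made the store discard the temporary object;
        # convert it into a result dict, as before.
        return {"success": False}
    return {"success": True}
```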
The sockets that the osbuild and loop APIs talk on are passed into
their `__init__` functions. The caller should be responsible for
closing those sockets.
This already happens in all current callers.
This fixes a non-fatal error on RHEL's python 3.6, because it was
calling `socket.close` on an already-closed socket:
```
Traceback (most recent call last):
  File "/usr/lib64/python3.6/asyncio/base_events.py", line 529, in __del__
    self.close()
  File "/usr/lib64/python3.6/asyncio/unix_events.py", line 63, in close
    super().close()
  File "/usr/lib64/python3.6/asyncio/selector_events.py", line 99, in close
    self._close_self_pipe()
  File "/usr/lib64/python3.6/asyncio/selector_events.py", line 109, in _close_self_pipe
    self._remove_reader(self._ssock.fileno())
  File "/usr/lib64/python3.6/asyncio/selector_events.py", line 268, in _remove_reader
    key = self._selector.get_key(fd)
  File "/usr/lib64/python3.6/selectors.py", line 189, in get_key
    return mapping[fileobj]
  File "/usr/lib64/python3.6/selectors.py", line 70, in __getitem__
    fd = self._selector._fileobj_lookup(fileobj)
  File "/usr/lib64/python3.6/selectors.py", line 224, in _fileobj_lookup
    return _fileobj_to_fd(fileobj)
  File "/usr/lib64/python3.6/selectors.py", line 41, in _fileobj_to_fd
    raise ValueError("Invalid file descriptor: {}".format(fd))
ValueError: Invalid file descriptor: -1
```
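The ownership rule in pattern form, with a hypothetical API class standing in for the osbuild and loop APIs:

```python
import socket

class API:  # hypothetical stand-in for the real API classes
    def __init__(self, sock):
        self.sock = sock  # borrowed; the caller is responsible for closing

ours, theirs = socket.socketpair()
try:
    api = API(ours)
    # ... hand `theirs` to the client side ...
finally:
    ours.close()    # closed exactly once, by the creator
    theirs.close()
```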
Commit 82a2be53d introduced a new return type from `Pipeline.run()`. It
changed the caller in `__main__.py`, but missed that the build pipeline
uses the same function.
A pipeline run only returned logs in the `StageFailed` and
`AssemblerFailed` exceptions. Remove those and always return structured
data instead.
It only returns data for stages that actually ran (i.e., didn't come
from the cache). This is similar to the output in interactive mode.
Also change osbuildtest to be able to deal with output that is larger
than the pipe buffer by using subprocess.communicate().
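The reason for the osbuildtest change: reading a pipe only after wait() deadlocks as soon as the child fills the pipe buffer, while communicate() drains both pipes while waiting. Roughly:

```python
import subprocess

proc = subprocess.Popen(["osbuild", "--json", "pipeline.json"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Not proc.wait() followed by proc.stdout.read(): the child blocks once
# its output exceeds the pipe buffer (~64 KiB on Linux) and wait() never
# returns. communicate() reads both pipes concurrently, then reaps.
stdout, stderr = proc.communicate()
```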
The workaround of manually linking /lib64 -> /usr/lib64 inside the
container that is needed on s390 is also required on ppc64, because
there the dynamic linker is set to /lib64/ld64.so.2 and the /lib64
link is not created.
Work around a combination of systemd not creating the link from
/lib64 -> /usr/lib64 (see systemd issue #14311) and the dynamic
linker being set to /lib/ld64.so.1 (-> /lib64/ld64.so.1).
Therefore, we manually create the link before calling nspawn.
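In outline, the workaround is just (the surrounding buildroot setup is elided):

```python
import os

def ensure_lib64_link(root):
    """Create the /lib64 -> /usr/lib64 link inside the build tree."""
    link = os.path.join(root, "lib64")
    if not os.path.lexists(link):
        os.symlink("usr/lib64", link)
```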
osbuild currently throws an error when no build environment is passed
on the command line, because the runner is unset. This is annoying on
hosts that only need a runner set, but no build pipeline.
To simplify running osbuild in this common case, introduce
`org.osbuild.host`, which is a runner that is defined to work on the
host that osbuild is installed on. Use this runner by default and
include a symlink to the right runner in the Fedora and RHEL packages.
Also add `runners/org.osbuild.host` to `.gitignore`, so that developers
can set the symlink when running osbuild from the source directory.
Fixes #171
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
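An illustrative pipeline fragment using the new build object (the runner name follows the naming in this patch; the noop stage stands in for real content):

```json
{
  "build": {
    "runner": "org.osbuild.rhel82",
    "pipeline": {
      "stages": [ { "name": "org.osbuild.noop" } ]
    }
  },
  "stages": [ { "name": "org.osbuild.noop" } ]
}
```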
`osbuild-run` sets up the build root so that programs can be run
correctly in it. It should be run for all programs, not just stages and
assemblers (even though they're the only consumers right now).
Also, conceptually, `osbuild-run` belongs to the build root. We'll
change its implementation based on the build root in a future commit.
The buildroot already sets up `/run/osbuild/api`. It makes sense to have
it manage libdir as well.
A nice side benefit of this is a simplification of the Stage and
Assembler classes, which grew quite complex and contained duplicate
code.
Use the new osbuild API to set up the standard input/output
inside the container, i.e., replace stdin, stdout, and stderr with
sockets provided by the host.
Introduce an osbuild API that can be used by the container to talk
to the osbuild host. It currently supports one method, 'setup-stdio',
which the container should use to set up its standard input/output
so that the stages can transparently do I/O with the osbuild host
via stdio.
The input data (args) is written to a temp-file backed buffer. The
output is either the host's stdout directly or another temp-file
backed buffer; the latter is re-opened (via /proc/self/fd) to get
another file descriptor for the container, so in theory the host
and the container could do I/O to the same buffer independently.
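A sketch of the container side, assuming the three file descriptors arrive over the API socket as SCM_RIGHTS ancillary data (the message layout is an assumption):

```python
import array
import os
import socket

def setup_stdio(sock):
    """Receive stdin/stdout/stderr fds from the host and install them."""
    fds = array.array("i")
    msg, ancdata, flags, addr = sock.recvmsg(
        1024, socket.CMSG_SPACE(3 * fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:len(data) - (len(data) % fds.itemsize)])
    for target, fd in zip((0, 1, 2), fds):
        os.dup2(fd, target)  # replace stdin, stdout, stderr in place
        os.close(fd)
```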
Expose the `flags` and `address` parameters of the underlying
sock.sendmsg method, in order to be able to explicitly specify the
recipient of the message, as needed in connectionless mode.
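For example, on a connectionless (datagram) socket the recipient is named per message (socket path and payload illustrative):

```python
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
    # No connect() in connectionless mode; the recipient goes into the
    # address parameter of sendmsg() instead.
    sock.sendmsg([b'{"method": "setup-stdio"}'], [], 0, "/run/osbuild/api")
```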
Python 3.2 renamed array.fromstring to array.frombytes, but kept
the former as a now-deprecated alias. Use the canonical form, which
indeed better describes what is going on.
If osbuild is invoked without a libdir parameter, the osbuild files
are not propagated into the buildroot container and therefore all
pipelines containing a buildroot fail.
Example:
```
$ sudo osbuild --store /var/osbuild/ qcow2-pipeline.json
...
execv(/usr/lib/osbuild/osbuild-run) failed: No such file or directory
```
Unfortunately, this is only the first error. Once you fix it, you
realize that the symlink from the "assemblers" directory is also
missing, and therefore you cannot import osbuild because it is not
available anywhere in the path. This is why I had to bind-mount the
osbuild module from the host into the build container.
If dir_fd wasn't passed, create_device() opened it on `/dev` and
forgot to close it. To fix this, it would have to gain logic to only
close the fd if it wasn't passed in.
Side-step the problem by removing dir_fd, since nothing is using it
right now. We can add it back if something needs it.
Closing the socket is the responsibility of whoever opened it.
Fix this in the only user (qemu assembler) by using socket() in a `with`
block, which closes the socket on exit.
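The pattern, sketched (the peer address is illustrative):

```python
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/run/osbuild/api")
    sock.sendall(b"...")
# The socket is closed here, on normal exit and on exceptions alike.
```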
Storytime! I tried to run multiple osbuilds at once. It failed when
unmounting the buildtree. Weird. It turned out the buildtree was not
there anymore when osbuild tried to unmount it. But who unmounted it?
We need to deep dive into mount-types.
Nowadays, the / directory is shared-mounted by systemd. See:
https://serverfault.com/questions/868682/implications-of-mount-make-private
This has interesting implications, see the following example:
we start osbuild1 with /var/tmp/os1 as its store
osbuild1 creates /var/tmp/os1/tmp
osbuild1 bind-mounts / onto /var/tmp/os1/tmp
we start osbuild2 with /var/tmp/os2 as its store
osbuild2 creates /var/tmp/os2/tmp
osbuild2 bind-mounts / onto /var/tmp/os2/tmp
Now, the shared-mounting goes into effect:
The second mount-event gets propagated into the first mount, where it
creates another mount, so we get something like this:
/var/tmp/os1/tmp/var/tmp/os2/tmp
But this is just the start! Imagine running three osbuilds at once.
The third mount event gets propagated to the 3 mounts created by the
first two osbuilds, creating 3 extra mounts: 7 in total (with n
osbuilds, 2^n - 1 mounts).
It turns out this mounting strategy creates an *exponential number* of
mounts. Crazy, right?
This commit mounts the root inside the build root using a private
bind mount, which doesn't propagate mount events. This solves the
problem with the exponential growth.
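In terms of the mount(8) commands this boils down to (a sketch; the actual code may drive mount(2) differently):

```python
import subprocess

def bind_mount_private(source, target):
    # Recursive bind, then cut off propagation so mount events from /
    # (which systemd makes shared) no longer multiply into our tree.
    subprocess.run(["mount", "--rbind", source, target], check=True)
    subprocess.run(["mount", "--make-rprivate", target], check=True)
```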
But the original problem was different, mount points were disappearing.
So how does this fix solve the problem?
Honestly, I don't know. Something with mount-event propagation is
probably responsible, but I cannot imagine how it is actually affecting
the unbinding.
Treat outputs like we treat trees: store them in the object store. This
simplifies using osbuild and allows returning a cached version if one is
available.
This makes the `--output` parameter redundant. Remove it.
`osbuild --json [ARGS]` will suppress the normal output and print its
result as JSON. For now, it only does this when it returns 0. Otherwise,
it prints the error from the latest stage.
This is useful for other tools to call it and get machine-readable
output.
Introduce an output id, which is the checksum over the full pipeline,
including all stages and the assembler. The id of a pipeline did not
include the assembler before. To be less confusing, rename the
existing id to "tree id".
In BuildRoot, a new /var mount pointing to a temporary directory in
the host's /var/tmp is created. This gives us temporary storage inside
the container that is not hosted on tmpfs. Thanks to that, we can move
larger files out of the tmpfs-backed part of the filesystem to save
memory on machines with low memory capacity.
The best practice for creating a pipeline should be to include at least
one level of build-pipelines. This makes sure that the tools used to
generate the target image are well-defined.
In principle one could add several layers, though in practice one
would hope that the environment used to build the buildroot does not
affect the final image (and as we cannot recurse indefinitely anyway,
we fall back to simply using the host system in this case).
This only makes sense if the contents of the host system truly do not
affect the generated image, and as such we do not include any
information about the host when computing the hash that identifies a
pipeline.
In fact, any image could be used in its place, as long as the required
tools are present. This commit takes advantage of that fact. Rather than
run a pipeline with the host as the build root, take a second pipeline
to generate the buildroot, but do not include this when computing the
pipeline id (so it is different from simply editing the original JSON).
This is necessary so we can use the same pipelines on significantly
different host systems (run with different --build-pipeline arguments).
In particular, it allows our test pipelines that generate f30 images
to be run unmodified on Travis (which runs Ubuntu).
Signed-off-by: Tom Gundersen <teg@jklm.no>
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when moved to a
separate file.
In __init__.py only import what is meant to be public API.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make the order of arguments match how they are used (and also how
they are conceptually closer to the pipeline JSON document).
This makes no practical difference, as the two arguments were both
just used for computing the hash.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Each pipeline is now self-contained without references to another.
However, as the final stage in a pipeline is saved to the content
store, we are able to reuse it if one pipeline is the prefix of
another, as described in the previous commit. This makes the
concept of a base redundant.
The ObjectStore must take a directory as argument, never None, so
the conditional assertion for this in Pipeline.run() is ok to
remove.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Don't do this only for the base, but for any prefix of the current
pipeline.
Note that if two pipelines share a prefix, but one is not the prefix
of another, no sharing is possible. Only a proper prefix can be
reused by another pipeline, as only the result of the last pipeline
is saved to the object store (this restriction could be changed in
the future).
Signed-off-by: Tom Gundersen <teg@jklm.no>
Take this as an argument to __init__ in the same way that `base`
is.
This avoids us having to deal with the case of someone setting a
stage before the build, which does not work as the stage id will
be wrong.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Renaming a directory over an existing one is only an error if the
existing one is not empty, in which case ENOTEMPTY is thrown.
Tested with:
>>> import errno, os
>>> os.mkdir("foo")
>>> os.mkdir("bar")
>>> os.rename("foo", "bar")
# no error
>>> open("foo/a", "w").write("a")
1
>>> try: os.rename("bar", "foo")
... except OSError as e: e.errno == errno.ENOTEMPTY
...
True
The build pipeline is a sub-pipeline used to generate the build
tree to use rather than the current root directory. This can be
nested arbitrarily deep, but ultimately we will fall back to the
current logic when no build property is found.
Just like the tree after the last stage of a regular pipeline ends
up in the object store, so does currently each build tree (as the
build sub-pipeline really is just a regular pipeline in its own
right). We may want to avoid both these instances of the implicit
storing semantics, and rather make it something the caller opts in
to. However, for now that is left as a future optimization.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are content
addressable filesystem trees. Currently we only ever use their
content-hash internally, but the idea for this is basically Lars
Karlitski and Kay Sievers' `treesum()`. We may expose this in the
future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized with the
tree with the given id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
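Usage, as described above (import path and argument names are assumptions):

```python
from osbuild.objectstore import ObjectStore  # import path assumed

store = ObjectStore("/var/osbuild")
tree_id, base_id = "abc...", "def..."  # placeholder ids

if not store.has_tree(tree_id):
    # Start from the base tree; the result is only committed to the
    # store if the block exits without an exception.
    with store.new_tree(tree_id, base_id=base_id) as path:
        ...  # populate `path`

with store.get_tree(tree_id) as path:
    ...  # read-only view of the finished tree
```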
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be manually flushed.
Update the Travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than hard-coding this to /, let the caller provide the
directory path to use.
In the past, we needed to give special treatment to /, as it had
to be bind-mounted before being used by nspawn, to work around a
check they had, refusing to use the host root in the container.
We no longer pass the directory directly to nspawn, but rather
mount the subdirs we want ourselves, so that no longer applies.
The callers pass in /, so the behavior is unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We want the same functionality, but we now implement it ourselves.
In addition to bind-mounting in /usr into the target container
(which is all nspawn does), we also add /bin, /sbin, /lib and
/lib64, if they exist and are not symlinks (presumably into
/usr).
This means we can work on distros that have not implemented the
usr-move, like Ubuntu Bionic (used by Travis).
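In outline (the bind-mount helper is hypothetical):

```python
import os

def mount_os_dirs(root, dest, bind_mount):
    """Bind the OS directories from `root` into the container at `dest`."""
    bind_mount(os.path.join(root, "usr"), os.path.join(dest, "usr"))
    for d in ("bin", "sbin", "lib", "lib64"):
        src = os.path.join(root, d)
        # Skip symlinks (presumably pointing into /usr); only real
        # directories need to be bind-mounted separately.
        if not os.path.islink(src) and os.path.isdir(src):
            bind_mount(src, os.path.join(dest, d))
```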
Signed-off-by: Tom Gundersen <teg@jklm.no>