Use the new Manifest.depsolve function to only build the pipelines that
were explicitly requested and their dependencies, taking into account
what is already present in the store.
Since not all pipelines will be built now, there won't be a result entry
for every pipeline; the format version 2 result formatting was therefore
changed to not require each pipeline to be present in the result set.
Add a new function that takes a list of pipelines and returns the list
of pipelines that need to be built, i.e. the pipelines and all their
dependencies that are not already present in the store.
Add corresponding test.
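A minimal sketch of the idea; the `pipeline.build`, `pipeline.id`, and
`store.contains` names here are assumptions, not the actual osbuild API:

```python
def depsolve(manifest, targets, store):
    # Return the pipelines that actually need to be built.
    build = []
    for name in targets:
        pipeline = manifest[name]
        # the build pipeline is a dependency, resolve it first
        if pipeline.build:
            build += depsolve(manifest, [pipeline.build], store)
        # skip pipelines whose tree is already in the store
        if not store.contains(pipeline.id):
            build.append(pipeline)
    # de-duplicate while keeping dependency order
    seen = set()
    return [p for p in build if not (p.id in seen or seen.add(p.id))]
```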
When building a version 1 manifest, the assembler would always be
exported, even when not requested via the `--export` command line
option. This was done for backwards compatibility, so as not to break
tools relying on that behavior. The problem is that supporting it
requires a completely different code path, and the implicit export is
by now also confusing behavior. Thus remove it and really only ever
export what was explicitly requested by the caller.
Since we bind `/proc` inside the container, we leak certain information
that comes with it. One of these is the kernel command line. None of the
decisions made by software running inside the container should depend
on the kernel command line of the host, so overwrite the kernel command
line by creating a temporary directory and mapping it inside the build
root. For now we default to a simple `root=/dev/osbuild` fake kernel
command line.
Add a simple check for it as well.
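Roughly what this amounts to; the `buildroot` path and the plain
`mount --bind` call are illustrative stand-ins for the actual container
setup:

```python
import os
import subprocess
import tempfile

buildroot = "/run/osbuild/buildroot"  # illustrative path

# prepare a fake kernel command line in a temporary directory
tmp = tempfile.mkdtemp()
cmdline = os.path.join(tmp, "cmdline")
with open(cmdline, "w", encoding="utf8") as f:
    f.write("root=/dev/osbuild\n")

# map it over the file inherited from the host's /proc
subprocess.run(
    ["mount", "--bind", cmdline, os.path.join(buildroot, "proc/cmdline")],
    check=True)
```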
Instead of having custom code that basically duplicates the
functionality of `LoopControl.loop_for_fd`, use that function.
Additionally, the version used in the test had a bug: it did not
re-create the Loop device in the main loop when it was closed due
to an error, leading to errors in subsequent usages of the device
that would often manifest in CI runs:

    fcntl.ioctl(self.fd, self.LOOP_SET_FD, fd)
    ValueError: file descriptor cannot be a negative integer (-1)
This stage takes /usr/lib/passwd and /usr/etc/passwd from an OSTree
checkout, merges them into one file, and stores it as /etc/passwd in
the buildroot.
It does the same for /etc/group.
The reason for doing this is that there is an issue with unstable UIDs
and GIDs when creating OSTree commits from scratch. When a package
creates a system user or a system group, it can change the UIDs and
GIDs of users and groups that are created later.
This is not a problem in traditional deployments, because already
created users and groups never change their UIDs and GIDs. With OSTree,
however, we recreate the files from scratch and then replace the
previous ones, so the IDs can actually change.
By copying the files to the build root before doing any other
operations, we can make sure that the UIDs and GIDs of already existing
users and groups won't change.
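A sketch of the merge for /etc/passwd; how duplicates between the two
files are resolved (first occurrence wins here) is an assumption:

```python
import os

def merge_passwd(checkout, buildroot):
    # remember which user names were already seen so that entries
    # from /usr/lib/passwd take precedence over /usr/etc/passwd
    seen = set()
    merged = []
    for source in ("usr/lib/passwd", "usr/etc/passwd"):
        with open(os.path.join(checkout, source), encoding="utf8") as f:
            for line in f:
                name = line.split(":", 1)[0]
                if name not in seen:
                    seen.add(name)
                    merged.append(line)
    path = os.path.join(buildroot, "etc/passwd")
    with open(path, "w", encoding="utf8") as f:
        f.writelines(merged)
```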
Co-authored-by: Christian Kellner <christian@kellner.me>
Add a simple check that data written through the loop device is
actually ending up in the file. NB: this will _fail_ if the fd is
cleared via `clear_fd` without the use of `flush_buf`. It seems
that the kernel (as of 5.13.8) will indeed not clear the buffer
cache of the loop device if the backing file is detached via
`LOOP_CLR_FD`. On the other hand, if the autoclear flag is set,
i.e. the backing file is cleared when the last file descriptor of
the loop device is closed, the buffer cache will be cleared as
part of the `release` operation of the block device.
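The check boils down to something like the following sketch;
`loop.devname`, `loop.flush_buf`, and `backing_path` are assumed names:

```python
import os

data = b"osbuild" * 512

# write through the loop device ...
fd = os.open(os.path.join("/dev", loop.devname), os.O_RDWR)
try:
    os.pwrite(fd, data, 0)
finally:
    os.close(fd)

# ... flush the buffer cache; without this, detaching the backing
# file via LOOP_CLR_FD can leave the data in the cache only ...
loop.flush_buf()

# ... and verify the bytes actually landed in the backing file
with open(backing_path, "rb") as f:
    assert f.read(len(data)) == data
```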
Add support for locking the loopback block device via `flock(2)`.
The main use case for this is to prevent systemd-udevd from
probing the device while any modification is done to it. See the
systemd page, https://www.freedesktop.org/software/systemd, for
more details.
Add the corresponding tests to it.
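In its simplest form the locking maps directly to `fcntl.flock` on an
open fd for the device node (`devfd` is an assumption here):

```python
import fcntl

# take an exclusive, non-blocking lock so that systemd-udevd holds
# off probing while we modify the device
fcntl.flock(devfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
try:
    ...  # e.g. write the partition table or filesystem
finally:
    fcntl.flock(devfd, fcntl.LOCK_UN)
```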
Add a helper method that clears the fd for a given loop device but
also ensures that the loop device is not bound to the supplied fd
anymore. Check the function documentation for more information.
Add a corresponding test.
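A sketch of such a helper; the helper name and the polling strategy are
assumptions, and `is_bound_to` is the check described below:

```python
import fcntl
import time

LOOP_CLR_FD = 0x4C01  # from <linux/loop.h>

def clear_fd_wait(loop, fd, timeout=5.0):
    # ask the kernel to detach the backing file ...
    fcntl.ioctl(loop.fd, LOOP_CLR_FD)
    # ... and wait until the device stops reporting it as bound;
    # the detach is deferred while other references are open
    deadline = time.monotonic() + timeout
    while loop.is_bound_to(fd):
        if time.monotonic() > deadline:
            raise TimeoutError("loop device still bound to fd")
        time.sleep(0.1)
```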
Add a `Loop.is_bound_to` helper that checks if the loopback device is
bound and, if so, whether the backing file refers to the same file as `fd`.
The latter is done by comparing the device and inode information.
Add a helper that will check if the loop device is backed by
the file identified via the stat(2) result, i.e. the inode on
the corresponding device.
Add a corresponding test for the new helper.
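A sketch of the comparison, assuming a `get_status` wrapper around
LOOP_GET_STATUS64 like the one described in the next commit:

```python
import errno
import os

def is_bound_to(self, fd):
    try:
        info = self.get_status()
    except OSError as e:
        if e.errno == errno.ENXIO:
            return False  # no backing file bound at all
        raise
    # lo_device and lo_inode identify the backing file; compare
    # them with the stat(2) result of the supplied fd
    st = os.fstat(fd)
    return info.lo_device == st.st_dev and info.lo_inode == st.st_ino
```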
Implement a `Loop.get_status` method, to get the properties of the
loop device, corresponding to LOOP_GET_STATUS64, and counterpart
to the existing `Loop.set_status` method. Use the new `get_status`
call in the `set_status` call, replacing the existing code that
does the same thing.
Add a basic test for the `get_status` method. Also fix an actual
leak, where the loop device was closed but the fd was not cleared
inside the test.
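A sketch of what this ioctl wrapper can look like; the structure
mirrors `struct loop_info64` from `<linux/loop.h>`, and error handling
is elided:

```python
import ctypes
import fcntl

LOOP_GET_STATUS64 = 0x4C05

class LoopInfo64(ctypes.Structure):
    _fields_ = [
        ("lo_device", ctypes.c_uint64),
        ("lo_inode", ctypes.c_uint64),
        ("lo_rdevice", ctypes.c_uint64),
        ("lo_offset", ctypes.c_uint64),
        ("lo_sizelimit", ctypes.c_uint64),
        ("lo_number", ctypes.c_uint32),
        ("lo_encrypt_type", ctypes.c_uint32),
        ("lo_encrypt_key_size", ctypes.c_uint32),
        ("lo_flags", ctypes.c_uint32),
        ("lo_file_name", ctypes.c_uint8 * 64),
        ("lo_crypt_name", ctypes.c_uint8 * 64),
        ("lo_encrypt_key", ctypes.c_uint8 * 32),
        ("lo_init", ctypes.c_uint64 * 2),
    ]

def get_status(self):
    # the kernel fills the structure in place
    info = LoopInfo64()
    fcntl.ioctl(self.fd, LOOP_GET_STATUS64, info)
    return info
```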
Validate source references while loading manifests so that a bad
reference would result in a meaningful error message instead of a
hard-to-understand Python exception.
Now that arguments are transmitted via a mapped, i.e. bind-mounted,
file instead of using the jsoncomm RPC mechanism, all the methods
related to the latter can be removed from the API.
Add the ability to only read a sub-tree of a tree via `Object.read_at`.
Expose the functionality via `Store{Server,Client}.read_tree_at`.
Extend the tests to check this new functionality.
Often, a message is sent and then followed by a call to `recv`
to wait for a reply. Create a simple helper, `send_and_recv`,
that does both in one method.
Add a simple check for that helper to the tests.
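The helper is essentially just the composition of the two calls (the
exact parameter passing is an assumption):

```python
def send_and_recv(self, *args, **kwargs):
    # send the message and block until the reply arrives
    self.send(*args, **kwargs)
    return self.recv()
```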
Add a new constructor method that allows creating a `Socket` from
an existing file descriptor of a socket. This might be needed when
the socket was passed to a child process.
Add a simple test for the new constructor method.
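A sketch of such a constructor; the address family, socket type, and
wrapping are assumptions:

```python
import socket

@classmethod
def new_from_fd(cls, fd):
    # socket.fromfd duplicates the descriptor, so the caller keeps
    # ownership of the original fd
    sock = socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_SEQPACKET)
    return cls(sock)
```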
Add a new constructor method, `Socket.new_pair`, to create a pair
of connected sockets (via `socketpair`) and wrap both sides via
`jsoncomm.Socket`.
Add a simple test to check it.
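Conceptually the pair constructor is just `socketpair` plus wrapping,
matching the sketch above (family and type are again assumptions):

```python
import socket

@classmethod
def new_pair(cls):
    # create two already-connected sockets and wrap each side
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)
    return cls(a), cls(b)
```

One side of the pair can then be handed to a child process and
re-wrapped there via the fd-based constructor above.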
The previous version covered too few use cases; more specifically, it
only handled a single subscription. Many hosts have more than one, so
osbuild needs to understand multiple subscriptions.
When running the org.osbuild.curl source, read the
/etc/yum.repos.d/redhat.repo file and load the system subscriptions from
there. While processing each URL, guess which subscription is tied to
the URL and use the CA certificate, client certificate, and client key
associated with this subscription. It must be done this way because the
depsolving and fetching of RPMs may be performed on different hosts,
and the subscription credentials differ in such a case.
A more detailed description of why this approach was chosen is available
in osbuild-composer git: https://github.com/osbuild/osbuild-composer/pull/1405
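A sketch of the lookup, assuming the usual redhat.repo option names
(`baseurl`, `sslcacert`, `sslclientcert`, `sslclientkey`); the
longest-prefix match is an assumption about the guessing heuristic:

```python
import configparser

def load_subscriptions(path="/etc/yum.repos.d/redhat.repo"):
    parser = configparser.ConfigParser()
    parser.read(path)
    return [
        {
            "baseurl": parser.get(section, "baseurl"),
            "sslcacert": parser.get(section, "sslcacert"),
            "sslclientcert": parser.get(section, "sslclientcert"),
            "sslclientkey": parser.get(section, "sslclientkey"),
        }
        for section in parser.sections()
    ]

def find_subscription(subscriptions, url):
    # take the literal part of each baseurl (before any
    # $releasever-style variable) and pick the longest prefix
    # that matches the requested url
    def prefix(s):
        return s["baseurl"].split("$", 1)[0]

    matches = [s for s in subscriptions if url.startswith(prefix(s))]
    return max(matches, key=lambda s: len(prefix(s))) if matches else None
```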
Test that `checksum.verify_file` works correctly, which internally
uses the only other utility function `checksum.hexdigest_file`.
Check all algorithms currently supported by the `org.osbuild.curl`
source.
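For reference, the two utilities conceptually amount to the following
(a sketch; the real signatures may differ):

```python
import hashlib

def hexdigest_file(path, algorithm):
    # stream the file through hashlib so that large files do not
    # need to fit into memory
    md = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            md.update(chunk)
    return md.hexdigest()

def verify_file(path, checksum):
    # `checksum` is of the form "algorithm:hexdigest"
    algorithm, want = checksum.split(":", 1)
    return hexdigest_file(path, algorithm) == want
```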
In the output test, check that for a successful pipeline run all
the elements are present: main result, assembler result, stage
result.
NB: The build result is hard to test because we would need to actually
build a valid build root.
The `org.osbuild.files` source provides files, but it might not be
the only one that does so in the future. Therefore rename it to
match the internal tool that is being used to fetch the files, as
is done for most other osbuild modules that target tools.
The format v1 loader is adapted to make this change transparent
for users of the v1 format, so we remain backwards compatible.
Change the MPP depsolve preprocessor so that the `org.osbuild.curl`
source is used for format v2 based manifests. Also rename the
corresponding source test and adapt the format v2 mod test to use
the curl source.
Add a basic check to verify that loading and then describing the
pipeline results in the same description that was put in. This
test is especially valuable because it checks the runner mapping
and the name and id mappings.
Add a new test to check that validation works for the basic test
pipeline. This needs to be extended in the future to check that
invalid data is being caught properly, but it is a start.
Change the `ModuleInfo.schema` property into a `get_schema`
method. This is in preparation for supporting different schema
versions.
Commit d028ea5b16 introduced a bug when introducing the `store`
argument to `Stage.run`: instead of passing `var=var`, i.e.
passing `var` as a keyword argument, `var` is now being passed
as a positional one. Since the `path=/run/osbuild` keyword
argument comes before the `var=/var/tmp` argument, `var` is now
being passed as `path` instead of `var`.
Since `var` is always passed throughout the entire codebase,
make it a positional argument and move it before `path`.
Adapt the tests to pass `var` as a positional argument.
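To illustrate the bug (signatures heavily simplified, not the real ones):

```python
# before: `var` had to be passed as a keyword argument
def run(self, tree, runner, build_tree, store, monitor,
        path="/run/osbuild", var="/var/tmp"):
    ...

stage.run(tree, runner, build_tree, store, monitor, var)
# ^ the positional `var` fills the `path` slot

# after: `var` is positional and comes before `path`
def run(self, tree, runner, build_tree, store, monitor, var,
        path="/run/osbuild"):
    ...
```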