`s3cmd sync` actually downloads metadata for all objects in an S3 bucket.
We have built a lot of RPMs, so this takes 5 minutes on AWS and 25 minutes
on my laptop (!!!).
Let's use recursive put instead. This doesn't delete any files on the remote
side. As we upload RPMs only once, this also shouldn't fail on "the
object already exists". Using this method, we should be able to upload the
RPMs in seconds.
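As a rough sketch of the kind of invocation meant here, wrapped in Python
(bucket and local directory are placeholders, not the actual CI values):

    import subprocess

    # Upload the freshly built RPMs without listing the remote bucket first;
    # nothing on the remote side gets deleted.
    subprocess.run(
        ["s3cmd", "put", "--recursive",
         "rpmbuild/RPMS/",               # placeholder: local RPM directory
         "s3://example-bucket/rpms/"],   # placeholder: target bucket/prefix
        check=True,
    )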
The same patch was applied in osbuild-composer cf73edd2
Extend the `org.osbuild.rhsm` stage to configure selected options in the
subscription-manager configuration (in `/etc/rhsm/rhsm.conf`). Only the
values currently set in RHEL AMI images can be configured,
specifically:
- `manage_repos` option in `rhsm` section
- `auto_registration` option in `rhsmcertd` section
Ensure that the stage does not "touch" any configuration files unless
it actually changes them. This prevents changing the file modification
time.
Update the `org.osbuild.rhsm` stage test case to set the additional
configuration options.
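For illustration only, a hedged sketch of that behaviour using Python's
configparser (the concrete values and the actual stage implementation may
differ):

    import configparser

    RHSM_CONF = "/etc/rhsm/rhsm.conf"

    config = configparser.ConfigParser()
    config.read(RHSM_CONF)

    changed = False
    # Example values; the manifest decides what actually gets set.
    for section, option, value in [("rhsm", "manage_repos", "0"),
                                   ("rhsmcertd", "auto_registration", "1")]:
        if not config.has_section(section):
            config.add_section(section)
        if config.get(section, option, fallback=None) != value:
            config.set(section, option, value)
            changed = True

    # Only rewrite the file if something changed, so its mtime stays intact.
    if changed:
        with open(RHSM_CONF, "w") as f:
            config.write(f)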
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Use `patternProperties` instead of `propertyNames` with `pattern`:
`propertyNames` is not in draft 4 and so did not work (but also did
not throw an error).
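To illustrate the difference, expressed as Python dicts (the pattern and the
value schema are made up for the example):

    # `propertyNames` is not part of draft 4 and is silently ignored there:
    not_draft4 = {
        "type": "object",
        "propertyNames": {"pattern": "^[A-Za-z0-9_.-]+$"},
    }

    # The same intent expressed with `patternProperties`, which draft 4 knows:
    draft4 = {
        "type": "object",
        "patternProperties": {"^[A-Za-z0-9_.-]+$": {}},
        "additionalProperties": False,
    }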
Co-Developed-by: Achilleas Koutsou <achilleas@koutsou.net>
Add a new manifest that creates an ostree commit, deploys that,
creates a raw image and copies the deployment into it. The
resulting artefact is a bootable qcow2 image.
The stage was converted to use inputs, but its schema was not, which
means that although the stage requires inputs, they could not be
specified. Doh. Change the expected input to `commit`.
NB: This stage should be broken up, so *SHOULD NOT* be used in newly
created pipelines.
Fix a small whitespace issue as well.
Based on the part of the qemu assembler that converts the raw image
into different virtualization formats, such as qcow2. It supports
all the formats the old qemu assembler supported.
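A hedged sketch of the kind of conversion involved (invocation and file
names are illustrative, not taken from the stage):

    import subprocess

    # Convert the raw disk image into the requested target format, e.g. qcow2.
    subprocess.run(
        ["qemu-img", "convert", "-O", "qcow2", "disk.raw", "disk.qcow2"],
        check=True,
    )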
Currently the `org.osbuild.files` input only supports the source origin.
Extend support to mapping files from pipelines, using the recently added
sub-tree reading capability of `ObjectStore.read_at`. Restructure the
JSON schema to keep it as readable as possible.
Add the ability to only read a sub-tree of a tree via `Object.read_at`.
Expose the functionality via `Store{Server,Client}.read_tree_at`.
Extend the tests to check this new functionality.
The `org.osbuild.files` input provides individual files to a stage.
Change the `refs` key in the returned dict to `files` to better
reflect that fact. Also adapt the documentation to indicate that
the keys are actually paths and not necessarily checksums. This prepares
for future extension of the `files` input to pipeline origins.
This stage is the part of the qemu assembler that generates and
installs the grub2 core image on non-UEFI or hybrid systems,
like x86 legacy and PPC64LE (Open Firmware).
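A hedged sketch of what generating such a core image can look like
(directory, prefix and module list are placeholders; the stage's actual
invocation may differ):

    import subprocess

    # Build a BIOS (i386-pc) core image with a minimal set of modules.
    subprocess.run(
        ["grub2-mkimage",
         "--format", "i386-pc",
         "--directory", "/usr/lib/grub/i386-pc",
         "--prefix", "(,msdos1)/boot/grub2",
         "--output", "core.img",
         "part_msdos", "ext2", "biosdisk"],
        check=True,
    )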
This stage can be used to copy items, such as files or trees, from one
location to another. The only location currently supported for reading
is `input`. Supported locations for writing are `mount` and
`tree`.
Allows stages to access file systems provided by devices.
This makes mount handling transparent to the stages, i.e.
the individual stages do not need any code for different
file system types and the underlying devices.
Device service that provides support for binding files within the tree
to loopback devices. Valid parameters are `filename`, `offset`
and `size`. This controls what part of the file to bind to the loop
device. The unit for `size` and `offset` is sectors and the sector
size can be configured via the `sector-size` parameter. The reason
behind the sector unit is so that numbers can easily be compared
with those specified in the partition table.
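For example, binding the first partition of a disk image could look like
this (all values are placeholders, not taken from a real manifest):

    # Hypothetical device options, with `offset` and `size` given in sectors:
    options = {
        "filename": "disk.img",   # file inside the tree to bind
        "offset": 2048,           # start of the partition, in sectors
        "size": 204800,           # length of the partition, in sectors
        "sector-size": 512,       # bytes per sector
    }
    # The bound byte range is offset * 512 to (offset + size) * 512, which
    # lines up directly with the numbers in the partition table.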
A new host service that provides device functionality to stages.
Since stages run in a container and are restricted from creating
device nodes, all device handling is done in the main osbuild
process. Currently this is done with the help of APIs and RPC,
e.g. `LoopServer`. Device host services on the other hand allow
declaring devices in the manifest itself and then osbuild will
prepare all devices before running the stage. One desired effect
is that it makes device handling transparent to the stages, e.g.
they don't have to know about loopback devices, LVM or LUKS.
Another result is that specific device handling is now modular, like
Inputs and Sources, and is thus moved out of osbuild itself.
Create an `InputService` class with an abstract method called `map`,
meant to be implemented by all inputs. An `unmap` method may be
optionally overridden by inputs to clean up resources.
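A minimal sketch of that interface (the exact method signatures are
assumptions based on this description):

    import abc

    class InputService(abc.ABC):

        @abc.abstractmethod
        def map(self, store, origin, refs, target, options):
            """Make the requested resources available below `target`."""

        def unmap(self):
            """Optionally overridden by inputs to clean up resources."""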
Instantiate a `host.ServiceManager` in the `Stage.run` section and
pass it to the host-side input code so it can be used to spawn the
input services.
Convert all existing inputs to the new service framework.
Instead of bind-mounting each individual input into the container,
create a temporary directory that is used by all inputs and bind-
mount this to the well-known location ("/run/osbuild/inputs"). The
temporary directory is then passed to the input so that it can
make the requested resources available relative to that directory.
This is enforced by the common input handling code.
Additionally, pass the well-known input path via a new "paths" key
to the arguments dictionary passed to the stage.
This helper property is misleading since it is not the name of the
input in the context of the manifest, but actually its "type". Name is
a left-over from the nomenclature of format v1, where the type of
stages and inputs was called `name`.
Resolve relative paths for items in the `api.arguments` call: since paths
are different on the host and in the container, they can be transmitted
as relative paths. Resolve the items for all groups that have paths
registered.
Instead of using `check=True` for `subprocess.run`, which turns
a process failure (i.e. non-zero return codes) into a generic
`CalledProcessError` exception, use `check=False` and explicitly
handle mount errors, translating them into a `RuntimeError` with
a better error message.
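In essence the pattern becomes (command and message are illustrative):

    import subprocess

    def mount(source: str, target: str) -> None:
        # Run mount ourselves and translate a failure into a readable error.
        r = subprocess.run(["mount", "--bind", source, target], check=False)
        if r.returncode != 0:
            raise RuntimeError(
                f"mounting {source} at {target} failed with exit code {r.returncode}"
            )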
Implement a new `read_at` method that will bind mount the tree of the
object to a specified location, instead of a temporary directory as is
done in the `read` method. Implement the latter via `read_at`.
Implement the corresponding methods for `Store{Client,Server}`. Since
the `ObjectStore.read_at` method will fail if the target directory
does not exist (or is of the wrong type), catch any exceptions in
the `StoreServer` and send those to the `StoreClient` via an `error`
entry.
This one is for David: also fix a missing blank line.
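Roughly, the server side then does something like this (function and
message layout are assumptions, not the exact code):

    def handle_read_tree_at(store, object_id, target):
        """Server-side sketch: forward failures as an `error` entry."""
        try:
            path = store.read_at(object_id, target)
            return {"path": path}
        except Exception as error:
            # e.g. the target directory does not exist or has the wrong type
            return {"error": str(error)}

    # The StoreClient checks for "error" in the reply and raises accordingly.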
This disables buffering for the standard output stream for python
executables spawned within the build root. This should help with
the ordering of text output in stages: when stdout is buffered,
debug messages via `print` will end up in that buffer. When
executables are run in the stage via `subprocess.run`, their
stdout has its own buffering, which will be flushed at the end
of the run. If stdout was not manually flushed before invoking
the executable, the output of the tool will be emitted before
anything in the buffer. For example:
print("stage")
subprocess.run(["echo", "tool"])
Will lead have the following ordering:
"tool"
"stage"
To avoid this, without having to manually flush the stdout
buffer before every `subprocess.run`, disable buffering for
python binaries run inside the build root.
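One way to get that effect is to force unbuffered mode through the
interpreter; a hedged sketch (how this is actually wired into the build
root may differ):

    import os
    import subprocess

    # Either pass `-u` to the interpreter ("/path/to/stage" is a placeholder) ...
    subprocess.run(["python3", "-u", "/path/to/stage"], check=True)

    # ... or set PYTHONUNBUFFERED in the environment of the spawned process.
    env = dict(os.environ, PYTHONUNBUFFERED="1")
    subprocess.run(["/path/to/stage"], env=env, check=True)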
Since low-level primitives (os.read) are used to read from the stdout
pipe, manual text decoding was necessary there anyway. The `encoding`
argument meant that we could forgo the manual decoding for the call
to `communicate`. But this meant that text handling was not uniform.
Therefore, remove the `encoding` argument from the `Popen` call and
manually decode all the text.
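The resulting pattern, roughly (command and chunk size are illustrative):

    import os
    import subprocess

    proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)  # no encoding=
    data = os.read(proc.stdout.fileno(), 4096)  # low-level read, returns bytes
    text = data.decode("utf-8")                 # decode manually, in one place
    proc.wait()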
Host services are a way to provide functionality to stages that is
restricted to the host and not directly available in the container,
such as providing input to stages, device access and mounting.
This commit introduces a `ServiceManager` class that can be used to
start and (automatically) stop host services, as well as a `Service`
base class together with a `ServiceClient` class that can be used to
implement host services and communicate with them. Refer to the doc
string of the module for more information.
Often, a message is sent and then followed by a call to `recv`
to wait for a reply. Create a simple helper `send_and_recv` that
does both in one method.
Add a simple check for that helper to the tests.
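The helper amounts to something like this (the surrounding class and exact
signatures are assumptions):

    class Channel:  # hypothetical stand-in for the real communication object
        def send(self, msg):
            ...

        def recv(self):
            ...

        def send_and_recv(self, msg):
            """Send `msg` and block until the corresponding reply arrives."""
            self.send(msg)
            return self.recv()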