Instead of just mocking the binary, also write a log of how it was
called so that tests can use it to check that the right options are
passed.
Note that the API should be improved here: instead of returning
a "naked" path to the call log file there should be a class wrapping
it. And of course there should be tests.
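For illustration, a hypothetical way a test could use the call log;
the yielded path and the one-argument-line-per-invocation format are
assumptions, and `code_that_runs_losetup()` is just a stand-in for
the code under test:
```
with testutil.mock_command("losetup", "#!/bin/sh\nexit 0\n") as calllog_path:
    code_that_runs_losetup()
    # hypothetical format: one line of arguments per invocation
    first_call = open(calllog_path).read().splitlines()[0].split()
    assert "--partscan" in first_call
```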
The current `make_container()` helper is a bit silly (which is
entirely my fault). It requires a container tag as input, but all
tests end up generating a random number for it. So remove the input,
return the container_id from the podman build in the context manager,
and use that instead.
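A rough sketch of the reworked helper (assumed shape, not the exact
test code); `podman build -q` prints only the image id, so the
context manager can simply yield that:
```
import contextlib
import subprocess

@contextlib.contextmanager
def make_container(context_dir):
    # build and capture the image id instead of asking the caller for a tag
    container_id = subprocess.check_output(
        ["podman", "build", "-q", context_dir], encoding="utf-8").strip()
    try:
        yield container_id
    finally:
        subprocess.run(["podman", "rmi", "-f", container_id], check=False)
```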
Based on the feedback from Tomáš in [0] this commit adds tests
that ensure consistent behavior between the python and the json
loader.
It's not 100% because the python loader is extremely lenient and does
not even check that the required pieces of the json are there.
I.e. it will load a module without a SCHEMA or SCHEMA_2 variable
and the json loader code will warn about the issue but not
raise an error.
Fwiw, I have no strong opinion here but I do lean slightly towards
staying close to the original code (both approaches, failing with an
exception and continuing with a warning, have good arguments).
[0] https://github.com/osbuild/osbuild/pull/1618#discussion_r1521141148
Instead of always parsing the python stage to load meta information,
allow the use of a new `{stage}-meta.json` file. This is a first
step towards allowing modules to be written in a different language
than python. It also has some practical advantages:
- slightly faster as it avoids calling python to output the schemas
- easier to write schemas as this can now be done in a real json
  editor
- more extensible in a future where stages may be binaries with
  shlib dependencies that are only satisfied in the buildroot
  but not on the host
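Roughly, the lookup order described above boils down to something
like the following sketch (the helper name and signature are
illustrative, not the actual loader code):
```
import json
import os

def load_stage_meta(stage_path, fallback_loader):
    # prefer the JSON sidecar; only fall back to parsing the python module
    meta_path = f"{stage_path}-meta.json"
    if os.path.exists(meta_path):
        with open(meta_path) as fp:
            return json.load(fp)
    return fallback_loader(stage_path)
```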
This reverts commit 158acaac78.
With https://github.com/osbuild/bootc-image-builder/pull/238 the
original reason to call mknod goes away, so we can just revert it.
osbuild now not only requires the loop device but also uses
`losetup --partscan` quite a lot, so the mknod approach becomes
impractical and consumers of osbuild in a container should just
set up devtmpfs.
Since we support python3.6 we cannot assume that dicts are ordered
in any way. To ensure the `id` is still always valid we pass
sort_keys=True to json.dump().
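A minimal illustration of the effect (not the actual osbuild code):
with sort_keys=True the serialization, and therefore any id derived
from it, no longer depends on dict insertion order.
```
import hashlib
import json

def content_id(data):
    # stable serialization -> stable id, regardless of key insertion order
    serialized = json.dumps(data, sort_keys=True)
    return hashlib.sha256(serialized.encode()).hexdigest()

assert content_id({"a": 1, "b": 2}) == content_id({"b": 2, "a": 1})
```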
Thanks to Simon!
Generate log messages with origin "org.osbuild.main" when
pipelines/stages start and finish. This way a higher-level
frontend can display high-level progress coming from this
origin and filter out e.g. stage-based log messages (which
are usually quite technical as they are just stdout/stderr
from the stages).
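As a sketch of what a frontend could do with this, assuming each
JSON-seq record carries the origin in its context (the exact field
layout is an assumption here):
```
import json

def high_level_progress(jsonseq_lines):
    # keep only records from "org.osbuild.main", drop per-stage output
    for line in jsonseq_lines:
        entry = json.loads(line.lstrip("\x1e"))
        if entry.get("context", {}).get("origin") == "org.osbuild.main":
            yield entry
```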
Tweak the Progress class to be simpler. Given that progress does
not need to support arbitrary depth but only has a single level
the class now just exposes "sub_progress" to the caller.
When the main progress is advanced the sub_progress is now fully
deleted instead of just reset. The rationale is that when the main
progress is done and advances a step it is very likely that a
new sub_progress is required and it is most likely an error if
the same sub_progress were re-used.
This means that `reset()` can be removed as it's not used anymore
(and YAGNI). We can add it back when we have a use-case.
It also changes the code so that "total" starts at 0 instead
of `None` (principle of least surprise). This means that
`progress.incr()` is now called in the JSONSeqMonitor() for
`finish()` and `result()` to indicate that the pipeline/stage
is finished.
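The shape of the simplified class is roughly the following (an
illustrative sketch; names and details may differ from the real
code):
```
class Progress:
    def __init__(self, name, total=0):
        self.name = name
        self.total = total   # starts at 0, not None
        self.done = 0
        self._sub_progress = None

    def sub_progress(self, name, total=0):
        # only a single level of nesting is supported
        self._sub_progress = Progress(name, total)
        return self._sub_progress

    def incr(self):
        # advancing the main progress invalidates the sub progress;
        # the next step must create a new one
        self.done += 1
        self._sub_progress = None
```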
This commit tweaks Context a bit so that any write will automatically
reset the `_id`. This ensures that we do not forget to reset `_id`
when the code changes.
It also tweaks the naming a bit: before, there was a "setter" for
origin and functions to set "pipeline" and "stage". They are now
all functions with a "set_" prefix, mostly for symmetry.
The class LogLine() is purely used as a dataclass with no state
and the only function on it is `as_dict()`. This got refactored
into a new function `log_entry()` because there is no need for
this to be a class. The function takes the same inputs.
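A sketch of the refactor (the field names are illustrative): instead
of a class whose only method is as_dict(), a plain function builds
the dict directly.
```
def log_entry(message, context, progress=None):
    # same inputs as the old LogLine(), but no class needed
    entry = {"message": message, "context": context}
    if progress is not None:
        entry["progress"] = progress
    return entry
```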
This is a follow-up to #1550 where we enabled a `rw` permissions mode,
which is not ideal since it would theoretically be possible to set both
`ro` and `rw` modes at the same time. This commit fixes the issue by
only allowing one option at a time.
Fixes #1588
The user can now customize the systemd unit load path, selecting
between 'etc' or 'usr' (defaults to 'usr'). The user can also
customize the scope of the service, either 'global' or 'system'
(defaults to 'system').
Signed-off-by: Sayan Paul <paul.sayan@gmail.com>
The `test_assembler.py` hardcodes some filesystem and partition
UUIDs. This leads to hard-to-diagnose test failures when the
test is run in parallel. The btrfs and xfs filesystem drivers
will see the same uuid for multiple created images and sometimes
error with something like:
```
Mar 06 10:22:54 top kernel: BTRFS error: device /dev/loop104 belongs to fsid aff010e9-df95-4f81-be6b-e22317251033, and the fs is already mounted, scanned by mount (123856)
```
It's a race that only happens when two images are checked at the
same time.
This commit fixes the issue by just using a randomized UUID in
the test_assemblers.py. It also re-enables running the test in
parallel (which makes it run a lot faster, from 34min to 14min).
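The gist of the fix, roughly (illustrative, not the exact test
code): generate a fresh UUID per run instead of a hardcoded one, so
parallel runs never hand the kernel the same fsid.
```
import uuid

def random_fs_uuid():
    # unique per test run, so btrfs/xfs never see a duplicate fsid
    return str(uuid.uuid4())
```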
The BLS specification [0] says the `options` field is optional and
can also appear multiple times. This commit tweaks the code to
deal with these corner cases and also adds tests that ensure that
this works correctly.
It also tweaks the file handling to be atomic.
[0] https://uapi-group.org/specifications/specs/boot_loader_specification/
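For illustration, the two corner cases and the atomic write boil
down to something like the sketch below (hypothetical helper names,
not the actual stage code): `options` may be missing entirely or
appear on several lines, and the rewritten entry is replaced via
rename.
```
import os
import tempfile

def read_options(entry_lines):
    # collect all "options" lines; an entry without any is valid too
    opts = []
    for line in entry_lines:
        key, _, value = line.partition(" ")
        if key == "options":
            opts.extend(value.split())
    return opts

def write_entry_atomically(path, content):
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as fp:
            fp.write(content)
        os.rename(tmp, path)  # atomic replace on the same filesystem
    except BaseException:
        os.unlink(tmp)
        raise
```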
- Processing all the necessary operations related to CoreOS
  platforms is crucial and specific to CoreOS. This step
  is essential for CoreOS exclusively.
- Our approach to handling 'platforms.json' may change as we
  advance with the OSBuild work. However, we don't yet have a clear
  vision of how it will look in the future, particularly as
  we also manage similar components within the osbuild composer
  to configure cloud parameters. We will probably know better
  when we start working with the cloud artifacts.
As a summary, let's add it now to unblock us, and if we find a
better approach in the future, we can always go back and remove it.
Signed-off-by: Renata Ravanelli <rravanel@redhat.com>
A manifest (mpp and json) that uses the new source and input with the
skopeo stage.
This depends on the image we store at
./test/data/stages/skopeo/hello.img
The plan is to test this by pulling hello.img into the host root
storage, building the manifest, deleting the image from storage, and
checking the tree.
Add systemd unit files in osbuild stage
This stage creates systemd unit files in `/usr/lib/systemd/system/`.
The stage accepts a filename which must end with `.service`. The
`Unit`, `Service`, and `Install` sections accept various parameters
as per the systemd documentation. `systemd-analyze verify` is
performed after the .service file is created to check for potential
errors.
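As a purely hypothetical illustration of what such stage options
could look like (the real schema may use different key names; only
the Unit/Service/Install section names come from systemd itself):
```
options = {
    "filename": "hello.service",
    "config": {
        "Unit": {"Description": "Say hello"},
        "Service": {"ExecStart": "/usr/bin/echo hello"},
        "Install": {"WantedBy": ["multi-user.target"]},
    },
}
```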
Signed-off-by: Sayan Paul <paul.sayan@gmail.com>
The new `testutil.mock_command` context manager can be used to
mock commands in PATH and replace them with arbitrary shell
scripts. This is useful in testing to e.g. simulate exact error
conditions that would be hard to trigger otherwise or to replace
long running commands with faked results.
Example:
```
fake_cmd = textwrap.dedent("""\
do-something
""")
with mock_command("some-cmd", fake_cmd):
your_code
```
This commit adds example manifests for a bootc.install-to-filesystem
system. It does not do more with them because running a full test
requires a working podman which is difficult to use inside our
GH runners that are already running inside docker.
This adds a `default: true` option for all cases where OSTree
information is specified in schemas and allows for the information
to be picked up from the filesystem.
This is a safe operation because when building disk images there is
no known case where having two deployments makes sense. If such a
case ever comes up, the osname, ref, and serial options still
exist and can be used.
Co-authored-by: Luke Yang <luyang@redhat.com>
Co-authored-by: Michael Vogt <michael.vogt@gmail.com>
Since the stage depends on quite a specific tree state (ostree prepped
tree with boot files), we can't really unit test it any simpler than
generating a tree with and without running the stage and diffing the
tree.
We only support `gpt` here so it would seem this option doesn't
make much sense to add, but it will make it so that the mpp-define-images
from osbuild-mpp can be passed in to `org.osbuild.sgdisk` just as it
can be passed in today to `org.osbuild.sfdisk`.
Partitions are indexed starting at 1 by default, but in
some cases, such as CoreOS for IBM Z, it may be useful
to set the 'partnum' for GPT disks explicitly, without
creating dummy partitions.
Now user can define an image:
```
mpp-define-images:
- id: image
size: 10737418240
table:
uuid: 00000000-0000-4000-a000-000000000001
label: gpt
partitions:
- name: boot
type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
partnum: 3
size: 786432
- name: root
type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
partnum: 4
size: 4194304
```
So the target disk would look like:
```
Disklabel type: gpt
Disk identifier: 00000000-0000-4000-A000-000000000001
Device Start End Sectors Size Type
/dev/loop0p3 2048 788479 786432 384M Linux filesystem
/dev/loop0p4 788480 4982783 4194304 2G Linux filesystem
```
This patch updates the osbuild-mpp tool and the sgdisk and sfdisk
stages to support this.
Co-authored-by: Dusty Mabe <dusty@dustymabe.com>
This commit adds code that will remove the least recently used
entries when a store() operation does not succeed because the
cache is full. To be more efficient it will try to free
twice the requested size (this can be configured in the code).
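Condensed, the eviction idea looks roughly like this (an
illustrative sketch; the real cache tracks entries differently):
```
import os

def free_space(entries, required, overhead_factor=2):
    # entries: list of (last_used, size, path); free ~2x what was requested
    freed, target = 0, required * overhead_factor
    for _, size, path in sorted(entries):  # least recently used first
        if freed >= target:
            break
        os.unlink(path)
        freed += size
    return freed
```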
Add or remove the immutable bit on the specified mount directory.
The need we have for this right now is for the CoreOS builds where
the immutable bit being set on an OSTree deployment root doesn't
survive the `cp -a --reflink=auto` in the org.osbuild.copy stage when
being copied from the directory tree into the mounted XFS filesystem
we created on the disk image. Thus we have to work around this loss
of the attribute by applying it directly on the mounted
filesystem from the disk.
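At its core this comes down to toggling the flag with chattr on the
mounted directory, roughly like the sketch below (illustrative, not
the stage's actual code):
```
import subprocess

def set_immutable(path, immutable=True):
    # chattr +i sets the immutable bit, -i clears it
    flag = "+i" if immutable else "-i"
    subprocess.run(["chattr", flag, path], check=True)
```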