- Enhanced APT stage with advanced features:
  - Package version pinning and holds (sketched below)
  - Custom repository priorities
  - Specific version installation
  - Updated schemas for all new options
- New dependency resolution stage (org.osbuild.apt.depsolve):
  - Advanced dependency solving with conflict resolution
  - Multiple strategies (conservative, aggressive, resolve)
  - Package optimization and dry-run support
- New Docker/OCI image building stage (org.osbuild.docker):
  - Docker and OCI container image creation
  - Flexible configuration for entrypoints, commands, env vars
  - Image export and multi-format support
- New cloud image generation stage (org.osbuild.cloud):
  - Multi-cloud support (AWS, GCP, Azure, OpenStack, DigitalOcean)
  - Cloud-init integration and provider-specific metadata
  - Live ISO and network boot image creation
- New debug and developer tools stage (org.osbuild.debug):
  - Debug logging and manifest validation
  - Performance profiling and dependency tracing
  - Comprehensive debug reports
- Example manifests for all new features:
  - debian-advanced-apt.json - Advanced APT features
  - debian-docker-container.json - Container image building
  - debian-aws-image.json - AWS cloud image
  - debian-live-iso.json - Live ISO creation
  - debian-debug-build.json - Debug mode
- Updated .gitignore with comprehensive artifact patterns
- All tests passing: 292 passed, 198 skipped
- Phase 7.3 marked as completed in todo.txt
debian-forge is now production-ready with advanced features! 🎉
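As a rough sketch of the version-pinning feature, a stage might be configured like this (the option names here are illustrative assumptions, not the authoritative schema; see debian-advanced-apt.json for the real form):

    {
      "type": "org.osbuild.apt",
      "options": {
        "packages": ["nginx=1.22.1-9"],
        "hold": ["nginx"]
      }
    }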
Using a metalink or mirrorlist along with the package paths and
checksums allows them to be downloaded reliably even when the mirrors
are not all in sync: the download is retried against a new mirror until
it succeeds or all mirrors have been tried.
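For illustration, such a source would carry a mirror definition plus per-package paths and checksums, roughly along these lines (field names are assumptions, not the exact schema):

    {
      "items": {
        "sha256:aaaa...": {
          "path": "Packages/h/hello-2.10.rpm",
          "mirror": "fedora"
        }
      },
      "options": {
        "mirrors": {
          "fedora": {
            "url": "https://mirrors.example.com/metalink?repo=fedora-40",
            "type": "metalink"
          }
        }
      }
    }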
The full list of packages is also listed in terraform
containers/blob/main/docker-bake.hcl#L240 ("BASE_PACKAGES"),
so this README and that package list should be kept roughly in sync.
I am keeping the mailing list link; however, nobody was able to tell me how one can subscribe to it. I think it is a Google Groups list now, and there is no join option.
The osbuild-composer project has a build-related section in its main
README. For osbuild, it was missing. Even though this is a script-based
project, external developers may not be aware of that just by looking
at the README. The missing build-related section has been added for
consistency.
This adds a stage called org.osbuild.skopeo that installs Docker and
OCI archive files into the container storage of the tree being
constructed.
The source can either be a file from another pipeline, for example one
created with the existing org.osbuild.oci-archive stage, or it can
use the new org.osbuild.skopeo source and org.osbuild.containers
input, which download an image from a registry and install that.
The install stage has an optional setting for configuring a custom
storage location, which allows the use of the additionalimagestores
option in the container storage.conf to set up read-only image stores
(instead of /var/lib/containers).
Note: skopeo fails to start if /etc/containers/policy.json is
not available, so we bind mount it from the build tree to the
buildroot if available.
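A minimal sketch of the registry variant (digests shortened; treat the exact field names as approximate):

    "sources": {
      "org.osbuild.skopeo": {
        "items": {
          "sha256:aaaa...": {
            "image": { "name": "docker.io/library/busybox" }
          }
        }
      }
    }

with the stage consuming it via the containers input:

    {
      "type": "org.osbuild.skopeo",
      "inputs": {
        "images": {
          "type": "org.osbuild.containers",
          "origin": "org.osbuild.source",
          "references": { "sha256:aaaa...": {} }
        }
      }
    }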
This commit changes our release process from the model of having a
release commit (and pull request), which also updated the NEWS.md file
and bumped the versions in the osbuild.spec and setup.py files, to
simply pushing a tag.
After the tag (containing the release notes) is pushed, a GitHub
composite action is triggered that creates a GitHub release with the
contents of the git release tag. Furthermore, the version number now
always has to be bumped directly after a release, to avoid having to
push an (untested) commit to main for the release; this is also
handled by the GitHub composite action.
Finally, packit now pushes directly to dist-git when the release tag
is pushed, so no pull request needs to be reviewed and merged anymore.
The current Build section is misleading: running the commands mentioned
there leads to only the osbuild module and the osbuild-mpp tool being
installed.
None of the other required artifacts (sources, stages, schemas, etc.)
will be, leaving the installed osbuild not working at all. Instead, have
an Install section that explains how osbuild can be installed from RPMs.
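For example, on Fedora the packaged version can be installed with:

    sudo dnf install osbuild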
This swaps the `systemd-nspawn` implementation for `bubblewrap` to
contain sub-processes. It also adjusts the `BuildRoot` implementation
to reduce the number of mounts we need to maintain locally.
This has the following advantages:
* We know exactly what the build-root looks like. Only the bits and
pieces we select will end up in the build-root. We can let RPM
authors know what environment their post-install scripts need to
run in, and we can reliably test this.
* We no longer need any D-Bus access or access to other PID1
facilities. Bubblewrap allows us to execute from any environment,
including containers and sandboxes.
* Bubblewrap setup is significantly faster than nspawn. This is a
minor point though, since nspawn is still fast enough compared to
the operations we perform in the container.
* Bubblewrap does not require root.
At the same time, we have a bunch of downsides which might increase the
workload in the future:
* We now control the build-root, which also means we have to make sure
it works on all our supported architectures, all quirks are
included, and all required resources are accessible from within the
build-root.
The good thing here is that we have lots of prior art we can
follow, and all the other projects play the same game of
whack-a-mole, so we can join that fun.
The `bubblewrap` project is used by podman and flatpak; it is packaged
for all major distributions and looks like a stable dependency.
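For a flavor of what the new containment looks like, a bubblewrap invocation along these lines builds up the environment from explicitly selected paths (an illustrative sketch, not the exact command we construct):

    bwrap --ro-bind "$buildroot/usr" /usr \
          --dir /tmp --tmpfs /run \
          --proc /proc --dev /dev \
          --bind "$tree" /run/osbuild/tree \
          --unshare-pid --die-with-parent \
          -- /run/osbuild/stage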
With the introduction of man-pages, this now moves the usage
information, examples, and pipeline descriptions into the man-page. With
this out of the way, the README is restructured with information on how
to install the project, what dependencies exist, and directions on
where to find further information or contact upstream.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Update the command-line options help text, as well as the sample
command line for building `base-qcow2.json`, to include the new
sources command-line option.
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
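In a pipeline, this looks roughly like the following (the runner name is illustrative):

    "build": {
      "runner": "org.osbuild.fedora30",
      "pipeline": {
        "stages": [ ... ]
      }
    }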
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
Downloading the gpg key is fragile and kept causing our tests to fail.
In general, we want to limit network access, so let's just embed
the gpg keys directly in the pipeline.
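A repo object then carries the key inline instead of a URL, roughly like this (key material elided; field names approximate):

    {
      "baseurl": "https://example.com/repo",
      "gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...\n-----END PGP PUBLIC KEY BLOCK-----"
    }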
Fixes #133.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Otherwise, sfdisk would pick one at random. We want our images to be
reproducible to the extent possible, so we must move all randomness
out of the assemblers when we can.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Opt in to supporting the most common ones; if we want to support more,
we can add support as the need arises.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Treat outputs like we treat trees: store them in the object store. This
simplifies using osbuild and allows returning a cached version if one is
available.
This makes the `--output` parameter redundant. Remove it.
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.
Verify that the metadata dnf downloaded to install packages matches that
checksum. This way, this stage will give an error when a repository
changed between putting together the pipeline and running it.
Don't pass through arbitrary options. This means that pipeline repo
objects don't have the same options as dnf repo files anymore:
1. Hard-code the repo name to the repo id. The name has no influence on
the resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when gpgkey is given. It defaults to false, which
means that all sample and test pipelines didn't verify packages. It
would have failed anyway, because the container doesn't have the key
referenced in /etc. Change all gpgkeys to refer to the key id and import
them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we need
it at some point.
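Taken together, a repo object in a pipeline now looks roughly like this (values shortened, field names approximate):

    {
      "id": "fedora",
      "baseurl": "https://example.com/fedora/30/x86_64",
      "gpgkey": "XXXX1234",
      "checksum": "sha256:aaaa..."
    }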
We've been effectively using the basearch of the host, making the stage
non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. However,
pipelines producing different outputs must be different. Thus, this
patch includes the basearch in the pipeline.
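For example, the stage options would now pin the architecture explicitly (the option name is an assumption):

    "options": {
      "basearch": "x86_64"
    }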
In principle, this allows cross-arch builds. dnf should be the only
stage running binaries from the target tree. This is not yet tested.
Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.
As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
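For example, a 3 GiB filesystem is specified as 3 * 1024³ bytes, which is a multiple of 512:

    "size": 3221225472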
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are
content-addressable filesystem trees. Currently we only ever use their
content-hash internally, but the idea for this is basically Lars
Karlitski and Kay Sievers' `treesum()`. We may expose this in the
future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized with the
tree with the given id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be manually flushed.
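A sketch of how a caller such as Pipeline.run() can use this API (the helper run_stages() is hypothetical):

    def build_tree(store, tree_id, base_id=None):
        # A cached tree can be returned as-is; has_tree() guarantees it remains.
        if store.has_tree(tree_id):
            return
        # Otherwise create it, initialized from base_id's tree (empty if None).
        with store.new_tree(tree_id, base_id=base_id) as path:
            run_stages(path)  # hypothetical: stages populate `path`
        # Only on clean exit of the block is the tree committed under tree_id.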
Update the Travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The new arguments are passed to the first and last stage,
respectively, and are both directories. --input is read-only and can
be used to initialize the first stage. --output is read-write and is
where the final stage should place the produced image.
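Usage then looks something like this (the exact invocation may differ):

    osbuild --input ./input-tree --output ./output-dir pipeline.json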
Signed-off-by: Tom Gundersen <teg@jklm.no>