packit is a service for continuous delivery into Fedora repositories. It
should help us synchronize the upstream repository on GitHub with the
downstream repository on src.fedoraproject.org.
Travis uses Ubuntu, which does not ship dnf, so introduce a yum
stage that allows us to test actual generation of trees on Travis.
We use this to generate a tree containing the tools necessary to
create arbitrary Fedora-based build images in the future. We base
this on Fedora 27, as that is the last version that is installable
using yum rather than dnf.
In the future, once we support pipelines with nested build-images,
rather than just using the host OS as the build image, this will
allow us to bootstrap arbitrary pipelines on Travis.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We want the same functionality, but we now implement it ourselves.
In addition to bind-mounting in /usr into the target container
(which is all nspawn does), we also add /bin, /sbin, /lib and
/lib64, if they exist and are not symlinks (presumably into
/usr).
This means we can work on distros that have not implemented the
usr-move, like Ubuntu Bionic (used by Travis).
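Roughly, the added logic amounts to something like this (a sketch;
function and variable names are illustrative, not the actual BuildRoot
code):

    import os
    import subprocess

    def mount_os_dirs(root):
        # /usr is always bind-mounted into the target root.
        dirs = ["/usr"]
        # On distros without the usr-move (e.g. Ubuntu Bionic), /bin,
        # /sbin, /lib and /lib64 are real directories rather than
        # symlinks into /usr, so bind-mount them as well.
        for d in ("/bin", "/sbin", "/lib", "/lib64"):
            if os.path.isdir(d) and not os.path.islink(d):
                dirs.append(d)
        for d in dirs:
            target = root + d
            os.makedirs(target, exist_ok=True)
            subprocess.run(["mount", "--bind", d, target], check=True)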
Signed-off-by: Tom Gundersen <teg@jklm.no>
Call update-ca-certificates if the binary is found, generating SSL
certificates in /etc on Debian-based systems in a similar way to what
is done on Red Hat-based ones.
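The check is of the form (a minimal sketch, not the exact code):

    import shutil
    import subprocess

    # Regenerate the certificate bundle under /etc if the Debian-style
    # tool exists; do nothing on systems that don't ship it.
    if shutil.which("update-ca-certificates"):
        subprocess.run(["update-ca-certificates"], check=True)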
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is a RHism that is not available on Debian-based systems.
Do not make it a hard requirement, as pipelines may be able to
function just fine without it.
In a follow-up commit we will also check for the Debian-based
equivalent.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Let's always use the latest available Ubuntu release for our CI. We
are interested in potentially building old images, and in using old
images as build images, but having an old distro as the host is not
necessarily an aim. If we want to test with a greater diversity of
distros (which we do), we should do that in VMs; this should just
be for the simple/quick case.
Also restructure a bit to allow for more (named) tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The underlying filesystem was mounted in __init__ and unmounted in
__exit__/__del__. This meant that if the same object was reused in
several `with` clauses, only the first one would work as intended.
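Moving the mount into __enter__ restores the expected behaviour; a
simplified sketch of the pattern (not the actual class):

    import os
    import subprocess
    import tempfile

    class BuildRoot:
        def __init__(self, image):
            self.image = image
            self.root = None

        def __enter__(self):
            # Mount on entry, so the same object can be reused in
            # several `with` clauses.
            self.root = tempfile.mkdtemp()
            subprocess.run(["mount", self.image, self.root], check=True)
            return self

        def __exit__(self, *exc):
            subprocess.run(["umount", self.root], check=True)
            os.rmdir(self.root)
            self.root = None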
Signed-off-by: Tom Gundersen <teg@jklm.no>
Support the LOOP_SET_DIRECT_IO ioctl, which allows us to control
whether a loopback device should perform its own buffering or rely
on that of the underlying backing file.
Enabling this should improve both throughput and memory consumption,
but it is not currently hooked up, as more testing would be required.
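If it were hooked up, the call would look roughly like this (constant
value taken from <linux/loop.h>; treat this as a sketch):

    import fcntl

    LOOP_SET_DIRECT_IO = 0x4C08  # from <linux/loop.h>

    def set_direct_io(loop_fd, dio=True):
        # Ask the kernel to use direct I/O against the backing file
        # instead of buffering the data a second time in the loop driver.
        fcntl.ioctl(loop_fd, LOOP_SET_DIRECT_IO, int(dio))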
This way the assemblers/stages are valid in isolation, even without
osbuild installed system-wide. This is needed for things to work
when --libdir is not the system-wide one, as the library would
otherwise not be in sys.path.
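One way to picture the effect (a sketch; the actual path handling in
the stages may differ):

    import os
    import sys

    # Make the osbuild package importable relative to this stage, so the
    # stage also works from a checkout with --libdir pointing somewhere
    # other than the system-wide location.
    sys.path.insert(
        0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

    import osbuild  # noqa: E402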
Signed-off-by: Tom Gundersen <teg@jklm.no>
RPM packages are now kept in the output directory after the build, so
that we know exactly which packages to copy to the test. The test
directory now contains a special directory for RPMs. The Fedora
developer portal is referenced from the README file.
Stop guessing if we're in the source directory by looking if a `stages`
subdirectory exists. Instead, assume that osbuild is installed on the
host.
If `--libdir` is given, mount the libdir into `/run/osbuild/lib` (alas,
we can't overwrite `/usr/libexec/osbuild`) and run osbuild from there.
Thus, running from source must now be done like this:
# python3 -m osbuild --libdir . [other args]
The repository now contains a Vagrantfile for running the testing script
against an RPM package created locally using `make rpm`. To run this
test, use `make vagrant-test`. setup.py was also modified to adhere to
packaging guidelines and not to install system-level executables.
The license is now included in the Python package using the MANIFEST.in
file.
This really only makes sense if we are running systemd as PID1
inside the container, but we are not booting a system, just using
it as a glorified chroot.
This means entering the namespaces from the outside will be a bit
more cumbersome, but that was not used much and was never reliable
to begin with.
Signed-off-by: Tom Gundersen <teg@jklm.no>
loop.py is a simple wrapper around the kernel loop API. remoteloop.py
uses this to create a server/client pair that communicates over an
AF_UNIX/SOCK_DGRAM socket to allow the server to create loop devices
for the client.
The client passes a fd that should be bound to the resulting loop
device, and a dir-fd where the loop device node should be created.
The server returns the name of the device node to the client.
The idea is that the client is run from within a container without
access to devtmpfs (and hence /dev/loop-control), and the server
runs on the host. The client would typically pass its (fake) /dev
as the output directory.
For the client this will be similar to `losetup -f foo.img --show`.
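On the wire this boils down to SCM_RIGHTS fd-passing over the datagram
socket; a client-side sketch (socket path and message format are
illustrative, not the actual protocol):

    import array
    import os
    import socket

    def send_fds(sock, msg, fds):
        # Attach the file descriptors as SCM_RIGHTS ancillary data, so
        # the server on the host can duplicate and act on them.
        ancdata = [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                    array.array("i", fds))]
        sock.sendmsg([msg], ancdata)

    client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    client.connect("/run/osbuild/remoteloop")    # illustrative path

    image_fd = os.open("foo.img", os.O_RDWR)     # fd to bind to the device
    dev_fd = os.open("/dev", os.O_DIRECTORY)     # the (fake) /dev
    send_fds(client, b"losetup", [image_fd, dev_fd])
    name = client.recv(256).decode()             # e.g. "loop0"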
[@larskarlitski: pylint: ignore the new LoopInfo class, because it
only has dynamic attributes. Also disable attribute-defined-outside-init,
which (among other problems) is not ignored for that class.]
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add a directory to each BuildRoot potentially containing a set of
sockets. Also add a helper to create a named bound socket in a given
BuildRoot.
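The helper is essentially (a sketch with illustrative names; the
datagram socket type is an assumption here):

    import os
    import socket

    def bind_socket(sockets_dir, name):
        # Create a named AF_UNIX socket inside the BuildRoot's socket
        # directory; code inside the container finds it by name.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        sock.bind(os.path.join(sockets_dir, name))
        return sock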
Signed-off-by: Tom Gundersen <teg@jklm.no>
Create a loopback device for the raw partition, rather than relying on
the partition devices the kernel puts in /dev. This requires us to
specify the part_msdos module directly, as grub2-install now seems
unable to detect the partition table type.
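The effect is roughly what `losetup --offset` gives you; a sketch
(offset and image name are illustrative):

    import subprocess

    # Attach only the partition's byte range of the raw image to its own
    # loop device, instead of relying on the /dev/loopXpN partition
    # nodes the kernel would create.
    partition_offset = 2048 * 512   # e.g. first partition at sector 2048
    device = subprocess.check_output(
        ["losetup", "--find", "--show",
         "--offset", str(partition_offset), "image.raw"],
        encoding="utf-8").strip()
    # grub2-install cannot detect the partition table through this
    # device, hence part_msdos is passed explicitly via --modules.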
Signed-off-by: Tom Gundersen <teg@jklm.no>