Two configurations for the listeners are now possible:
- enableJWT=false with client SSL auth
- enableJWT=true with HTTPS
Actual verification of the tokens is handled by
https://github.com/openshift-online/ocm-sdk-go.
An authentication handler is run as the top level handler, before any
routing is done. Routes which do not require authentication should be
listed as exceptions.
Authentication can be restricted using an ACL file which allows
filtering based on JWT claims. For more information see the inline
comments in ocm-sdk/authentication.
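For illustration, the wiring looks roughly like this. This is only a sketch: the
builder methods (NewHandler, KeysURL, ACLFile, Public, Next, Build) are assumed to
match ocm-sdk-go's authentication package and may differ between SDK versions, and
the JWK URL, ACL path and public route are made-up placeholders.

    import (
        "net/http"

        "github.com/openshift-online/ocm-sdk-go/authentication"
        "github.com/openshift-online/ocm-sdk-go/logging"
    )

    // wireAuth is a sketch of how the authentication handler wraps the router;
    // the builder method names are assumptions, not a verified API reference.
    func wireAuth(logger logging.Logger, router http.Handler, jwkURL, aclPath string) (http.Handler, error) {
        return authentication.NewHandler().
            Logger(logger).
            KeysURL(jwkURL).      // JWK certificates endpoint of the SSO server
            ACLFile(aclPath).     // optional claim-based ACL
            Public("^/metrics$"). // placeholder for routes exempt from authentication
            Next(router).         // the actual routing happens below this handler
            Build()
    }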
As an added quirk, the `-v` flag for the osbuild-composer executable was
changed to `-verbose` to avoid a flag collision with glog, which declares
the `-v` flag in its package `init()` function. The ocm-sdk depends on
glog and pulls it in.
The problem: osbuild-composer used to have rather incomplete logic for
selecting client certificates and keys while fetching data from
repositories that use the "subscription model". In this scenario, every
repo requires the user to use a client-side TLS certificate. The problem
is that every repo can use its own CA and require a different certificate
and key pair. This case wasn't handled at all in composer.
Furthermore, osbuild-composer can use remote workers, which complicates
things even more.
Assumptions: The problem outlined above is hard to solve in the general
case, but Red Hat Subscription Manager places certain limitations on how
subscriptions might be used. For example, a subscription must be tied to
a host system, so there is no way to use such a repository in osbuild-composer
without it being available on the host system as well.
Also, if a user wishes to use a certain repository in osbuild-composer, it
must be available on both hosts: the composer and the worker. It will come
with a different client certificate and key pair but otherwise, its
configuration remains the same.
The solution: Expect all the subscriptions to be registered in the
/etc/yum.repos.d/redhat.repo file. Read the mapping of URLs to certificates
and keys from there and use it. Don't change the manifest format and let
osbuild guess the appropriate subscription to use.
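As a rough illustration of the idea (not the actual composer code), building that
mapping boils down to scanning the INI-style redhat.repo file and remembering the
client certificate and key configured for each baseurl:

    import (
        "bufio"
        "os"
        "strings"
    )

    type repoSecrets struct {
        SSLClientCert string
        SSLClientKey  string
    }

    // parseRedhatRepo maps each baseurl in /etc/yum.repos.d/redhat.repo to the
    // client certificate and key configured in the same repository section.
    func parseRedhatRepo(path string) (map[string]repoSecrets, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        value := func(line string) string {
            parts := strings.SplitN(line, "=", 2)
            if len(parts) != 2 {
                return ""
            }
            return strings.TrimSpace(parts[1])
        }

        secrets := map[string]repoSecrets{}
        var baseurl string
        var current repoSecrets
        flush := func() {
            if baseurl != "" {
                secrets[baseurl] = current
            }
            baseurl, current = "", repoSecrets{}
        }

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            switch {
            case strings.HasPrefix(line, "["): // a new [repo-id] section starts
                flush()
            case strings.HasPrefix(line, "baseurl"):
                baseurl = value(line)
            case strings.HasPrefix(line, "sslclientcert"):
                current.SSLClientCert = value(line)
            case strings.HasPrefix(line, "sslclientkey"):
                current.SSLClientKey = value(line)
            }
        }
        flush()
        return secrets, scanner.Err()
    }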
The oldest distributions that we now support are RHEL 8.4 and Fedora 33.
They both support Go 1.15, so let's bump.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This commit adds and implements the org.osbuild.azure.image target.
Let's talk about the already implemented org.osbuild.azure target first:
The purpose of this target is to authenticate using the Azure Storage
credentials and upload the image file as a Page Blob. Page Blob is basically
an object in storage and it cannot be directly used to launch a VM. To achieve
that, you need to define an actual Azure Image with the Page Blob attached.
For the cloud API, we would like to create an actual Azure Image that is
immediately available for new VMs. The new target accomplishes this.
To achieve this, it must use a different authentication method: Azure OAuth.
The other important difference is that currently, the credentials are stored
on the worker and not in the target options. This should lead to better security
because we don't send the credentials over the network. In the future, we would
like to have a credential-less setup using workers in Azure with the right
IAM policies applied, but this requires more investigation and is not
implemented in this commit.
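For reference, turning the uploaded page blob into an Azure Image comes down to one
call against the compute API. This is a sketch assuming the azure-sdk-for-go compute
client (package version, resource group, image name and blob URI are all
placeholders), not the exact code in this commit:

    import (
        "context"

        "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-07-01/compute"
        "github.com/Azure/go-autorest/autorest"
        "github.com/Azure/go-autorest/autorest/to"
    )

    // registerImage creates an Azure Image backed by an already uploaded page
    // blob, so that new VMs can be launched from it directly.
    func registerImage(ctx context.Context, authorizer autorest.Authorizer,
        subscriptionID, resourceGroup, imageName, location, blobURI string) error {

        c := compute.NewImagesClient(subscriptionID)
        c.Authorizer = authorizer // obtained via Azure OAuth (client credentials)

        future, err := c.CreateOrUpdate(ctx, resourceGroup, imageName, compute.Image{
            Location: to.StringPtr(location),
            ImageProperties: &compute.ImageProperties{
                StorageProfile: &compute.ImageStorageProfile{
                    OsDisk: &compute.ImageOSDisk{
                        OsType:  compute.Linux,
                        OsState: compute.Generalized,
                        BlobURI: to.StringPtr(blobURI), // the page blob uploaded earlier
                    },
                },
            },
        })
        if err != nil {
            return err
        }
        // image creation is asynchronous; block until the operation finishes
        return future.WaitForCompletionRef(ctx, c.Client)
    }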
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add a new internal upload target for Google Cloud Platform and an
osbuild-upload-gcp CLI tool which uses the API.
Supported features are:
- Authenticate with GCP using an explicitly provided JSON credentials
file or let the authentication be handled automatically by the
Google cloud client library. The latter is useful e.g. when the worker
is running in a GCP VM instance which has permissions associated with
it.
- Upload an existing image file into an existing Storage bucket (see the
sketch after this list).
- Verify the MD5 checksum of the uploaded image file against the local
file's checksum.
- Import the uploaded image file into Compute Engine as an Image.
- Delete the uploaded image file after a successful image import.
- Delete all cache files from storage created as part of the image
import build job.
- Share the imported image with a list of specified accounts.
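The upload and checksum-verification steps above boil down to roughly the following.
A sketch using the Google Cloud storage client; bucket, object and credentials-file
names are placeholders and error handling is simplified:

    import (
        "bytes"
        "context"
        "crypto/md5"
        "fmt"
        "io"
        "os"

        "cloud.google.com/go/storage"
        "google.golang.org/api/option"
    )

    // uploadAndVerify uploads an image file into a Storage bucket and checks the
    // MD5 sum reported by GCP against the locally computed one.
    func uploadAndVerify(ctx context.Context, credsFile, bucket, object, imagePath string) error {
        // drop the credentials option to fall back to the library's automatic
        // authentication, e.g. when running in a GCP VM instance
        client, err := storage.NewClient(ctx, option.WithCredentialsFile(credsFile))
        if err != nil {
            return err
        }
        defer client.Close()

        f, err := os.Open(imagePath)
        if err != nil {
            return err
        }
        defer f.Close()

        // upload the image and compute its MD5 sum on the fly
        sum := md5.New()
        w := client.Bucket(bucket).Object(object).NewWriter(ctx)
        if _, err := io.Copy(io.MultiWriter(w, sum), f); err != nil {
            w.Close()
            return err
        }
        if err := w.Close(); err != nil {
            return err
        }

        // compare the local checksum with the one reported by Storage
        attrs, err := client.Bucket(bucket).Object(object).Attrs(ctx)
        if err != nil {
            return err
        }
        if !bytes.Equal(attrs.MD5, sum.Sum(nil)) {
            return fmt.Errorf("MD5 mismatch after uploading %q", object)
        }
        return nil
    }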
A GCP-specific image type is not yet added, since GCP supports importing
VMDK and VHD images, which osbuild-composer already supports.
Update go.mod, vendor/ content and SPEC file with new dependencies.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Due to https://github.com/Azure/azure-storage-blob-go/issues/236, we had to
use a weird version of this library (see 1b051922).
A new release came out yesterday that's tagged correctly, so let's use it
and remove the hack from go.mod.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The oldest distros we support are Fedora 32 and RHEL 8.3. As both have
Go 1.14, we're safe to upgrade.
Also, I had to change prepare-source.sh because go fmt now refuses to run on
a project which has issues in go.mod, go.sum or modules.txt. I think this
should be a harmless change.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Fedora 33 and rawhide got an updated version of the azblob library. Sadly, it
introduced an incompatible API change. This commit does the same thing as
a67baf5a did for kolo/xmlrpc:
We now have two wrappers around the affected part of the API. Fedora 32 uses
the wrapper around the old API, whereas Fedora 33 and 34 (and RHEL with its
vendored deps) use the wrapper around the new API. The switch is implemented
using go build flags and spec file magic.
See a67baf5a for more thoughts.
Also, there's v0.11.1-0.20201209121048-6df5d9af221d in go.mod, why?
The maintainers of azblob probably tagged the wrong commit as v0.12.0, which
breaks Go. The long v0.11.1-.* version is basically the proper v0.12.0 commit.
See https://github.com/Azure/azure-storage-blob-go/issues/236 for more
information.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Fedora 33 ships the new API so let's do the switch now.
But... this would break older Fedoras because they only have the old API,
right?
We have the following options:
1) Ship an xmlrpc compat package to Fedora 33+. This would mean that we delay
the API switch till F32 EOL. This would be the most elegant solution, yet it
has two issues: a) We will surely not be able to deliver the compat package
before the F33 Final Freeze. b) It's extra and annoying work.
2) Downstream patch. No.
3) Use build constraints and have two versions of our code for the two
different APIs.
I chose solution #3. It has an issue though:
The %gobuild macro already passes a -tags argument to go build. Therefore, the
following line fails because it's not possible to use -tags more than once:
    %gobuild -tags kolo_xmlrpc_oldapi ...
Therefore I had to come up with manual tinkering with the build constraints
in the spec file. This is pretty ugly, but I like that:
1) Go code is actually clean, no weird magic is happening there.
2) We can still ship our software to Fedora/RHEL as we used to
(no downstream patches).
3) All downstreams can use the upstream spec file directly.
Note that this doesn't affect RHEL in any way as it uses vendored libraries.
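For illustration, the Go side of the switch is just two small wrapper files guarded
by build constraints; the file and package names below are placeholders, only the
kolo_xmlrpc_oldapi tag is taken from the text above:

    // xmlrpc_wrapper_old.go -- built only when the kolo_xmlrpc_oldapi tag is set,
    // i.e. on Fedora 32, which still ships the old kolo/xmlrpc API.
    // +build kolo_xmlrpc_oldapi

    package koji

    // ...wrapper functions written against the old kolo/xmlrpc API...

    // xmlrpc_wrapper_new.go -- built by default, i.e. on Fedora 33+ and on RHEL
    // with its vendored dependencies.
    // +build !kolo_xmlrpc_oldapi

    package koji

    // ...the same wrapper functions written against the new kolo/xmlrpc API...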
Follow the worker API so we standardise on one library. This simplifies
the code quite a bit.
No functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The API of kolo/xmlrpc changed after the commit that is shipped in
Fedora. Pin the vendored version to that and adjust the API usage.
This should make the RPM compile in both RHEL and Fedora.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This just translates between the OpenAPI spec and our internal
API.
This still lacks tests, but a follow-up commit adds integration tests.
`internal/kojiapi/openapi.gen.go` was automatically generated from
`internal/kojiapi/openapi.yml`. To regenerate use `go generate ./...`.
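The go:generate directive behind that command looks roughly like this (a sketch; the
exact oapi-codegen flags used in the repository may differ):

    //go:generate oapi-codegen -generate types,server,spec -package kojiapi -o openapi.gen.go openapi.yml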
Signed-off-by: Tom Gundersen <teg@jklm.no>
Write an openapi spec for the worker API and use `deepmap/oapi-codegen`
to generate scaffolding for the server-side using the `labstack/echo`
server.
Incidentally, echo by default returns errors in the same format that the
worker API has always used:
    { "message": "..." }
The API itself is unchanged to make this change easier to understand. It
will be changed to better suit our needs in future commits.
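For reference, hooking the generated scaffolding into echo is roughly the following
snippet; `api` stands for the generated package and `&workerServer{}` for a
hypothetical implementation of the generated ServerInterface:

    e := echo.New()
    // register the routes generated by oapi-codegen from the OpenAPI spec;
    // workerServer implements the generated ServerInterface
    api.RegisterHandlers(e, &workerServer{})
    // echo's default error handler already answers with {"message": "..."}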
Prior to this commit, we only had support for username/password authentication
in the koji integration. This wasn't particularly useful because this
auth type isn't used in any production instance.
This commit adds support for GSSAPI/Kerberos authentication.
The implementation uses the kerby library, which is a very lightweight wrapper
around the C GSSAPI library.
Also, the koji unit test and the run-koji-container script were modified
so that the GSSAPI auth is fully tested.
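Roughly, the Kerberos case swaps the plain HTTP transport for kerby's SPNEGO
round-tripper. A sketch assuming the github.com/ubccr/kerby/khttp package; the field
names, keytab path and principal are assumptions, not verified values:

    // use kerby's GSSAPI transport so every koji XML-RPC call is authenticated
    // via the keytab instead of username/password
    transport := khttp.Transport{
        KeyTab:    "/etc/osbuild-worker/client.keytab", // placeholder path
        Principal: "osbuild@EXAMPLE.COM",               // placeholder principal
    }
    client := &http.Client{Transport: &transport}
    // the koji session then performs its XML-RPC calls through this client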
All our downstream platforms now support Go 1.13:
RHEL 8.2: golang-1.13.4
Fedora 31: golang-1.13.14
There's no reason to stay on 1.12 anymore, therefore this commit bumps
the minimum required Go version to 1.13.
This does not yet actually upload the image, and it only supports empty
images. You need to place an empty file named <filename>, with a valid
extension (e.g., .qcow2), in /mnt/koji/work/<directory>/.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This commit makes the osbuild-image-tests binary perform the same set of tests
as the old test/run script.
Changes from test/run:
- qemu/nspawn are now killed gracefully: first, SIGTERM is sent, and if the
process doesn't exit before the timeout, SIGKILL is sent (see the sketch
below). I changed this because nspawn leaves some artifacts behind when
killed by SIGKILL.
- the unsharing of the network namespace now works differently because of
systemd issue #15079
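The graceful-kill behaviour from the first point is the usual SIGTERM-then-SIGKILL
pattern; a sketch of it (not the exact helper in osbuild-image-tests):

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // terminateGracefully asks the process to exit with SIGTERM and escalates to
    // SIGKILL if it has not exited before the timeout.
    func terminateGracefully(cmd *exec.Cmd, timeout time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        select {
        case err := <-done:
            return err // the process exited on its own after SIGTERM
        case <-time.After(timeout):
            // e.g. nspawn did not shut down in time; force it
            if err := cmd.Process.Kill(); err != nil {
                return err
            }
            return <-done
        }
    }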
This changes osbuild-composer's behavior to match lorax-composer when
encountering invalid versions. Instead of leaving them as-is, it will
return a BlueprintsError explaining the problem, e.g.:
"errors": [
{
"id": "BlueprintsError",
"msg": "Invalid 'version', must use Semantic Versioning: is not in dotted-tri format"
}
]
This is enforced on new blueprints (including the workspace). If a
previously stored blueprint has an invalid version and a new one is
pushed it will use the new version number instead of trying to bump the
invalid one.
This also moves the version bump logic into blueprint instead of store,
and adds an Initialize function that will make sure that the blueprint
has sane default values for any missing fields.
This includes tests for the Initialize and BumpVersion functions.
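For illustration, the bump logic amounts to roughly the following. The "is not in
dotted-tri format" wording in the error above is the message produced by the
coreos/go-semver library, so that library is assumed here; the real
Initialize/BumpVersion code may differ in details:

    import (
        "fmt"

        "github.com/coreos/go-semver/semver"
    )

    // bumpVersion validates the pushed version and bumps the patch level when the
    // blueprint is re-pushed with the same version as the stored one.
    func bumpVersion(stored, pushed string) (string, error) {
        v, err := semver.NewVersion(pushed)
        if err != nil {
            return "", fmt.Errorf("Invalid 'version', must use Semantic Versioning: %s", err)
        }
        // if the stored version is invalid or simply different, the pushed
        // (valid) version wins instead of trying to bump the old one
        if pushed != stored {
            return pushed, nil
        }
        v.BumpPatch()
        return v.String(), nil
    }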
RHEL requires the source code for dependencies to be included in the
srpm. The spec file already expects that, but we've only included the
vendored modules (i.e., the `vendor` directory) in the `rhel-8.2`
branch. Move vendoring to master, so that we can build RHEL packages
from it as well.
This commit is the result of running `go mod vendor`, which includes the
vendored sources and updates go.mod and go.sum files.
Fedora requires the opposite: dependencies should not be vendored. The
spec file already ignores the `vendor` directory by default.
This command-line tool uploads a file to S3, as a proof of concept.
All options are mandatory. Credentials are only read from the
command line and not from the environment or configuration files.
The next step is to add support for importing from S3 to EC2;
currently, the images we produce cannot be imported as-is, so this
requires more research.
To try this out: create an S3 bucket, get your credentials and
call the tool, passing any value as `key`. Note that if the key
already exists, it will be overwritten.
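The upload itself is the standard aws-sdk-go pattern; a sketch where all values come
from the mandatory command-line options (the function name is illustrative):

    import (
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    // upload sends a local file to S3 using only explicitly passed credentials,
    // never the environment or configuration files.
    func upload(filename, region, accessKeyID, secretAccessKey, bucket, key string) error {
        f, err := os.Open(filename)
        if err != nil {
            return err
        }
        defer f.Close()

        sess, err := session.NewSession(&aws.Config{
            Region:      aws.String(region),
            Credentials: credentials.NewStaticCredentials(accessKeyID, secretAccessKey, ""),
        })
        if err != nil {
            return err
        }

        // note: an existing object under the same key is silently overwritten
        _, err = s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
            Body:   f,
        })
        return err
    }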
Signed-off-by: Tom Gundersen <teg@jklm.no>
It uses the Azure SDK to connect to Azure Storage, creates a container there
and uploads the image. Unfortunately, the API for page blobs does not
include any thread pool for uploads, so I implemented one myself. The
performance can be tweaked using the upload chunk size and the number of
parallel threads.
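The hand-rolled pool is essentially the usual worker-pool pattern over fixed-size
chunks; a sketch where uploadChunk is a hypothetical stand-in for the actual page
blob upload call:

    import "sync"

    // uploadInChunks splits the image into chunks and uploads up to `threads` of
    // them in parallel; uploadChunk is a placeholder for the page blob upload.
    func uploadInChunks(data []byte, chunkSize, threads int,
        uploadChunk func(offset int64, chunk []byte) error) error {

        type job struct {
            offset int64
            chunk  []byte
        }

        // queue all chunks up front so the workers can simply drain the channel
        jobs := make(chan job, (len(data)+chunkSize-1)/chunkSize)
        for offset := 0; offset < len(data); offset += chunkSize {
            end := offset + chunkSize
            if end > len(data) {
                end = len(data)
            }
            jobs <- job{offset: int64(offset), chunk: data[offset:end]}
        }
        close(jobs)

        errs := make(chan error, threads)
        var wg sync.WaitGroup
        for i := 0; i < threads; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    if err := uploadChunk(j.offset, j.chunk); err != nil {
                        errs <- err
                        return
                    }
                }
            }()
        }
        wg.Wait()

        // report the first error, if any worker failed
        select {
        case err := <-errs:
            return err
        default:
            return nil
        }
    }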
The package is prepared to be refactored into a common module within the
internal package as soon as we agree on how these common packages for
image upload should look.
Add azure-blob-storage rpm package as a dependency
It didn't work for me using the `golang(package)` syntax. Using the
package name explicitly works.
These endpoints are similar in many ways, therefore just one commit. Their
functionality is basically the same as in lorax, except for error messages and
weird edge cases when handling trailing slashes.
Closes #64, closes #65
osbuild-composer now uses socket activation instead of hardcoded paths
in the code. osbuild-worker is an HTTP client and therefore only uses a
service unit. osbuild-worker must be started after the socket is
created. The osbuild-composer service requires osbuild-worker to run, because
without it no jobs can be started.
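In Go terms, picking up the socket from systemd looks roughly like this; a sketch
using the coreos/go-systemd activation package (older releases of that package take
an extra unsetEnv argument), with apiHandler as a placeholder:

    // receive the listening socket from systemd instead of opening a hardcoded
    // path ourselves; needs "github.com/coreos/go-systemd/activation"
    listeners, err := activation.Listeners()
    if err != nil || len(listeners) == 0 {
        log.Fatal("osbuild-composer must be started via its systemd socket unit")
    }
    // serve the API on the socket systemd handed over
    log.Fatal(http.Serve(listeners[0], apiHandler))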
osbuild-composer is executed as a regular user (the newly created
_osbuild-composer user), as opposed to the worker, which must run as root
in order to execute osbuild itself.