Remove all the internal packages that are now in the
github.com/osbuild/images package and vendor it.
A new function in internal/blueprint/ converts from an osbuild-composer
blueprint to an images blueprint. This is necessary while the blueprint
implementation is kept in both packages. In the future, the images
package will change the blueprint (and most likely rename it), and the
current blueprint will remain part of the osbuild-composer internals
and interface only. The Convert() function will be responsible for
converting the blueprint into the new configuration object.
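A minimal sketch of what that shim might look like, assuming the images
blueprint lives at github.com/osbuild/images/pkg/blueprint; the fields
copied here are illustrative, not exhaustive:
```
package blueprint

import (
	imgblueprint "github.com/osbuild/images/pkg/blueprint"
)

// Convert maps the composer-side blueprint onto the images-side type.
// The field set shown is illustrative, not exhaustive.
func Convert(bp Blueprint) imgblueprint.Blueprint {
	return imgblueprint.Blueprint{
		Name:        bp.Name,
		Description: bp.Description,
		Version:     bp.Version,
	}
}
```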
This adds a function, CleanupOldCacheDirs, that checks the dirs under
/var/cache/osbuild-composer/rpmmd/ and removes files and directories
that don't match the current list of supported distros.
This will clean up the cache from old releases as they are retired, and
will also clean up the old top-level cache directory structure after an
upgrade.
NOTE: This function does not return errors; any real problems it
encounters will also be caught by the cache initialization code and
handled there.
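A rough sketch of the idea, with hypothetical root and distros
parameters (the real function determines the supported distro list
itself):
```
import (
	"os"
	"path/filepath"
)

// CleanupOldCacheDirs (sketch, not the exact implementation): drop any
// cache entry that does not belong to a currently supported distro.
func CleanupOldCacheDirs(root string, distros []string) {
	supported := make(map[string]bool, len(distros))
	for _, d := range distros {
		supported[d] = true
	}
	entries, err := os.ReadDir(root)
	if err != nil {
		return // intentionally silent, see NOTE above
	}
	for _, e := range entries {
		if !supported[e.Name()] {
			// Covers both retired releases and the old top-level
			// layout left behind by an upgrade.
			_ = os.RemoveAll(filepath.Join(root, e.Name()))
		}
	}
}
```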
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are simple name changes; the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns []os.DirEntry, but the IsDir and Name methods work the
same. The difference is that the FileInfo must be retrieved with the
Info() method, which can also return an error.
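A small example of the new pattern (listSizes is a made-up helper for
illustration):
```
import (
	"fmt"
	"os"
)

func listSizes(dir string) error {
	entries, err := os.ReadDir(dir) // was: ioutil.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		// IsDir and Name work the same as on the old FileInfo...
		if e.IsDir() {
			continue
		}
		// ...but the FileInfo now comes from Info(), which can fail
		// (e.g. if the file vanished after ReadDir).
		info, err := e.Info()
		if err != nil {
			return err
		}
		fmt.Println(e.Name(), info.Size())
	}
	return nil
}
```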
These were identified by running:
golangci-lint run --build-tags=integration ./...
This satisfies the linter complaint about a potential Slowloris attack,
where headers are read slowly in an attempt to DoS the server.
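The usual shape of the fix is to replace a bare http.ListenAndServe
with an http.Server that sets ReadHeaderTimeout; a sketch, with an
illustrative timeout value:
```
import (
	"net/http"
	"time"
)

// serve replaces a bare http.ListenAndServe(addr, h) call; the 5s
// value is illustrative, not the one used in the codebase.
func serve(addr string, h http.Handler) error {
	srv := &http.Server{
		Addr:    addr,
		Handler: h,
		// Bound how long a client may take to send its request
		// headers, which is what the Slowloris lint checks for.
		ReadHeaderTimeout: 5 * time.Second,
	}
	return srv.ListenAndServe()
}
```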
The uses of ListenAndServe are only for testing purposes and are not
run in the production server, so ignore the lint errors in
osbuild-mock-openid-provider.
For each of the supported distros, start a goroutine to depsolve
'filesystem', which preloads the metadata and makes subsequent
responses faster.
This is safe to do without limits because we only support a limited
number of distros, and without additional locking because it is the
same as hitting the API with multiple depsolve requests at the same
time.
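A sketch of the preload, with hypothetical helper names (preloadAll and
depsolveFilesystem are stand-ins for the real composer identifiers):
```
import "log"

// preloadAll warms the rpmmd metadata cache, one goroutine per distro.
func preloadAll(supportedDistros []string, depsolveFilesystem func(string) error) {
	for _, name := range supportedDistros {
		go func(name string) {
			// Behaves exactly like a depsolve request hitting the
			// API, so no additional locking is required.
			if err := depsolveFilesystem(name); err != nil {
				log.Printf("metadata preload for %q failed: %v", name, err)
			}
		}(name)
	}
}
```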
We no longer use it, so let's remove it. If you are wondering what to
use instead, use the Cloud API. It supports everything the Koji API
supported and more.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
An unconfigured dnfjson.BaseSolver is attached to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver is used
to create an initialised (configured) solver with the platform
variables (module platform ID, release version, and arch) via the
NewWithConfig() method before running a Depsolve() or FetchMetadata().
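A sketch of the new call pattern; NewBaseSolver and the exact
Depsolve() signature shown here are assumptions based on the
description above, not copied from the code:
```
func depsolveFor(cacheDir, modulePlatformID, releaseVer, arch string,
	packageSets []rpmmd.PackageSet) ([]rpmmd.PackageSpec, error) {
	// The BaseSolver loads repo credentials from the system and
	// carries the cache directory, like the RPMMD field used to.
	base := dnfjson.NewBaseSolver(cacheDir)
	// Bind the platform variables, then depsolve.
	solver := base.NewWithConfig(modulePlatformID, releaseVer, arch)
	return solver.Depsolve(packageSets)
}
```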
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Since we now use the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
This value is set in the worker config. In the future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
Add the `gce-rhui` image type intended for Google Compute Engine. The image
uses Google's RHUI infrastructure to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
When composer exits, it doesn't wait for the manifest generation
goroutines to finish. This is generally bad practice, so let's
introduce a bit of syncing and a new Shutdown method to prevent this.
This also prevents the manifest generation goroutine from creating
weird states when interrupted on a random line of code.
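A minimal sketch of the syncing pattern, with illustrative names:
```
import "sync"

// Composer here is illustrative; the real type lives in the daemon.
type Composer struct {
	goroutinesWG sync.WaitGroup
}

// generateManifest runs the manifest job on a tracked goroutine.
func (c *Composer) generateManifest(job func()) {
	c.goroutinesWG.Add(1)
	go func() {
		defer c.goroutinesWG.Done()
		job()
	}()
}

// Shutdown blocks until all in-flight manifest generation finishes,
// instead of letting exit interrupt it on a random line of code.
func (c *Composer) Shutdown() {
	c.goroutinesWG.Wait()
}
```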
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Call `Shutdown()` on all http servers. This means we will finish
processing any pending requests (including depsolving), but we will not
listen for new ones.
In particular, we will not answer the readiness probe, so no new
traffic will be routed to this container.
Once all pending requests have been handled, composer will shut down
gracefully and the liveness probe will return failure.
Note that in order for this to work correctly, no request should ever
take longer than the shutdown timeout (30s by default).
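A sketch of the graceful-shutdown call itself, with the 30s default
from above modeled as a context timeout:
```
import (
	"context"
	"log"
	"net/http"
	"time"
)

func shutdownGracefully(srv *http.Server) {
	// 30s mirrors the default shutdown timeout mentioned above.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Shutdown stops listening (so the readiness probe fails and no
	// new traffic is routed here) but lets pending requests finish.
	if err := srv.Shutdown(ctx); err != nil {
		// Reached when a request outlives the shutdown timeout.
		log.Printf("forced shutdown: %v", err)
	}
}
```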
This commit implements multi-tenancy. A tenant is defined based on a value
from JWT claims. The key of this value must be specified in the configuration
file. This allows us to pick different values when using multiple SSOs.
Let me explain in more depth how this works:
The Cloud API gets a new compose request. First, it extracts a tenant
name from the JWT claims. The considered claims are configured as an
array in cloud_api.jwt.tenant_provider_fields in composer's config
file. The channel name for all jobs belonging to this compose is then
formed as `"org-" + tenant`.
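A sketch of the channel derivation, with the claim lookup simplified
(channelForJob is a made-up name; the real code reads the claim names
from cloud_api.jwt.tenant_provider_fields):
```
import "fmt"

func channelForJob(claims map[string]interface{}, tenantFields []string) (string, error) {
	// Try the configured claims in order; the first non-empty string
	// value becomes the tenant.
	for _, field := range tenantFields {
		if tenant, ok := claims[field].(string); ok && tenant != "" {
			return "org-" + tenant, nil
		}
	}
	return "", fmt.Errorf("no tenant claim found")
}
```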
Why is the channel prefixed by "org-"? To give us options in the future. I can
imagine the request having a channel override. This basically means that
multiple tenants can share a channel. A real use-case for this is multiple
Fedora projects sharing one pool of workers.
Why does this commit add a whole new cloud_api section to the config?
Because the current config is a mess and we should stop adding new
stuff to the koji section. As the Koji API is basically deprecated, we
will need to remove it soon anyway.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add support for building images for the Azure marketplace: add a
new image type, "azure-rhui", that can be used to build images
tailored to it.
Add two sample manifests, for 8.5 and 8.6, but note that even the
8.5 one uses the 8.6 distro definitions. Also, no image-info is
included, since `image-info` cannot (yet) handle LVM setups and
the Azure marketplace images use an LVM setup.
We may need to use several SSO providers, so extend our
configuration to allow that.
Based on a PoC from Sanne:
```
package main

import (
	"log"
	"net/http"

	"github.com/openshift-online/ocm-sdk-go/authentication"
	"github.com/openshift-online/ocm-sdk-go/logging"
)

// H is a trivial handler that only runs once authentication succeeds.
type H struct{}

func (h *H) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	log.Println("HURRAY")
}

func main() {
	logBuilder := logging.NewGoLoggerBuilder()
	logger, err := logBuilder.Build()
	if err != nil {
		panic(err)
	}
	// Calling KeysURL twice registers the keys of two SSO providers,
	// so tokens signed by either one are accepted.
	aH, err := authentication.NewHandler().
		KeysURL("https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/certs").
		KeysURL("https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/certs").
		Logger(logger).
		Next(&H{}).
		Build()
	if err != nil {
		panic(err)
	}
	log.Fatal(http.ListenAndServe(":8080", aH))
}
```
When backed by a DB, composer has no need of a queue directory.
This also addresses "Error moving artifacts for job" logging noise.
Signed-off-by: sanne <sanne.raymaekers@gmail.com>
The service is started via systemd activation sockets.
The service serves HTTP POST requests; the same JSON as before is
expected as the request body, and the same JSON as before is sent back
as the response.
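A sketch of how the socket-activated server might be wired up, assuming
github.com/coreos/go-systemd/activation (the actual service may
differ):
```
import (
	"log"
	"net/http"

	"github.com/coreos/go-systemd/activation"
)

// serveActivated serves the existing POST handler on the socket
// passed in by systemd instead of opening its own listener.
func serveActivated(handler http.Handler) {
	listeners, err := activation.Listeners()
	if err != nil || len(listeners) != 1 {
		log.Fatalf("expected exactly one activation socket, got %d: %v", len(listeners), err)
	}
	log.Fatal(http.Serve(listeners[0], handler))
}
```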
Mistakenly removed in 4577ac0717. Composer
itself does the authentication, not the gateway; therefore, we do need
the auth exclude.
Added a comment to explain why it's attached to the api socket and not a
separate listener.
This is backwards compatible as long as the timeout is 0 (never
time out), which is the default.
In the case of the dbjobqueue, the underlying timeout is due to
context.Canceled, context.DeadlineExceeded, or a net.Error with
Timeout() returning true. For the fsjobqueue, only the first two are
considered.
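A sketch of how such a timeout check can be written (isTimeout is
illustrative, not the actual jobqueue code):
```
import (
	"context"
	"errors"
	"net"
)

func isTimeout(err error) bool {
	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return true
	}
	// dbjobqueue additionally treats network timeouts as timeouts;
	// fsjobqueue stops at the two context errors above.
	var netErr net.Error
	return errors.As(err, &netErr) && netErr.Timeout()
}
```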
Because there are only a few combinations of upload types and image
types that make sense, enforce correct combinations by eliminating
upload types.
Fixes #1775
The image is not available via the Weldr API, because it requires RHUI
client RPMs.
The content and configuration are based on the RHEL-8.6 EC2 SAP image,
since there is no definition for the RHEL-9 SAP image yet.
Signed-off-by: Tomas Hozza <thozza@redhat.com>