The previous implementation of fsjobqueue was amazing, but it had its
drawbacks:
- dequeueing can be done only based on a job type
- it's limited to 100 jobs per job type
As we soon want to be able to dequeue by other criteria as well (job
channel), we need to refactor the queue.
The new implementation is more naive but also more flexible. It
basically works like the dbjobqueue: dequeueing goroutines listen for
newly added jobs. When a job is added, a signal is sent to all of them,
and they all inspect all pending jobs and dequeue the ones that match
their needs. Goroutines that don't find a suitable job wait for the
next signal; see the sketch below.
This implementation is certainly slower, because every time a new job
is added to the queue, all dequeueing goroutines have to iterate over
all pending jobs. I think that's fine, because fsjobqueue is not
recommended for composer instances with heavy load.
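A minimal sketch of the scheme; the names and the sync.Cond-based
signalling are assumptions, not the actual implementation (which also
persists jobs on the filesystem):
```
package fsjobqueue

import "sync"

// Job is a simplified stand-in for the queue's job representation.
type Job struct {
	Type    string
	Channel string
}

type queue struct {
	mu      sync.Mutex
	cond    *sync.Cond // signalled whenever a new job is added
	pending []Job
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

// Enqueue adds a job and wakes up all dequeueing goroutines.
func (q *queue) Enqueue(job Job) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.pending = append(q.pending, job)
	q.cond.Broadcast()
}

// Dequeue blocks until a job matching the caller's criteria (job type,
// and soon job channel) becomes available.
func (q *queue) Dequeue(match func(Job) bool) Job {
	q.mu.Lock()
	defer q.mu.Unlock()
	for {
		for i, job := range q.pending {
			if match(job) {
				q.pending = append(q.pending[:i], q.pending[i+1:]...)
				return job
			}
		}
		// No suitable job was found; wait for the next signal.
		q.cond.Wait()
	}
}
```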
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
In the greenboot 0.13 release we updated the packaging because of the
increase in new tests; it no longer made sense to keep the packaging so
granular.
Signed-off-by: Peter Robinson <pbrobinson@gmail.com>
When the server is restarted, the blueprint changes, which are only
held in memory, are lost. This adds a check for missing changes and
returns an error.
The test is also adjusted for the new error.
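A sketch of what such a check can look like; the store's field and
method names here are hypothetical:
```
// Hypothetical sketch; the real store internals may differ.
func (s *Store) Change(name, commit string) (blueprint.Change, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	change, ok := s.blueprintsChanges[name][commit]
	if !ok {
		// The change is missing, most likely because the server was
		// restarted and changes are held only in memory.
		return blueprint.Change{}, fmt.Errorf("no change found for commit %s of blueprint %q", commit, name)
	}
	return change, nil
}
```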
Related: rhbz#1922845
The blueprint name should never be empty, as it can cause other
problems, e.g. with the blueprints list results. Return an error if an
empty name is pushed to the store, either as a blueprint commit or as a
blueprint workspace.
Also adjust the new test for the new error.
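For illustration, the validation could look like this (function name
and error text are assumptions):
```
// Hypothetical sketch: both the commit and workspace push paths reject
// an empty blueprint name before touching the store.
func (s *Store) PushBlueprint(bp blueprint.Blueprint, commitMsg string) error {
	if bp.Name == "" {
		return errors.New("empty blueprint name not allowed")
	}
	// ... record the blueprint and its change as before ...
	return nil
}
```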
Related: rhbz#1922845
There is a problem with blueprint changes: once the server is
restarted, the previous changes are all lost because they are not
serialized to disk.
This adds test fixture support so that new tests can be added before
fixing the problem. It adds store.FixtureOldChanges with blueprint
changes and empty blueprints.
Related: rhbz#1922845
- Any repository without package_sets is added to the general `Repos`
field of the DepsolveJob, just like before.
- Repositories with package_sets are added to the `PackageSetsRepos`
map, indexed by the package set names.
- Repositories defined in the customizations as `PayloadRepositories`
are considered to be associated only with the `PayloadPackageSets`
names from the image type and are added to the `PackageSetsRepos`
under the payload sets.
The repository collection and the conversion of repository structs
(from Repository to RepoConfig) have been moved to a separate function.
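A sketch of the distribution rules above; repoToRepoConfig stands in
for the conversion function, and the other names are assumptions as
well:
```
// Hypothetical sketch of the rules described above.
func collectRepos(job *DepsolveJob, repos, payloadRepos []Repository, payloadSets []string) {
	for _, repo := range repos {
		cfg := repoToRepoConfig(repo)
		if len(repo.PackageSets) == 0 {
			// No package_sets: a general repo, used everywhere, as before.
			job.Repos = append(job.Repos, cfg)
			continue
		}
		for _, name := range repo.PackageSets {
			job.PackageSetsRepos[name] = append(job.PackageSetsRepos[name], cfg)
		}
	}
	// Payload repositories from the customizations apply only to the
	// image type's payload package sets.
	for _, repo := range payloadRepos {
		cfg := repoToRepoConfig(repo)
		for _, name := range payloadSets {
			job.PackageSetsRepos[name] = append(job.PackageSetsRepos[name], cfg)
		}
	}
}
```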
Tags is an array of strings that associates repositories with package
sets. A repository tagged with a package set name will be used only for
the named package sets.
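For example, a repository tagged for a single package set might be
defined like this (the field names are assumptions):
```
// Hypothetical repository definition: because of the tag, this
// repository is consulted only when depsolving the "build" package set.
repo := Repository{
	BaseURL: "https://example.com/custom-build-tools",
	Tags:    []string{"build"},
}
```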
We already have two fields named `Repos` and `PackageSets`, so rename
`PackageSetsRepositories` to `PackageSetsRepos` for consistency. The
struct is for internal use only, so the rename has no impact as long as
the serialised name (the json tag) stays the same.
It's also shorter.
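A sketch of the struct after the rename; the exact tags shown are
assumptions, the point is only that they stay unchanged:
```
// Only the Go identifier changes; the wire format is untouched because
// the json tag keeps the old serialised name.
type DepsolveJob struct {
	Repos            []RepoConfig            `json:"repos"`
	PackageSets      map[string]PackageSet   `json:"package_sets"`
	PackageSetsRepos map[string][]RepoConfig `json:"package_sets_repositories,omitempty"`
}
```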
Add a docstring to the struct that explains the arguments in the same
way as they are described for the `depsolve()` function.
Also change the name of the argument in the internal `depsolve()`
function for the same reason.
When deploying an ostree commit, specify a remote, currently hard-coded
to `rhel-edge`, so that updates work automatically if they are served
from the same location the initial commit is pulled from.
NB: now that the remote is specified in the raw image, remove the
corresponding bits from the tests.
Signed-off-by: Antonio Murdaca <runcom@linux.com>
For the brew use-case, we also need to build AWS images containing
RHUI. This commit thus adds them.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Currently the job metrics are namespaced with the composer
subsystem, i.e. `composer_worker`. Since we plan to split
the components to their own namespaces in app interface,
the worker subsystem should be split too.
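With the Prometheus client this is just a matter of the
Namespace/Subsystem fields; a sketch with assumed names:
```
// Hypothetical metric definition: Subsystem now identifies the worker
// on its own instead of reusing the composer subsystem, so the metric
// becomes e.g. osbuild_worker_pending_jobs rather than
// osbuild_composer_worker_pending_jobs.
var pendingJobs = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "osbuild", // assumption: the actual namespace may differ
	Subsystem: "worker",
	Name:      "pending_jobs",
	Help:      "Number of jobs waiting to be dequeued.",
})
```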
Fedora 33 is already EOL, therefore there is no point in supporting
image builds for it. Drop F33 from the distroregistry list and remove
F33 repositories definition.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Setting the `crashkernel` option to the appropriate value is now done
by the `kexec-tools` package when it is installed and whenever a new
kernel is installed.
Regenerate relevant image test cases.
Fix #1819
Fix rhbz#2006692
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This commit introduces the collection of error
metrics since it is now possible to differentiate
between internal errors and user input errors.
Additionally, the error status is reported for
job duration metrics.
For simplicity, the collection of the job metrics was carried out in
the job queue, and it was only being done in the dbqueue, not in the
fsqueue.
This PR refactors the metric collection and moves the job metrics to
the worker server by adding a wrapper function for enqueueing jobs, so
that the metrics only have to be recorded in one place when queueing a
job; see the sketch below.
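A sketch of the wrapper; the server, helper and metric names are
assumptions:
```
// Assumed counter vec; the real metric name and labels may differ.
var enqueuedJobs = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "osbuild",
		Subsystem: "worker",
		Name:      "enqueued_jobs_total",
		Help:      "Total number of jobs enqueued, by job type.",
	},
	[]string{"type"},
)

// All jobs are enqueued through this single wrapper, so the metric is
// recorded in exactly one place regardless of the underlying queue
// (dbjobqueue or fsjobqueue).
func (s *Server) enqueue(jobType string, job interface{}, dependencies []uuid.UUID) (uuid.UUID, error) {
	id, err := s.jobs.Enqueue(jobType, job, dependencies)
	if err == nil {
		enqueuedJobs.WithLabelValues(jobType).Inc()
	}
	return id, err
}
```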
Refactor the current metric collection to make use
of re-usable functions, since some of the same queries
are repeated. This will also make it easier to move
the collection of metrics from the job queue.
The internal GCP package used `pkg.go.dev/google.golang.org/api` [1] to
interact with the Compute Engine API. Modify the package to use the new
and idiomatic `pkg.go.dev/cloud.google.com/go` [2] library for
interacting with the Compute Engine API. The new library has already
been used to interact with the Cloudbuild and Storage APIs. It was not
used for Compute Engine from the beginning because, at that time, it
didn't support Compute Engine.
Update go.mod and vendored packages.
[1] https://github.com/googleapis/google-api-go-client
[2] https://github.com/googleapis/google-cloud-go
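For illustration, listing Compute Engine images with the new library
looks roughly like this (a minimal sketch, not the project's actual
wrapper; "my-project" is a placeholder):
```
package main

import (
	"context"
	"fmt"
	"log"

	compute "cloud.google.com/go/compute/apiv1"
	"google.golang.org/api/iterator"
	computepb "google.golang.org/genproto/googleapis/cloud/compute/v1"
)

func main() {
	ctx := context.Background()

	// The REST client authenticates via Application Default Credentials.
	client, err := compute.NewImagesRESTClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	it := client.List(ctx, &computepb.ListImagesRequest{Project: "my-project"})
	for {
		image, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(image.GetName())
	}
}
```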
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Disable logging in via password authentication, since this is an
official Amazon marketplace requirement:
"Linux-based AMIs must not allow SSH password authentication.
Disable password authentication via your sshd_config file by
setting PasswordAuthentication to NO."
Section "Security policies" from
https://docs.aws.amazon.com/marketplace/latest/userguide/product-and-ami-policies.html
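In practice the change boils down to this sshd_config directive (shown
here for illustration):
```
PasswordAuthentication no
```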
Clean up some implementation aspects of the Fedora distro definition:
- Do not have a default Fedora distro version and use `fedora` as the
package name in all places that use it, instead of `fedora33`.
- Fix bugs where wrong (Fedora 33) values were returned by `OSTreeRef()`
and `Releasever()` for newer Fedora releases.
- Test Fedora 35 in package unit tests.
- Add unit test for `OSTreeRef()` method.
- Use architecture name constants from `distro` package, instead of
string literals.
Fix #1802
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The QEMU assembler in the Fedora distro definition for UEFI systems
used a longer-than-allowed label for the VFAT filesystem of the EFI
System Partition. The maximum allowed label length is 11 characters.
This worked before with dosfstools, but in 2018 they added label
validation [1]. This change got into the v4.2 release of dosfstools,
released in Jan 2021, and consequently, since F34, this new version of
dosfstools is present in the Fedora repositories.
[1] ca54953476
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This is for now only supported for koji builds.
A lot of code moved around for this, but functionally not much changed.
PostCompose() now only parses the input, and the queueing of all the
jobs has been factored out into separate functions. PostCompose() is
mostly agnostic to koji/non-koji requests.
Replace Job() and JobStatus() with typesafe versions, and introduce JobType()
for the rare instances where we don't know the type up front.
Additionally, catch a few more error cases:
- if OSBuildResult is nil, then we failed to invoke osbuild
- make sure the same JobResult handling is done for osbuild-koji as for
osbuild
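A sketch of the typesafe shape, with hypothetical signatures:
```
// Hypothetical sketch: the accessor unmarshals the raw job arguments
// into a concrete type instead of handing callers a json.RawMessage.
func (s *Server) OSBuildJob(id uuid.UUID) (*OSBuildJob, error) {
	var job OSBuildJob
	// jobArgs is an assumed internal helper wrapping the job queue.
	if err := s.jobArgs(id, &job); err != nil {
		return nil, err
	}
	return &job, nil
}
```
JobType() remains available for callers that genuinely don't know the
type up front.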
This only extends the API, the backend can still only deal with composes of a single build.
I aimed to keep the API practically backwards compatible, i.e., no current consumer of it should notice the change. I hope I didn't mess that up.
fixup: image statuses
In addition to the individual image statuses, have an overall status
that captures success or failure of the compose as a whole.
This is not as fine-grained, and only distinguishes between "pending",
"failure" and "success".
It also captures jobs other than the image builds, which is relevant
for koji composes, which consist of koji-init and koji-finalize in
addition to the build jobs.
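A sketch of the coarse-grained rule; the jobStatus fields are
assumptions:
```
// Hypothetical sketch: any failed job fails the compose as a whole; it
// is a success only once every job, including koji-init and
// koji-finalize, has finished successfully.
func composeStatus(jobs []jobStatus) string {
	status := "success"
	for _, j := range jobs {
		switch {
		case j.failed:
			return "failure"
		case !j.finished:
			status = "pending"
		}
	}
	return status
}
```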
For now, upload requests are required if and only if we are not using
koji. When using the koji integration, the produced artifacts are
uploaded to koji only. In the future we may also want to support
uploading to the cloud providers.
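A sketch of the validation; the request field names are assumptions:
```
// Hypothetical sketch: an upload request must be present exactly when
// the compose does not target koji.
if request.Koji != nil && imageRequest.UploadRequest != nil {
	return echo.NewHTTPError(http.StatusBadRequest, "upload requests are not supported for koji composes")
}
if request.Koji == nil && imageRequest.UploadRequest == nil {
	return echo.NewHTTPError(http.StatusBadRequest, "an upload request is required when not using koji")
}
```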
Extend the compose endpoints to have minimal koji support.
This is intended to replace the current koji API so that it
can be consumed through api.openshift.com.
We may need to use several SSO providers, so extend our
configuration to allow that.
Based on PoC from Sanne:
```
package main

import (
	"log"
	"net/http"

	"github.com/openshift-online/ocm-sdk-go/authentication"
	"github.com/openshift-online/ocm-sdk-go/logging"
)

// H is a trivial handler used to verify that authenticated requests
// get through.
type H struct{}

func (h *H) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	log.Println("HURRAY")
}

func main() {
	logBuilder := logging.NewGoLoggerBuilder()
	logger, err := logBuilder.Build()
	if err != nil {
		panic(err)
	}
	// Calling KeysURL twice registers the certificates of both SSO
	// providers, so tokens issued by either one are accepted.
	aH, err := authentication.NewHandler().
		KeysURL("https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/certs").
		KeysURL("https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/certs").
		Logger(logger).
		Next(&H{}).
		Build()
	if err != nil {
		panic(err)
	}
	log.Fatal(http.ListenAndServe(":8080", aH))
}
```