go.mod: update osbuild/images to v0.151.0

tag v0.149.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.149.0

----------------
  * Update dependencies 2025-05-25 (osbuild/images#1560)
    * Author: SchutzBot, Reviewers: Simon de Vlieger, Tomáš Hozza
  * Update osbuild dependency commit ID to latest (osbuild/images#1522)
    * Author: SchutzBot, Reviewers: Simon de Vlieger, Tomáš Hozza
  * Update snapshots to 20250515 (osbuild/images#1524)
    * Author: SchutzBot, Reviewers: Simon de Vlieger, Tomáš Hozza
  * `vagrant-libvirt` implementation (HMS-6116) (osbuild/images#1548)
    * Author: Simon de Vlieger, Reviewers: Achilleas Koutsou, Tomáš Hozza
  * fedora: tweaks after all imageTypes are YAML (osbuild/images#1518)
    * Author: Michael Vogt, Reviewers: Simon de Vlieger, Tomáš Hozza
  * gha: do not break gobump output (osbuild/images#1561)
    * Author: Lukáš Zapletal, Reviewers: Simon de Vlieger, Tomáš Hozza
  * repositories: AlmaLinux 10 (osbuild/images#1567)
    * Author: Simon de Vlieger, Reviewers: Achilleas Koutsou, Lukáš Zapletal, Neal Gompa (ニール・ゴンパ)
  * vagrant: image config for default vagrant user (HMS-6116) (osbuild/images#1565)
    * Author: Simon de Vlieger, Reviewers: Achilleas Koutsou, Michael Vogt

— Somewhere on the Internet, 2025-05-27

---

tag v0.150.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.150.0

----------------
  * Replace hardcoded kickstart %post scripts with new stage options and bootc switch with custom kickstart content (HMS-6051) (osbuild/images#1527)
    * Author: Achilleas Koutsou, Reviewers: Simon de Vlieger, Tomáš Hozza
  * test: install yamllint for tests (osbuild/images#1572)
    * Author: Achilleas Koutsou, Reviewers: Lukáš Zapletal, Simon de Vlieger, Tomáš Hozza

— Somewhere on the Internet, 2025-06-02

---

tag v0.151.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.151.0

----------------
  * Introduce new Azure CVM image type (HMS-5636) (osbuild/images#1318)
    * Author: Achilleas Koutsou, Reviewers: Nobody
  * Many: support using string with unit for byte-sized partitioning fields in YAML distro definitions (osbuild/images#1579)
    * Author: Tomáš Hozza, Reviewers: Achilleas Koutsou, Brian C. Lane
  * Update osbuild dependency commit ID to latest (osbuild/images#1587)
    * Author: SchutzBot, Reviewers: Achilleas Koutsou, Tomáš Hozza
  * Update snapshots to 20250601 (osbuild/images#1573)
    * Author: SchutzBot, Reviewers: Achilleas Koutsou, Lukáš Zapletal
  * bootc: Make installed rootfs configurable (osbuild/images#1555)
    * Author: Mbarak Bujra, Reviewers: Michael Vogt, Tomáš Hozza
  * distro: create new ImageConfig.DNFConfig (osbuild/images#1583)
    * Author: Michael Vogt, Reviewers: Simon de Vlieger, Tomáš Hozza
  * distro: make "fedora" a "generic" distro (osbuild/images#1563)
    * Author: Michael Vogt, Reviewers: Nobody
  * image: If using a separate build container, copy bootc customization to it (osbuild/images#1571)
    * Author: Alexander Larsson, Reviewers: Achilleas Koutsou, Tomáš Hozza
  * manifest/ostree: explicitly include shadow-utils (osbuild/images#1585)
    * Author: Simon de Vlieger, Reviewers: Achilleas Koutsou, Michael Vogt
  * osbuild/tar: explicit compression (HMS-8573, HMS-6116) (osbuild/images#1581)
    * Author: Simon de Vlieger, Reviewers: Achilleas Koutsou, Tomáš Hozza
  * tests: bump fedora versions to 41 (osbuild/images#1438)
    * Author: Lukáš Zapletal, Reviewers: Brian C. Lane, Michael Vogt

— Somewhere on the Internet, 2025-06-09

---
Author: Achilleas Koutsou, 2025-06-10 15:43:18 +02:00 (committed by Gianluca Zuccarelli)
parent cedc351bbd
commit deccaf9548
82 changed files with 2844 additions and 1175 deletions

go.mod (8 changed lines)
@@ -46,7 +46,7 @@ require (
github.com/openshift-online/ocm-sdk-go v0.1.438
github.com/oracle/oci-go-sdk/v54 v54.0.0
github.com/osbuild/blueprint v1.6.0
-github.com/osbuild/images v0.148.0
+github.com/osbuild/images v0.151.0
github.com/osbuild/osbuild-composer/pkg/splunk_logger v0.0.0-20240814102216-0239db53236d
github.com/osbuild/pulp-client v0.1.0
github.com/prometheus/client_golang v1.20.5
@@ -114,11 +114,11 @@ require (
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.16.3 // indirect
github.com/containerd/typeurl/v2 v2.2.3 // indirect
-github.com/containers/common v0.62.0 // indirect
+github.com/containers/common v0.62.3 // indirect
-github.com/containers/image/v5 v5.34.0 // indirect
+github.com/containers/image/v5 v5.34.3 // indirect
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01 // indirect
github.com/containers/ocicrypt v1.2.1 // indirect
-github.com/containers/storage v1.57.1 // indirect
+github.com/containers/storage v1.57.2 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20231217050601-ba74d44ecf5f // indirect
github.com/cyphar/filepath-securejoin v0.3.6 // indirect

go.sum (16 changed lines)
@@ -180,16 +180,16 @@ github.com/containerd/stargz-snapshotter/estargz v0.16.3 h1:7evrXtoh1mSbGj/pfRcc
github.com/containerd/stargz-snapshotter/estargz v0.16.3/go.mod h1:uyr4BfYfOj3G9WBVE8cOlQmXAbPN9VEQpBBeJIuOipU=
github.com/containerd/typeurl/v2 v2.2.3 h1:yNA/94zxWdvYACdYO8zofhrTVuQY73fFU1y++dYSw40=
github.com/containerd/typeurl/v2 v2.2.3/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=
-github.com/containers/common v0.62.0 h1:Sl9WE5h7Y/F3bejrMAA4teP1EcY9ygqJmW4iwSloZ10=
+github.com/containers/common v0.62.3 h1:aOGryqXfW6aKBbHbqOveH7zB+ihavUN03X/2pUSvWFI=
-github.com/containers/common v0.62.0/go.mod h1:Yec+z8mrSq4rydHofrnDCBqAcNA/BGrSg1kfFUL6F6s=
+github.com/containers/common v0.62.3/go.mod h1:3R8kDox2prC9uj/a2hmXj/YjZz5sBEUNrcDiw51S0Lo=
-github.com/containers/image/v5 v5.34.0 h1:HPqQaDUsox/3mC1pbOyLAIQEp0JhQqiUZ+6JiFIZLDI=
+github.com/containers/image/v5 v5.34.3 h1:/cMgfyA4Y7ILH7nzWP/kqpkE5Df35Ek4bp5ZPvJOVmI=
-github.com/containers/image/v5 v5.34.0/go.mod h1:/WnvUSEfdqC/ahMRd4YJDBLrpYWkGl018rB77iB3FDo=
+github.com/containers/image/v5 v5.34.3/go.mod h1:MG++slvQSZVq5ejAcLdu4APGsKGMb0YHHnAo7X28fdE=
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01 h1:Qzk5C6cYglewc+UyGf6lc8Mj2UaPTHy/iF2De0/77CA=
github.com/containers/libtrust v0.0.0-20230121012942-c1716e8a8d01/go.mod h1:9rfv8iPl1ZP7aqh9YA68wnZv2NUDbXdcdPHVz0pFbPY=
github.com/containers/ocicrypt v1.2.1 h1:0qIOTT9DoYwcKmxSt8QJt+VzMY18onl9jUXsxpVhSmM=
github.com/containers/ocicrypt v1.2.1/go.mod h1:aD0AAqfMp0MtwqWgHM1bUwe1anx0VazI108CRrSKINQ=
-github.com/containers/storage v1.57.1 h1:hKPoFsuBcB3qTzBxa4IFpZMRzUuL5Xhv/BE44W0XHx8=
+github.com/containers/storage v1.57.2 h1:2roCtTyE9pzIaBDHibK72DTnYkPmwWaq5uXxZdaWK4U=
-github.com/containers/storage v1.57.1/go.mod h1:i/Hb4lu7YgFr9G0K6BMjqW0BLJO1sFsnWQwj2UoWCUM=
+github.com/containers/storage v1.57.2/go.mod h1:i/Hb4lu7YgFr9G0K6BMjqW0BLJO1sFsnWQwj2UoWCUM=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
@@ -579,8 +579,8 @@ github.com/oracle/oci-go-sdk/v54 v54.0.0 h1:CDLjeSejv2aDpElAJrhKpi6zvT/zhZCZuXch
github.com/oracle/oci-go-sdk/v54 v54.0.0/go.mod h1:+t+yvcFGVp+3ZnztnyxqXfQDsMlq8U25faBLa+mqCMc=
github.com/osbuild/blueprint v1.6.0 h1:HUV1w/dMxpgqOgVtHhfTZE3zRmWQkuW/qTfx9smKImI=
github.com/osbuild/blueprint v1.6.0/go.mod h1:0d3dlY8aSJ6jM6NHwBmJFF1VIySsp/GsDpcJQ0yrOqM=
-github.com/osbuild/images v0.148.0 h1:jRLpl/z50FF7Vylio7oD7GddKftiqf2RZZV1h5U8XhI=
+github.com/osbuild/images v0.151.0 h1:r+8xbz0FGyUskl996eObrgymEqgLWwhtVa23Pj0Zp8U=
-github.com/osbuild/images v0.148.0/go.mod h1:jY21PhkxIozII4M0xCqZL7poLtFwDJlEGj88pb3lalQ=
+github.com/osbuild/images v0.151.0/go.mod h1:ZiEO1WWKuRvPSaiXsmqn+7krAIZ+qXiiOfBQed0H7lY=
github.com/osbuild/osbuild-composer/pkg/splunk_logger v0.0.0-20240814102216-0239db53236d h1:r9BFPDv0uuA9k1947Jybcxs36c/pTywWS1gjeizvtcQ=
github.com/osbuild/osbuild-composer/pkg/splunk_logger v0.0.0-20240814102216-0239db53236d/go.mod h1:zR1iu/hOuf+OQNJlk70tju9IqzzM4ycq0ectkFBm94U=
github.com/osbuild/pulp-client v0.1.0 h1:L0C4ezBJGTamN3BKdv+rKLuq/WxXJbsFwz/Hj7aEmJ8=

---
@@ -15,7 +15,6 @@ import (
"github.com/osbuild/blueprint/pkg/blueprint"
"github.com/osbuild/images/pkg/arch"
"github.com/osbuild/images/pkg/distro"
-"github.com/osbuild/images/pkg/distro/fedora"
"github.com/osbuild/images/pkg/distro/test_distro"
"github.com/osbuild/images/pkg/distrofactory"
"github.com/osbuild/images/pkg/rpmmd"
@@ -308,7 +307,7 @@ func Test_upgrade(t *testing.T) {
cleanup := setupTestHostDistro("fedora-37", arch.ARCH_X86_64.String())
t.Cleanup(cleanup)
-factory := distrofactory.New(fedora.DistroFactory)
+factory := distrofactory.NewDefault()
store := newStoreFromV0(storeStruct, factory, nil)
assert.Equal(1, len(store.blueprints))
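
For orientation, this is roughly how the default factory used above resolves a distribution by name; a minimal sketch assuming the osbuild/images distrofactory API (the GetDistro call and the "fedora-41" name are illustrative, not taken from this commit):

package main

import (
	"fmt"

	"github.com/osbuild/images/pkg/distrofactory"
)

func main() {
	// NewDefault registers every distro family known to osbuild/images,
	// which is what the test switches to instead of a Fedora-only factory.
	factory := distrofactory.NewDefault()

	// Assumed lookup for illustration; a nil result means no registered
	// factory recognizes the name.
	d := factory.GetDistro("fedora-41")
	if d == nil {
		fmt.Println("unknown distro")
		return
	}
	fmt.Println(d.Name(), d.Releasever())
}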

---
@@ -108,19 +108,10 @@ func (f *fulcioTrustRoot) verifyFulcioCertificateAtTime(relevantTime time.Time,
}
}
-untrustedLeafCerts, err := cryptoutils.UnmarshalCertificatesFromPEM(untrustedCertificateBytes)
+untrustedCertificate, err := parseLeafCertFromPEM(untrustedCertificateBytes)
if err != nil {
-return nil, internal.NewInvalidSignatureError(fmt.Sprintf("parsing leaf certificate: %v", err))
+return nil, err
}
-switch len(untrustedLeafCerts) {
-case 0:
-return nil, internal.NewInvalidSignatureError("no certificate found in signature certificate data")
-case 1:
-break // OK
-default:
-return nil, internal.NewInvalidSignatureError("unexpected multiple certificates present in signature certificate data")
-}
-untrustedCertificate := untrustedLeafCerts[0]

// Go rejects Subject Alternative Name that has no DNSNames, EmailAddresses, IPAddresses and URIs;
// we match SAN ourselves, so override that.
@@ -195,6 +186,21 @@ func (f *fulcioTrustRoot) verifyFulcioCertificateAtTime(relevantTime time.Time,
return untrustedCertificate.PublicKey, nil
}

+func parseLeafCertFromPEM(untrustedCertificateBytes []byte) (*x509.Certificate, error) {
+untrustedLeafCerts, err := cryptoutils.UnmarshalCertificatesFromPEM(untrustedCertificateBytes)
+if err != nil {
+return nil, internal.NewInvalidSignatureError(fmt.Sprintf("parsing leaf certificate: %v", err))
+}
+switch len(untrustedLeafCerts) {
+case 0:
+return nil, internal.NewInvalidSignatureError("no certificate found in signature certificate data")
+case 1: // OK
+return untrustedLeafCerts[0], nil
+default:
+return nil, internal.NewInvalidSignatureError("unexpected multiple certificates present in signature certificate data")
+}
+}
+
func verifyRekorFulcio(rekorPublicKeys []*ecdsa.PublicKey, fulcioTrustRoot *fulcioTrustRoot, untrustedRekorSET []byte,
untrustedCertificateBytes []byte, untrustedIntermediateChainBytes []byte, untrustedBase64Signature string,
untrustedPayloadBytes []byte) (crypto.PublicKey, error) {

---
@@ -0,0 +1,74 @@
package signature
import (
"crypto"
"crypto/x509"
"errors"
"fmt"
"slices"
"github.com/containers/image/v5/signature/internal"
"github.com/sigstore/sigstore/pkg/cryptoutils"
)
type pkiTrustRoot struct {
caRootsCertificates *x509.CertPool
caIntermediateCertificates *x509.CertPool
subjectEmail string
subjectHostname string
}
func (p *pkiTrustRoot) validate() error {
if p.subjectEmail == "" && p.subjectHostname == "" {
return errors.New("Internal inconsistency: PKI use set up without subject email or subject hostname")
}
return nil
}
func verifyPKI(pkiTrustRoot *pkiTrustRoot, untrustedCertificateBytes []byte, untrustedIntermediateChainBytes []byte) (crypto.PublicKey, error) {
var untrustedIntermediatePool *x509.CertPool
if pkiTrustRoot.caIntermediateCertificates != nil {
untrustedIntermediatePool = pkiTrustRoot.caIntermediateCertificates.Clone()
} else {
untrustedIntermediatePool = x509.NewCertPool()
}
if len(untrustedIntermediateChainBytes) > 0 {
untrustedIntermediateChain, err := cryptoutils.UnmarshalCertificatesFromPEM(untrustedIntermediateChainBytes)
if err != nil {
return nil, internal.NewInvalidSignatureError(fmt.Sprintf("loading certificate chain: %v", err))
}
if len(untrustedIntermediateChain) > 1 {
for _, untrustedIntermediateCert := range untrustedIntermediateChain[:len(untrustedIntermediateChain)-1] {
untrustedIntermediatePool.AddCert(untrustedIntermediateCert)
}
}
}
untrustedCertificate, err := parseLeafCertFromPEM(untrustedCertificateBytes)
if err != nil {
return nil, err
}
if _, err := untrustedCertificate.Verify(x509.VerifyOptions{
Intermediates: untrustedIntermediatePool,
Roots: pkiTrustRoot.caRootsCertificates,
KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageCodeSigning},
}); err != nil {
return nil, internal.NewInvalidSignatureError(fmt.Sprintf("veryfing leaf certificate failed: %v", err))
}
if pkiTrustRoot.subjectEmail != "" {
if !slices.Contains(untrustedCertificate.EmailAddresses, pkiTrustRoot.subjectEmail) {
return nil, internal.NewInvalidSignatureError(fmt.Sprintf("Required email %q not found (got %q)",
pkiTrustRoot.subjectEmail,
untrustedCertificate.EmailAddresses))
}
}
if pkiTrustRoot.subjectHostname != "" {
if err = untrustedCertificate.VerifyHostname(pkiTrustRoot.subjectHostname); err != nil {
return nil, internal.NewInvalidSignatureError(fmt.Sprintf("Unexpected subject hostname: %v", err))
}
}
return untrustedCertificate.PublicKey, nil
}

---
@@ -71,6 +71,17 @@ func PRSigstoreSignedWithFulcio(fulcio PRSigstoreSignedFulcio) PRSigstoreSignedO
}
}

+// PRSigstoreSignedWithPKI specifies a value for the "pki" field when calling NewPRSigstoreSigned.
+func PRSigstoreSignedWithPKI(p PRSigstoreSignedPKI) PRSigstoreSignedOption {
+return func(pr *prSigstoreSigned) error {
+if pr.PKI != nil {
+return InvalidPolicyFormatError(`"pki" already specified`)
+}
+pr.PKI = p
+return nil
+}
+}
+
// PRSigstoreSignedWithRekorPublicKeyPath specifies a value for the "rekorPublicKeyPath" field when calling NewPRSigstoreSigned.
func PRSigstoreSignedWithRekorPublicKeyPath(rekorPublicKeyPath string) PRSigstoreSignedOption {
return func(pr *prSigstoreSigned) error {
@@ -159,8 +170,11 @@ func newPRSigstoreSigned(options ...PRSigstoreSignedOption) (*prSigstoreSigned,
if res.Fulcio != nil {
keySources++
}
+if res.PKI != nil {
+keySources++
+}
if keySources != 1 {
-return nil, InvalidPolicyFormatError("exactly one of keyPath, keyPaths, keyData, keyDatas and fulcio must be specified")
+return nil, InvalidPolicyFormatError("exactly one of keyPath, keyPaths, keyData, keyDatas, fulcio, and pki must be specified")
}
rekorSources := 0
@@ -182,6 +196,9 @@ func newPRSigstoreSigned(options ...PRSigstoreSignedOption) (*prSigstoreSigned,
if res.Fulcio != nil && rekorSources == 0 {
return nil, InvalidPolicyFormatError("At least one of rekorPublickeyPath, rekorPublicKeyPaths, rekorPublickeyData and rekorPublicKeyDatas must be specified if fulcio is used")
}
+if res.PKI != nil && rekorSources > 0 {
+return nil, InvalidPolicyFormatError("rekorPublickeyPath, rekorPublicKeyPaths, rekorPublickeyData and rekorPublicKeyDatas are not supported for pki")
+}
if res.SignedIdentity == nil {
return nil, InvalidPolicyFormatError("signedIdentity not specified")
@@ -218,9 +235,10 @@ var _ json.Unmarshaler = (*prSigstoreSigned)(nil)
func (pr *prSigstoreSigned) UnmarshalJSON(data []byte) error {
*pr = prSigstoreSigned{}
var tmp prSigstoreSigned
-var gotKeyPath, gotKeyPaths, gotKeyData, gotKeyDatas, gotFulcio bool
+var gotKeyPath, gotKeyPaths, gotKeyData, gotKeyDatas, gotFulcio, gotPKI bool
var gotRekorPublicKeyPath, gotRekorPublicKeyPaths, gotRekorPublicKeyData, gotRekorPublicKeyDatas bool
var fulcio prSigstoreSignedFulcio
+var pki prSigstoreSignedPKI
var signedIdentity json.RawMessage
if err := internal.ParanoidUnmarshalJSONObject(data, func(key string) any {
switch key {
@@ -253,6 +271,9 @@ func (pr *prSigstoreSigned) UnmarshalJSON(data []byte) error {
case "rekorPublicKeyDatas":
gotRekorPublicKeyDatas = true
return &tmp.RekorPublicKeyDatas
+case "pki":
+gotPKI = true
+return &pki
case "signedIdentity":
return &signedIdentity
default:
@@ -303,6 +324,9 @@ func (pr *prSigstoreSigned) UnmarshalJSON(data []byte) error {
if gotRekorPublicKeyDatas {
opts = append(opts, PRSigstoreSignedWithRekorPublicKeyDatas(tmp.RekorPublicKeyDatas))
}
+if gotPKI {
+opts = append(opts, PRSigstoreSignedWithPKI(&pki))
+}
opts = append(opts, PRSigstoreSignedWithSignedIdentity(tmp.SignedIdentity))
res, err := newPRSigstoreSigned(opts...)
@@ -440,3 +464,167 @@ func (f *prSigstoreSignedFulcio) UnmarshalJSON(data []byte) error {
*f = *res
return nil
}
// PRSigstoreSignedPKIOption is a way to pass values to NewPRSigstoreSignedPKI
type PRSigstoreSignedPKIOption func(*prSigstoreSignedPKI) error
// PRSigstoreSignedPKIWithCARootsPath specifies a value for the "caRootsPath" field when calling NewPRSigstoreSignedPKI
func PRSigstoreSignedPKIWithCARootsPath(caRootsPath string) PRSigstoreSignedPKIOption {
return func(p *prSigstoreSignedPKI) error {
if p.CARootsPath != "" {
return InvalidPolicyFormatError(`"caRootsPath" already specified`)
}
p.CARootsPath = caRootsPath
return nil
}
}
// PRSigstoreSignedPKIWithCARootsData specifies a value for the "caRootsData" field when calling NewPRSigstoreSignedPKI
func PRSigstoreSignedPKIWithCARootsData(caRootsData []byte) PRSigstoreSignedPKIOption {
return func(p *prSigstoreSignedPKI) error {
if p.CARootsData != nil {
return InvalidPolicyFormatError(`"caRootsData" already specified`)
}
p.CARootsData = caRootsData
return nil
}
}
// PRSigstoreSignedPKIWithCAIntermediatesPath specifies a value for the "caIntermediatesPath" field when calling NewPRSigstoreSignedPKI
func PRSigstoreSignedPKIWithCAIntermediatesPath(caIntermediatesPath string) PRSigstoreSignedPKIOption {
return func(p *prSigstoreSignedPKI) error {
if p.CAIntermediatesPath != "" {
return InvalidPolicyFormatError(`"caIntermediatesPath" already specified`)
}
p.CAIntermediatesPath = caIntermediatesPath
return nil
}
}
// PRSigstoreSignedPKIWithCAIntermediatesData specifies a value for the "caIntermediatesData" field when calling NewPRSigstoreSignedPKI
func PRSigstoreSignedPKIWithCAIntermediatesData(caIntermediatesData []byte) PRSigstoreSignedPKIOption {
return func(p *prSigstoreSignedPKI) error {
if p.CAIntermediatesData != nil {
return InvalidPolicyFormatError(`"caIntermediatesData" already specified`)
}
p.CAIntermediatesData = caIntermediatesData
return nil
}
}
// PRSigstoreSignedPKIWithSubjectEmail specifies a value for the "subjectEmail" field when calling NewPRSigstoreSignedPKI
func PRSigstoreSignedPKIWithSubjectEmail(subjectEmail string) PRSigstoreSignedPKIOption {
return func(p *prSigstoreSignedPKI) error {
if p.SubjectEmail != "" {
return InvalidPolicyFormatError(`"subjectEmail" already specified`)
}
p.SubjectEmail = subjectEmail
return nil
}
}
// PRSigstoreSignedPKIWithSubjectHostname specifies a value for the "subjectHostname" field when calling NewPRSigstoreSignedPKI
func PRSigstoreSignedPKIWithSubjectHostname(subjectHostname string) PRSigstoreSignedPKIOption {
return func(p *prSigstoreSignedPKI) error {
if p.SubjectHostname != "" {
return InvalidPolicyFormatError(`"subjectHostname" already specified`)
}
p.SubjectHostname = subjectHostname
return nil
}
}
// newPRSigstoreSignedPKI is NewPRSigstoreSignedPKI, except it returns the private type
func newPRSigstoreSignedPKI(options ...PRSigstoreSignedPKIOption) (*prSigstoreSignedPKI, error) {
res := prSigstoreSignedPKI{}
for _, o := range options {
if err := o(&res); err != nil {
return nil, err
}
}
if res.CARootsPath != "" && res.CARootsData != nil {
return nil, InvalidPolicyFormatError("caRootsPath and caRootsData cannot be used simultaneously")
}
if res.CARootsPath == "" && res.CARootsData == nil {
return nil, InvalidPolicyFormatError("At least one of caRootsPath and caRootsData must be specified")
}
if res.CAIntermediatesPath != "" && res.CAIntermediatesData != nil {
return nil, InvalidPolicyFormatError("caIntermediatesPath and caIntermediatesData cannot be used simultaneously")
}
if res.SubjectEmail == "" && res.SubjectHostname == "" {
return nil, InvalidPolicyFormatError("At least one of subjectEmail, subjectHostname must be specified")
}
return &res, nil
}
// NewPRSigstoreSignedPKI returns a PRSigstoreSignedPKI based on options.
func NewPRSigstoreSignedPKI(options ...PRSigstoreSignedPKIOption) (PRSigstoreSignedPKI, error) {
return newPRSigstoreSignedPKI(options...)
}
// Compile-time check that prSigstoreSignedPKI implements json.Unmarshaler.
var _ json.Unmarshaler = (*prSigstoreSignedPKI)(nil)
func (p *prSigstoreSignedPKI) UnmarshalJSON(data []byte) error {
*p = prSigstoreSignedPKI{}
var tmp prSigstoreSignedPKI
var gotCARootsPath, gotCARootsData, gotCAIntermediatesPath, gotCAIntermediatesData, gotSubjectEmail, gotSubjectHostname bool
if err := internal.ParanoidUnmarshalJSONObject(data, func(key string) any {
switch key {
case "caRootsPath":
gotCARootsPath = true
return &tmp.CARootsPath
case "caRootsData":
gotCARootsData = true
return &tmp.CARootsData
case "caIntermediatesPath":
gotCAIntermediatesPath = true
return &tmp.CAIntermediatesPath
case "caIntermediatesData":
gotCAIntermediatesData = true
return &tmp.CAIntermediatesData
case "subjectEmail":
gotSubjectEmail = true
return &tmp.SubjectEmail
case "subjectHostname":
gotSubjectHostname = true
return &tmp.SubjectHostname
default:
return nil
}
}); err != nil {
return err
}
var opts []PRSigstoreSignedPKIOption
if gotCARootsPath {
opts = append(opts, PRSigstoreSignedPKIWithCARootsPath(tmp.CARootsPath))
}
if gotCARootsData {
opts = append(opts, PRSigstoreSignedPKIWithCARootsData(tmp.CARootsData))
}
if gotCAIntermediatesPath {
opts = append(opts, PRSigstoreSignedPKIWithCAIntermediatesPath(tmp.CAIntermediatesPath))
}
if gotCAIntermediatesData {
opts = append(opts, PRSigstoreSignedPKIWithCAIntermediatesData(tmp.CAIntermediatesData))
}
if gotSubjectEmail {
opts = append(opts, PRSigstoreSignedPKIWithSubjectEmail(tmp.SubjectEmail))
}
if gotSubjectHostname {
opts = append(opts, PRSigstoreSignedPKIWithSubjectHostname(tmp.SubjectHostname))
}
res, err := newPRSigstoreSignedPKI(opts...)
if err != nil {
return err
}
*p = *res
return nil
}

---
@@ -97,11 +97,64 @@ func (f *prSigstoreSignedFulcio) prepareTrustRoot() (*fulcioTrustRoot, error) {
return &fulcio, nil
}
// prepareTrustRoot creates a pkiTrustRoot from the input data.
// (This also prevents external implementations of this interface, ensuring that prSigstoreSignedPKI is the only one.)
func (p *prSigstoreSignedPKI) prepareTrustRoot() (*pkiTrustRoot, error) {
caRootsCertPEMs, err := loadBytesFromConfigSources(configBytesSources{
inconsistencyErrorMessage: `Internal inconsistency: both "caRootsPath" and "caRootsData" specified`,
path: p.CARootsPath,
data: p.CARootsData,
})
if err != nil {
return nil, err
}
if len(caRootsCertPEMs) != 1 {
return nil, errors.New(`Internal inconsistency: PKI specified with not exactly one of "caRootsPath" nor "caRootsData"`)
}
rootsCerts := x509.NewCertPool()
if ok := rootsCerts.AppendCertsFromPEM(caRootsCertPEMs[0]); !ok {
return nil, errors.New("error loading PKI CA Roots certificates")
}
pki := pkiTrustRoot{
caRootsCertificates: rootsCerts,
subjectEmail: p.SubjectEmail,
subjectHostname: p.SubjectHostname,
}
caIntermediateCertPEMs, err := loadBytesFromConfigSources(configBytesSources{
inconsistencyErrorMessage: `Internal inconsistency: both "caIntermediatesPath" and "caIntermediatesData" specified`,
path: p.CAIntermediatesPath,
data: p.CAIntermediatesData,
})
if err != nil {
return nil, err
}
if caIntermediateCertPEMs != nil {
if len(caIntermediateCertPEMs) != 1 {
return nil, errors.New(`Internal inconsistency: PKI specified with invalid value from "caIntermediatesPath" or "caIntermediatesData"`)
}
intermediatePool := x509.NewCertPool()
trustedIntermediates, err := cryptoutils.UnmarshalCertificatesFromPEM(caIntermediateCertPEMs[0])
if err != nil {
return nil, internal.NewInvalidSignatureError(fmt.Sprintf("loading trusted intermediate certificates: %v", err))
}
for _, trustedIntermediateCert := range trustedIntermediates {
intermediatePool.AddCert(trustedIntermediateCert)
}
pki.caIntermediateCertificates = intermediatePool
}
if err := pki.validate(); err != nil {
return nil, err
}
return &pki, nil
}
// sigstoreSignedTrustRoot contains an already parsed version of the prSigstoreSigned policy
type sigstoreSignedTrustRoot struct {
publicKeys []crypto.PublicKey
fulcio *fulcioTrustRoot
rekorPublicKeys []*ecdsa.PublicKey
+pki *pkiTrustRoot
}

func (pr *prSigstoreSigned) prepareTrustRoot() (*sigstoreSignedTrustRoot, error) {
@@ -166,6 +219,14 @@ func (pr *prSigstoreSigned) prepareTrustRoot() (*sigstoreSignedTrustRoot, error)
}
}
if pr.PKI != nil {
p, err := pr.PKI.prepareTrustRoot()
if err != nil {
return nil, err
}
res.pki = p
}
return &res, nil
}
@@ -189,13 +250,23 @@ func (pr *prSigstoreSigned) isSignatureAccepted(ctx context.Context, image priva
}
untrustedPayload := sig.UntrustedPayload()
keySources := 0
if trustRoot.publicKeys != nil {
keySources++
}
if trustRoot.fulcio != nil {
keySources++
}
if trustRoot.pki != nil {
keySources++
}
var publicKeys []crypto.PublicKey
switch {
-case trustRoot.publicKeys != nil && trustRoot.fulcio != nil: // newPRSigstoreSigned rejects such combinations.
+case keySources > 1: // newPRSigstoreSigned rejects more than one key sources.
-return sarRejected, errors.New("Internal inconsistency: Both a public key and Fulcio CA specified")
+return sarRejected, errors.New("Internal inconsistency: More than one of public key, Fulcio, or PKI specified")
-case trustRoot.publicKeys == nil && trustRoot.fulcio == nil: // newPRSigstoreSigned rejects such combinations.
+case keySources == 0: // newPRSigstoreSigned rejects empty key sources.
-return sarRejected, errors.New("Internal inconsistency: Neither a public key nor a Fulcio CA specified")
+return sarRejected, errors.New("Internal inconsistency: A public key, Fulcio, or PKI must be specified.")
case trustRoot.publicKeys != nil:
if trustRoot.rekorPublicKeys != nil {
untrustedSET, ok := untrustedAnnotations[signature.SigstoreSETAnnotationKey]
@@ -254,6 +325,24 @@ func (pr *prSigstoreSigned) isSignatureAccepted(ctx context.Context, image priva
return sarRejected, err
}
publicKeys = []crypto.PublicKey{pk}
case trustRoot.pki != nil:
if trustRoot.rekorPublicKeys != nil { // newPRSigstoreSigned rejects such combinations.
return sarRejected, errors.New("Internal inconsistency: PKI specified with a Rekor public key")
}
untrustedCert, ok := untrustedAnnotations[signature.SigstoreCertificateAnnotationKey]
if !ok {
return sarRejected, fmt.Errorf("missing %s annotation", signature.SigstoreCertificateAnnotationKey)
}
var untrustedIntermediateChainBytes []byte
if untrustedIntermediateChain, ok := untrustedAnnotations[signature.SigstoreIntermediateCertificateChainAnnotationKey]; ok {
untrustedIntermediateChainBytes = []byte(untrustedIntermediateChain)
}
pk, err := verifyPKI(trustRoot.pki, []byte(untrustedCert), untrustedIntermediateChainBytes)
if err != nil {
return sarRejected, err
}
publicKeys = []crypto.PublicKey{pk}
}

if len(publicKeys) == 0 {

---
@@ -111,16 +111,16 @@ type prSignedBaseLayer struct {
type prSigstoreSigned struct {
prCommon
-// KeyPath is a pathname to a local file containing the trusted key. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas and Fulcio must be specified.
+// KeyPath is a pathname to a local file containing the trusted key. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas, Fulcio, and PKI must be specified.
KeyPath string `json:"keyPath,omitempty"`
-// KeyPaths is a set of pathnames to local files containing the trusted key(s). Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas and Fulcio must be specified.
+// KeyPaths is a set of pathnames to local files containing the trusted key(s). Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas, Fulcio, and PKI must be specified.
KeyPaths []string `json:"keyPaths,omitempty"`
-// KeyData contains the trusted key, base64-encoded. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas and Fulcio must be specified.
+// KeyData contains the trusted key, base64-encoded. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas, Fulcio, and PKI must be specified.
KeyData []byte `json:"keyData,omitempty"`
-// KeyDatas is a set of trusted keys, base64-encoded. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas and Fulcio must be specified.
+// KeyDatas is a set of trusted keys, base64-encoded. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas, Fulcio, and PKI must be specified.
KeyDatas [][]byte `json:"keyDatas,omitempty"`
-// Fulcio specifies which Fulcio-generated certificates are accepted. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas and Fulcio must be specified.
+// Fulcio specifies which Fulcio-generated certificates are accepted. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas, Fulcio, and PKI must be specified.
// If Fulcio is specified, one of RekorPublicKeyPath or RekorPublicKeyData must be specified as well.
Fulcio PRSigstoreSignedFulcio `json:"fulcio,omitempty"`
@@ -141,6 +141,9 @@ type prSigstoreSigned struct {
// otherwise it is optional (and Rekor inclusion is not required if a Rekor public key is not specified).
RekorPublicKeyDatas [][]byte `json:"rekorPublicKeyDatas,omitempty"`
// PKI specifies which PKI-generated certificates are accepted. Exactly one of KeyPath, KeyPaths, KeyData, KeyDatas, Fulcio, and PKI must be specified.
PKI PRSigstoreSignedPKI `json:"pki,omitempty"`
// SignedIdentity specifies what image identity the signature must be claiming about the image.
// Defaults to "matchRepoDigestOrExact" if not specified.
// Note that /usr/bin/cosign interoperability might require using repo-only matching.
@@ -167,6 +170,30 @@ type prSigstoreSignedFulcio struct {
SubjectEmail string `json:"subjectEmail,omitempty"`
}
// PRSigstoreSignedPKI contains PKI configuration options for a "sigstoreSigned" PolicyRequirement.
type PRSigstoreSignedPKI interface {
// prepareTrustRoot creates a pkiTrustRoot from the input data.
// (This also prevents external implementations of this interface, ensuring that prSigstoreSignedPKI is the only one.)
prepareTrustRoot() (*pkiTrustRoot, error)
}
// prSigstoreSignedPKI contains non-fulcio certificate PKI configuration options for prSigstoreSigned
type prSigstoreSignedPKI struct {
// CARootsPath a path to a file containing accepted CA root certificates, in PEM format. Exactly one of CARootsPath and CARootsData must be specified.
CARootsPath string `json:"caRootsPath"`
// CARootsData contains accepted CA root certificates in PEM format, all of that base64-encoded. Exactly one of CARootsPath and CARootsData must be specified.
CARootsData []byte `json:"caRootsData"`
// CAIntermediatesPath a path to a file containing accepted CA intermediate certificates, in PEM format. Only one of CAIntermediatesPath or CAIntermediatesData can be specified, not both.
CAIntermediatesPath string `json:"caIntermediatesPath"`
// CAIntermediatesData contains accepted CA intermediate certificates in PEM format, all of that base64-encoded. Only one of CAIntermediatesPath or CAIntermediatesData can be specified, not both.
CAIntermediatesData []byte `json:"caIntermediatesData"`
// SubjectEmail specifies the expected email address imposed on the subject to which the certificate was issued. At least one of SubjectEmail and SubjectHostname must be specified.
SubjectEmail string `json:"subjectEmail"`
// SubjectHostname specifies the expected hostname imposed on the subject to which the certificate was issued. At least one of SubjectEmail and SubjectHostname must be specified.
SubjectHostname string `json:"subjectHostname"`
}
// PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
// The type is public, but its implementation is private.
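
Taken together, the vendored changes above add a "pki" key source to sigstoreSigned policy requirements. A minimal sketch of building such a requirement with the constructors shown in these hunks (the CA path and subject email are placeholder values, not from this commit):

package main

import (
	"fmt"

	"github.com/containers/image/v5/signature"
)

func main() {
	// Describe the accepted PKI CA and the expected certificate subject.
	pki, err := signature.NewPRSigstoreSignedPKI(
		signature.PRSigstoreSignedPKIWithCARootsPath("/etc/containers/pki/ca-roots.pem"),
		signature.PRSigstoreSignedPKIWithSubjectEmail("signer@example.com"),
	)
	if err != nil {
		panic(err)
	}

	// Exactly one key source (here: pki) plus a signedIdentity are required,
	// mirroring the validation in newPRSigstoreSigned above.
	req, err := signature.NewPRSigstoreSigned(
		signature.PRSigstoreSignedWithPKI(pki),
		signature.PRSigstoreSignedWithSignedIdentity(signature.NewPRMatchRepoDigestOrExact()),
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf("constructed %T\n", req)
}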

---
@@ -14,8 +14,9 @@ import (
"github.com/containers/image/v5/internal/imagesource/impl"
"github.com/containers/image/v5/internal/imagesource/stubs"
+"github.com/containers/image/v5/pkg/compression"
+compressionTypes "github.com/containers/image/v5/pkg/compression/types"
"github.com/containers/image/v5/types"
-"github.com/klauspost/pgzip"
digest "github.com/opencontainers/go-digest"
imgspecs "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
@@ -82,31 +83,47 @@ func (r *tarballReference) NewImageSource(ctx context.Context, sys *types.System
}
}
-// Default to assuming the layer is compressed.
-layerType := imgspecv1.MediaTypeImageLayerGzip
// Set up to digest the file as it is.
blobIDdigester := digest.Canonical.Digester()
reader = io.TeeReader(reader, blobIDdigester.Hash())
-// Set up to digest the file after we maybe decompress it.
+var layerType string
-diffIDdigester := digest.Canonical.Digester()
+var diffIDdigester digest.Digester
-uncompressed, err := pgzip.NewReader(reader)
+// If necessary, digest the file after we decompress it.
-if err == nil {
+if err := func() error { // A scope for defer
format, decompressor, reader, err := compression.DetectCompressionFormat(reader)
if err != nil {
return err
}
if decompressor != nil {
uncompressed, err := decompressor(reader)
if err != nil {
return err
}
defer uncompressed.Close()
// It is compressed, so the diffID is the digest of the uncompressed version
+diffIDdigester = digest.Canonical.Digester()
reader = io.TeeReader(uncompressed, diffIDdigester.Hash())
switch format.Name() {
case compressionTypes.GzipAlgorithmName:
layerType = imgspecv1.MediaTypeImageLayerGzip
case compressionTypes.ZstdAlgorithmName:
layerType = imgspecv1.MediaTypeImageLayerZstd
default: // This is incorrect, but we have no good options, and it is what this transport was historically doing.
layerType = imgspecv1.MediaTypeImageLayerGzip
}
} else {
// It is not compressed, so the diffID and the blobID are going to be the same
diffIDdigester = blobIDdigester
layerType = imgspecv1.MediaTypeImageLayer
-uncompressed = nil
}
// TODO: This can take quite some time, and should ideally be cancellable using ctx.Done().
if _, err := io.Copy(io.Discard, reader); err != nil {
-return nil, fmt.Errorf("error reading %q: %w", filename, err)
+return fmt.Errorf("error reading %q: %w", filename, err)
}
-if uncompressed != nil {
-uncompressed.Close()
+return nil
+}(); err != nil {
+return nil, err
}

// Grab our uncompressed and possibly-compressed digests and sizes.
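
The hunk above replaces the hardcoded pgzip probe with generic compression detection. A minimal standalone sketch of that selection logic, assuming the containers/image compression helpers imported above; the gzip fallback for unknown formats mirrors the compromise noted in the vendored comment:

package main

import (
	"fmt"
	"io"
	"os"

	"github.com/containers/image/v5/pkg/compression"
	compressionTypes "github.com/containers/image/v5/pkg/compression/types"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// layerMediaType sniffs the stream and returns the OCI layer media type plus
// a reader that still delivers the full, untouched blob contents.
func layerMediaType(blob io.Reader) (string, io.Reader, error) {
	format, decompressor, rest, err := compression.DetectCompressionFormat(blob)
	if err != nil {
		return "", nil, err
	}
	if decompressor == nil {
		// Not compressed at all.
		return imgspecv1.MediaTypeImageLayer, rest, nil
	}
	switch format.Name() {
	case compressionTypes.GzipAlgorithmName:
		return imgspecv1.MediaTypeImageLayerGzip, rest, nil
	case compressionTypes.ZstdAlgorithmName:
		return imgspecv1.MediaTypeImageLayerZstd, rest, nil
	default:
		// Same compromise as the vendored code: fall back to the gzip media type.
		return imgspecv1.MediaTypeImageLayerGzip, rest, nil
	}
}

func main() {
	f, err := os.Open(os.Args[1]) // path to a layer blob, for illustration
	if err != nil {
		panic(err)
	}
	defer f.Close()
	mediaType, _, err := layerMediaType(f)
	if err != nil {
		panic(err)
	}
	fmt.Println(mediaType)
}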

---
@@ -8,7 +8,7 @@ const (
// VersionMinor is for functionality in a backwards-compatible manner
VersionMinor = 34
// VersionPatch is for backwards-compatible bug fixes
-VersionPatch = 0
+VersionPatch = 3
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = ""

---
@@ -35,7 +35,7 @@ TESTFLAGS := $(shell $(GO) test -race $(BUILDFLAGS) ./pkg/stringutils 2>&1 > /de
# N/B: This value is managed by Renovate, manual changes are
# possible, as long as they don't disturb the formatting
# (i.e. DO NOT ADD A 'v' prefix!)
-GOLANGCI_LINT_VERSION := 1.63.4
+GOLANGCI_LINT_VERSION := 1.64.5

default all: local-binary docs local-validate local-cross ## validate all checks, build and cross-build\nbinaries and docs

---
@@ -1 +1 @@
-1.57.1
+1.57.2

---
@@ -35,6 +35,7 @@ func CreateIDMappedMount(source, target string, pid int) error {
&unix.MountAttr{
Attr_set: unix.MOUNT_ATTR_IDMAP,
Userns_fd: uint64(userNsFile.Fd()),
+Propagation: unix.MS_PRIVATE,
}); err != nil {
return &os.PathError{Op: "mount_setattr", Path: source, Err: err}
}

---
@@ -1 +1 @@
-147
+151

---
@@ -0,0 +1,66 @@
{
"aarch64": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/BaseOS/aarch64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/AppStream/aarch64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
],
"ppc64le": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/BaseOS/ppc64le/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/AppStream/ppc64le/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
],
"s390x": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/BaseOS/s390x/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/AppStream/s390x/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
],
"x86_64": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/BaseOS/x86_64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10.0/AppStream/x86_64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
]
}

View file

@ -0,0 +1,66 @@
{
"aarch64": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10/BaseOS/aarch64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10/AppStream/aarch64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
],
"ppc64le": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10/BaseOS/ppc64le/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10/AppStream/ppc64le/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
],
"s390x": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10/BaseOS/s390x/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10/AppStream/s390x/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
],
"x86_64": [
{
"name": "baseos",
"baseurl": "https://repo.almalinux.org/almalinux/10/BaseOS/x86_64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
},
{
"name": "appstream",
"baseurl": "https://repo.almalinux.org/almalinux/10/AppStream/x86_64/os/",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGaP6O8BEACvg8IlAxGayV8zOi9Ex+Pd8lrj2BrBzloG8ri84ORp9o8ojq7l\nykKmIElHe11cQD2Lf/a4lcQQ4Ec3baiD786X6K2eVSlBEAnZMzfjDg8R63SfsBuu\n8Yk+lUyqlBrDnSDYaPruOAzLIz2r82ikIC1jDbipZsMFPFHPI4/hayyWxJ3oGxRe\n0mbtYLB9ElEKngt+/hfo7JLklakbznyIRuVEF3VrZb91XC6r/idqfJoNyBXSKidj\nz0IwqOhgkLUk84rzltDo3AzwGqusd7PEuhOmqinOhp0hMdXsztD4TVyhw82iXu/O\nonOAObZTZYfM6Z8btmDqkoo0aT+oPPCuZ3yC/caU9dhvCSXET/CGoXc3hL55u9PV\nqmcVm/mwvuEImEAvxVc0/dBzEUk+FwW8KsaN3HoUKrC4/NqgmaQz8/42np7u2j+B\nOOJ4hAckNEdWd8rB86CYN00sdxnvLBsp8V3IwEqXLhGOoBsagy61Z8hKCM+siOGn\nxmbbybgaLOs+DPlxt9LrtgLJHODwmD96oysUPJuA0lv8KMiSpId0tSpp9Wn/wHBG\nkRgxGYfzQu7WRvRZqQaleft1JTXXOjNzPur0RkJyb3yFwAoxpePyo/WrupM41OHW\n58cEqdC6riCnJcS4U84RLj+hwvufBVB7areQ75sETnKeyozZW+P16E1t/wARAQAB\ntChBbG1hTGludXggT1MgMTAgPHBhY2thZ2VyQGFsbWFsaW51eC5vcmc+iQJMBBMB\nCgA2FiEE7m23uY9b9e3Z2g3l3uXBHMKh5XIFAmaP6O8CGwMECwkIBwQVCgkIBRYC\nAwEAAh4FAheAAAoJEN7lwRzCoeVy32AP/A2+KI+JhmsxnactSptkAWGyAAf1YBWW\nJs2sc9OJdKj7uIkzszCx7c7VIVeF/VLijIYpM/zwUgir5S5SimzQmY+FumwbKIml\nK5RBsoSog22i7Edho0MLa1pa6qvnKS0nkl9DEcu8EbMUhucWbxGnCG/22EEMTrY+\nSi1IZNkDGtlBHHBKMC+STbqqTxtdy4tAd2NYwWh3sBIh6PF7T4NLRAugu7PZQr5K\namS4z2lV3ebshGjieA0Zoznwh0AXgN0gZ/0pC/LXI25gcgtrvkCyL8Fe0AyZUMd8\nUvZXaRSsm3SkCUIlGjPrvuItn1D7tHmqVSCDKXDM2TqjfiRm1JF+2OFCBNvGz19V\nLxWd/Gf+0qw0dtKxRMKzGh0mxXY40hjtmYZulrPxhG5itNDjStovgrevM1HBsXs9\nikrkOGQ0pFcqizTn4ZKAmMozEMuIuV89Vof2bBCg7pHT1FmXVdAaYJxb6a7A/CgN\nqHjoh8AxBiGw/Q2NM4YJlUVhHqqd+/lUG3WJqACNEnqSlZkYQ3HqNNaKhHVbD4mN\nq/g6v+f8aWWDZDsI6IAfbJUB+KPEnIvQJQleWuHrq7kcUMhEq3dwBMIoTVEHhUUr\nRQKToSEM1rN7PcanaXQM2gy141dS7tFLxhapG8ug75LkIUnEOpPMtUjvrU1ZELGq\n36vVHBB+dTDg\n=tJCw\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true,
"rhsm": false
}
]
}

View file

@ -2,11 +2,13 @@ package common
import ( import (
"bytes" "bytes"
"encoding/binary"
"fmt" "fmt"
"io" "io"
"os/exec" "os/exec"
"sort" "sort"
"strings" "strings"
"unicode/utf16"
) )
func PanicOnError(err error) { func PanicOnError(err error) {
@ -68,3 +70,15 @@ func Must[T any](val T, err error) T {
} }
return val return val
} }
// EncodeUTF16le encodes a source string to UTF-16LE.
func EncodeUTF16le(src string) []byte {
runes := []rune(src)
u16data := utf16.Encode(runes)
dest := make([]byte, 0, len(u16data)*2)
for _, c := range u16data {
dest = binary.LittleEndian.AppendUint16(dest, c)
}
return dest
}
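
A minimal standalone sketch of the new helper's behaviour (not part of the diff itself): it re-declares the function so the snippet runs outside the module, since internal/common cannot be imported from other modules.

package main

import (
	"encoding/binary"
	"fmt"
	"unicode/utf16"
)

// EncodeUTF16le mirrors the helper added above: encode a Go string as
// little-endian UTF-16 bytes.
func EncodeUTF16le(src string) []byte {
	u16data := utf16.Encode([]rune(src))
	dest := make([]byte, 0, len(u16data)*2)
	for _, c := range u16data {
		dest = binary.LittleEndian.AppendUint16(dest, c)
	}
	return dest
}

func main() {
	fmt.Printf("% x\n", EncodeUTF16le("hi")) // 68 00 69 00
}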

View file

@ -1,6 +1,7 @@
package datasizes package datasizes
import ( import (
"encoding/json"
"fmt" "fmt"
"regexp" "regexp"
"strconv" "strconv"
@ -49,8 +50,31 @@ func Parse(size string) (uint64, error) {
} }
} }
// In case the strign didn't match any of the above regexes, return nil // In case the string didn't match any of the above regexes, return nil
// even if a number was found. This is to prevent users from submitting // even if a number was found. This is to prevent users from submitting
// unknown units. // unknown units.
return 0, fmt.Errorf("unknown data size units in string: %s", size) return 0, fmt.Errorf("unknown data size units in string: %s", size)
} }
// ParseSizeInJSONMapping will process the given JSON data, assuming it
// contains a mapping. It will convert the value of the given field to a size
// in bytes using the Parse function if the field exists and is a string.
func ParseSizeInJSONMapping(field string, data []byte) ([]byte, error) {
var mapping map[string]any
if err := json.Unmarshal(data, &mapping); err != nil {
return nil, fmt.Errorf("failed to unmarshal JSON data: %w", err)
}
if rawSize, ok := mapping[field]; ok {
// If the size is a string, parse it and replace the value in the map
if sizeStr, ok := rawSize.(string); ok {
size, err := Parse(sizeStr)
if err != nil {
return nil, fmt.Errorf("failed to parse size field named %q to bytes: %w", field, err)
}
mapping[field] = size
}
}
return json.Marshal(mapping)
}
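
A usage sketch for the new helper (assuming github.com/osbuild/images is pulled in as a module dependency): a human-readable "size" string inside a JSON mapping is rewritten to a plain byte count before the normal unmarshalling runs.

package main

import (
	"fmt"

	"github.com/osbuild/images/pkg/datasizes"
)

func main() {
	in := []byte(`{"name": "root", "size": "2 GiB"}`)
	out, err := datasizes.ParseSizeInJSONMapping("size", in)
	if err != nil {
		panic(err)
	}
	// prints the re-marshalled mapping with the size as a number,
	// e.g. {"name":"root","size":2147483648}
	fmt.Println(string(out))
}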

View file

@ -6,6 +6,8 @@ import (
"reflect" "reflect"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/datasizes"
) )
const DefaultBtrfsCompression = "zstd:1" const DefaultBtrfsCompression = "zstd:1"
@ -118,6 +120,25 @@ type BtrfsSubvolume struct {
UUID string `json:"uuid,omitempty" yaml:"uuid,omitempty"` UUID string `json:"uuid,omitempty" yaml:"uuid,omitempty"`
} }
func (sv *BtrfsSubvolume) UnmarshalJSON(data []byte) (err error) {
data, err = datasizes.ParseSizeInJSONMapping("size", data)
if err != nil {
return fmt.Errorf("error parsing size in btrfs subvolume: %w", err)
}
type aliasStruct BtrfsSubvolume
var alias aliasStruct
if err := jsonUnmarshalStrict(data, &alias); err != nil {
return fmt.Errorf("cannot unmarshal %q: %w", data, err)
}
*sv = BtrfsSubvolume(alias)
return err
}
func (sv *BtrfsSubvolume) UnmarshalYAML(unmarshal func(any) error) error {
return common.UnmarshalYAMLviaJSON(sv, unmarshal)
}
func (bs *BtrfsSubvolume) Clone() Entity { func (bs *BtrfsSubvolume) Clone() Entity {
if bs == nil { if bs == nil {
return nil return nil

View file

@ -206,6 +206,24 @@ func (f FSType) String() string {
} }
} }
func (f *FSType) UnmarshalJSON(data []byte) error {
var s string
if err := json.Unmarshal(data, &s); err != nil {
return err
}
new, err := NewFSType(s)
if err != nil {
return err
}
*f = new
return nil
}
func (f *FSType) UnmarshalYAML(unmarshal func(any) error) error {
return common.UnmarshalYAMLviaJSON(f, unmarshal)
}
func NewFSType(s string) (FSType, error) { func NewFSType(s string) (FSType, error) {
switch s { switch s {
case "": case "":

View file

@ -246,6 +246,11 @@ func lvname(path string) string {
} }
func (lv *LVMLogicalVolume) UnmarshalJSON(data []byte) (err error) { func (lv *LVMLogicalVolume) UnmarshalJSON(data []byte) (err error) {
data, err = datasizes.ParseSizeInJSONMapping("size", data)
if err != nil {
return fmt.Errorf("error parsing size in LVM LV: %w", err)
}
// keep in sync with lvm.go,partition.go,luks.go // keep in sync with lvm.go,partition.go,luks.go
type alias LVMLogicalVolume type alias LVMLogicalVolume
var withoutPayload struct { var withoutPayload struct {

View file

@ -5,6 +5,7 @@ import (
"fmt" "fmt"
"github.com/osbuild/images/internal/common" "github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/datasizes"
) )
type Partition struct { type Partition struct {
@ -126,6 +127,11 @@ func (p *Partition) MarshalJSON() ([]byte, error) {
} }
func (p *Partition) UnmarshalJSON(data []byte) (err error) { func (p *Partition) UnmarshalJSON(data []byte) (err error) {
data, err = datasizes.ParseSizeInJSONMapping("size", data)
if err != nil {
return fmt.Errorf("error parsing size in partition: %w", err)
}
// keep in sync with lvm.go,partition.go,luks.go // keep in sync with lvm.go,partition.go,luks.go
type alias Partition type alias Partition
var withoutPayload struct { var withoutPayload struct {

View file

@ -7,6 +7,7 @@ import (
"github.com/google/uuid" "github.com/google/uuid"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/arch" "github.com/osbuild/images/pkg/arch"
"github.com/osbuild/images/pkg/blueprint" "github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/datasizes" "github.com/osbuild/images/pkg/datasizes"
@ -26,7 +27,7 @@ type PartitionTable struct {
SectorSize uint64 `json:"sector_size,omitempty" yaml:"sector_size,omitempty"` SectorSize uint64 `json:"sector_size,omitempty" yaml:"sector_size,omitempty"`
// Extra space at the end of the partition table (sectors) // Extra space at the end of the partition table (sectors)
ExtraPadding uint64 `json:"extra_padding,omitempty" yaml:"extra_padding,omitempty"` ExtraPadding uint64 `json:"extra_padding,omitempty" yaml:"extra_padding,omitempty"`
// Starting offset of the first partition in the table (Mb) // Starting offset of the first partition in the table (in bytes)
StartOffset uint64 `json:"start_offset,omitempty" yaml:"start_offset,omitempty"` StartOffset uint64 `json:"start_offset,omitempty" yaml:"start_offset,omitempty"`
} }
@ -172,6 +173,27 @@ func NewPartitionTable(basePT *PartitionTable, mountpoints []blueprint.Filesyste
return newPT, nil return newPT, nil
} }
func (pt *PartitionTable) UnmarshalJSON(data []byte) (err error) {
for _, field := range []string{"size", "start_offset"} {
data, err = datasizes.ParseSizeInJSONMapping(field, data)
if err != nil {
return fmt.Errorf("error parsing %q in partition table: %w", field, err)
}
}
type aliasStruct PartitionTable
var alias aliasStruct
if err := jsonUnmarshalStrict(data, &alias); err != nil {
return fmt.Errorf("cannot unmarshal %q: %w", data, err)
}
*pt = PartitionTable(alias)
return err
}
func (pt *PartitionTable) UnmarshalYAML(unmarshal func(any) error) error {
return common.UnmarshalYAMLviaJSON(pt, unmarshal)
}
func (pt *PartitionTable) Clone() Entity { func (pt *PartitionTable) Clone() Entity {
if pt == nil { if pt == nil {
return nil return nil
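
The same size-parsing hook is now wired into BtrfsSubvolume, LVMLogicalVolume, Partition and PartitionTable, which is what lets the distro YAML below say size: "1 GiB" instead of a raw byte count. A hedged sketch (field names assumed to match the JSON tags shown in the diff):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/osbuild/images/pkg/disk"
)

func main() {
	var pt disk.PartitionTable
	raw := []byte(`{"size": "10 GiB", "start_offset": "8 MiB"}`)
	if err := json.Unmarshal(raw, &pt); err != nil {
		panic(err)
	}
	fmt.Println(pt.Size, pt.StartOffset) // 10737418240 8388608
}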

View file

@ -0,0 +1,61 @@
distros:
- &fedora_rawhide
name: fedora-43
preview: true
os_version: 43
release_version: 43
module_platform_id: platform:f43
product: "Fedora"
ostree_ref_tmpl: "fedora/43/%s/iot"
iso_label_tmpl: "{{.Product}}-{{.OsVersion}}-{{.ImgTypeLabel}}-{{.Arch}}"
default_fs_type: "ext4"
defs_path: fedora
runner: &fedora_runner
name: org.osbuild.fedora43
build_packages:
- "glibc" # ldconfig
- "systemd" # systemd-tmpfiles and systemd-sysusers
- "python3" # osbuild
oscap_profiles_allowlist:
- "xccdf_org.ssgproject.content_profile_ospp"
- "xccdf_org.ssgproject.content_profile_pci-dss"
- "xccdf_org.ssgproject.content_profile_standard"
bootstrap_containers:
x86_64: "registry.fedoraproject.org/fedora-toolbox:43"
aarch64: "registry.fedoraproject.org/fedora-toolbox:43"
ppc64le: "registry.fedoraproject.org/fedora-toolbox:43"
s390x: "registry.fedoraproject.org/fedora-toolbox:43"
# XXX: remove once fedora containers are part of the upstream
# fedora registry (and can be validated via tls)
riscv64: "ghcr.io/mvo5/fedora-buildroot:43"
# XXX: add repos here too, that requires some churn, see
# https://github.com/osbuild/images/compare/main...mvo5:yaml-distroconfig?expand=1
# and we will also need to think about backward compat, as currently
# dropping "$distro-$ver.json" files into
# /etc/osbuild-composer/repositories will define what distros are
# available via images and we will need to provide compatibility for
# that.
#
# Having the repos separated means when a new fedora release is out
# we will need to update two places which is clearly a regression from
# before.
- &fedora_stable
<<: *fedora_rawhide
name: "fedora-{{.MajorVersion}}"
match: "fedora-[0-9][0-9]{,[0-9]}"
preview: false
os_version: "{{.MajorVersion}}"
release_version: "{{.MajorVersion}}"
module_platform_id: "platform:f{{.MajorVersion}}"
ostree_ref_tmpl: "fedora/{{.MajorVersion}}/%s/iot"
runner:
<<: *fedora_runner
name: org.osbuild.fedora{{.MajorVersion}}
bootstrap_containers:
x86_64: "registry.fedoraproject.org/fedora-toolbox:{{.MajorVersion}}"
aarch64: "registry.fedoraproject.org/fedora-toolbox:{{.MajorVersion}}"
ppc64le: "registry.fedoraproject.org/fedora-toolbox:{{.MajorVersion}}"
s390x: "registry.fedoraproject.org/fedora-toolbox:{{.MajorVersion}}"
# XXX: remove once fedora containers are part of the upstream
# fedora registry (and can be validated via tls)
riscv64: "ghcr.io/mvo5/fedora-buildroot:{{.MajorVersion}}"
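
How the match glob and the {{.MajorVersion}} placeholders above are resolved, sketched with the same libraries the loader uses further down (gobwas/glob and text/template); the id struct here is a hypothetical stand-in for the fields the real distro ID exposes.

package main

import (
	"bytes"
	"fmt"
	"text/template"

	"github.com/gobwas/glob"
)

// id stands in for the value the loader passes to the templates.
type id struct {
	Name         string
	MajorVersion int
	MinorVersion int
}

func main() {
	// the "match" expression from the stable-fedora entry above
	pat := glob.MustCompile("fedora-[0-9][0-9]{,[0-9]}")
	fmt.Println(pat.Match("fedora-42")) // true

	// one of the templated fields, expanded for fedora-42
	tmpl := template.Must(template.New("").Parse(
		"registry.fedoraproject.org/fedora-toolbox:{{.MajorVersion}}"))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, id{Name: "fedora", MajorVersion: 42}); err != nil {
		panic(err)
	}
	fmt.Println(out.String()) // registry.fedoraproject.org/fedora-toolbox:42
}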

View file

@ -66,6 +66,7 @@
- "efibootmgr" - "efibootmgr"
- "grub2-efi-x64" - "grub2-efi-x64"
- "shim-x64" - "shim-x64"
bootloader: "grub2"
x86_64_bios_platform: &x86_64_bios_platform x86_64_bios_platform: &x86_64_bios_platform
<<: *x86_64_uefi_platform <<: *x86_64_uefi_platform
bios_platform: "i386-pc" bios_platform: "i386-pc"
@ -77,6 +78,7 @@
build_packages: build_packages:
bios: bios:
- "grub2-pc" - "grub2-pc"
bootloader: "grub2"
# XXX: the name is not 100% accurate, this platform is also used for iot-container, iot-commit # XXX: the name is not 100% accurate, this platform is also used for iot-container, iot-commit
x86_64_installer_platform: &x86_64_installer_platform x86_64_installer_platform: &x86_64_installer_platform
<<: *x86_64_bios_platform <<: *x86_64_bios_platform
@ -88,6 +90,7 @@
- "iwlwifi-dvm-firmware" - "iwlwifi-dvm-firmware"
- "iwlwifi-mvm-firmware" - "iwlwifi-mvm-firmware"
- "microcode_ctl" - "microcode_ctl"
bootloader: "grub2"
aarch64_platform: &aarch64_platform aarch64_platform: &aarch64_platform
arch: "aarch64" arch: "aarch64"
uefi_vendor: "fedora" uefi_vendor: "fedora"
@ -100,6 +103,7 @@
- "grub2-efi-aa64" - "grub2-efi-aa64"
- "grub2-tools" - "grub2-tools"
- "shim-aa64" - "shim-aa64"
bootloader: "grub2"
aarch64_installer_platform: &aarch64_installer_platform aarch64_installer_platform: &aarch64_installer_platform
arch: "aarch64" arch: "aarch64"
uefi_vendor: "fedora" uefi_vendor: "fedora"
@ -112,6 +116,7 @@
- "iwlwifi-mvm-firmware" - "iwlwifi-mvm-firmware"
- "realtek-firmware" - "realtek-firmware"
- "uboot-images-armv8" - "uboot-images-armv8"
bootloader: "grub2"
ppc64le_bios_platform: &ppc64le_bios_platform ppc64le_bios_platform: &ppc64le_bios_platform
arch: "ppc64le" arch: "ppc64le"
bios_platform: "powerpc-ieee1275" bios_platform: "powerpc-ieee1275"
@ -127,6 +132,7 @@
bios: bios:
- "grub2-ppc64le" - "grub2-ppc64le"
- "grub2-ppc64le-modules" - "grub2-ppc64le-modules"
bootloader: "grub2"
s390x_zipl_platform: &s390x_zipl_platform s390x_zipl_platform: &s390x_zipl_platform
arch: "s390x" arch: "s390x"
zipl_support: true zipl_support: true
@ -140,6 +146,7 @@
build_packages: build_packages:
zipl: zipl:
- "s390utils-base" - "s390utils-base"
bootloader: "zipl"
riscv64_uefi_platform: &riscv64_uefi_platform riscv64_uefi_platform: &riscv64_uefi_platform
arch: "riscv64" arch: "riscv64"
uefi_vendor: "uefi" uefi_vendor: "uefi"
@ -153,15 +160,25 @@
- "grub2-efi-riscv64" - "grub2-efi-riscv64"
- "grub2-efi-riscv64-modules" - "grub2-efi-riscv64-modules"
- "shim-unsigned-riscv64" - "shim-unsigned-riscv64"
bootloader: "grub2"
installer_config: &default_installer_config installer_config: &default_installer_config
additional_dracut_modules: additional_dracut_modules:
- "net-lib" - "net-lib"
squashfs_rootfs: true
condition: condition:
version_less_than: version_less_than:
"42": "42":
# config is fully replaced
additional_dracut_modules: additional_dracut_modules:
- "ifcfg" - "ifcfg"
squashfs_rootfs: true
"41":
# config is fully replaced
additional_dracut_modules:
- "ifcfg"
squashfs_rootfs: false
image_config: image_config:
iot_enabled_services: &image_config_iot_enabled_services iot_enabled_services: &image_config_iot_enabled_services
@ -237,12 +254,12 @@
# the invidual partitions for easier composibility # the invidual partitions for easier composibility
partitions: partitions:
- &default_partition_table_part_bios - &default_partition_table_part_bios
size: 1_048_576 # 1 MiB size: "1 MiB"
bootable: true bootable: true
type: *bios_boot_partition_guid type: *bios_boot_partition_guid
uuid: *bios_boot_partition_uuid uuid: *bios_boot_partition_uuid
- &default_partition_table_part_efi - &default_partition_table_part_efi
size: 209_715_200 # 200 MiB size: "200 MiB"
type: *efi_system_partition_guid type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid uuid: *efi_system_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -255,7 +272,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 2 fstab_passno: 2
- &default_partition_table_part_boot - &default_partition_table_part_boot
size: 1_073_741_824 # 1 * datasizes.GibiByte, size: "1 GiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *data_partition_uuid uuid: *data_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -267,7 +284,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 0 fstab_passno: 0
- &default_partition_table_part_root - &default_partition_table_part_root
size: 2_147_483_648 # 2 * datasizes.GibiByte, size: "2 GiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *root_partition_uuid uuid: *root_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -280,7 +297,7 @@
fstab_passno: 0 fstab_passno: 0
# iot partitions # iot partitions
- &iot_base_partition_table_part_efi - &iot_base_partition_table_part_efi
size: 525_336_576 # 501 * datasizes.MebiByte size: "501 MiB"
type: *efi_system_partition_guid type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid uuid: *efi_system_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -293,7 +310,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 2 fstab_passno: 2
- &iot_base_partition_table_part_boot - &iot_base_partition_table_part_boot
size: 1_073_741_824 # 1 * datasizes.GibiByte, size: "1 GiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *data_partition_uuid uuid: *data_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -305,7 +322,7 @@
fstab_freq: 1 fstab_freq: 1
fstab_passno: 2 fstab_passno: 2
- &iot_base_partition_table_part_root - &iot_base_partition_table_part_root
size: 2_693_791_744 # 2569 * datasizes.MebiByte, size: "2569 MiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *root_partition_uuid uuid: *root_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -362,11 +379,11 @@
uuid: "0x14fc63d2" uuid: "0x14fc63d2"
type: "dos" type: "dos"
partitions: partitions:
- size: 4_194_304 # 4 MiB - size: "4 MiB"
bootable: true bootable: true
type: *prep_partition_dosid type: *prep_partition_dosid
- &default_partition_table_part_boot_ppc64le - &default_partition_table_part_boot_ppc64le
size: 1_073_741_824 # 1 * datasizes.GibiByte, size: "1 GiB"
payload_type: "filesystem" payload_type: "filesystem"
payload: payload:
type: "ext4" type: "ext4"
@ -376,7 +393,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 0 fstab_passno: 0
- &default_partition_table_part_root_ppc64le - &default_partition_table_part_root_ppc64le
size: 2_147_483_648 # 2 * datasizes.GibiByte, size: "2 GiB"
payload_type: "filesystem" payload_type: "filesystem"
payload: payload:
type: "ext4" type: "ext4"
@ -397,7 +414,7 @@
x86_64: x86_64:
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0" uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt" type: "gpt"
start_offset: 8_388_608 # 8 * datasizes.MebiByte start_offset: "8 MiB"
partitions: partitions:
- *default_partition_table_part_efi - *default_partition_table_part_efi
- &minimal_raw_partition_table_part_boot - &minimal_raw_partition_table_part_boot
@ -408,7 +425,7 @@
aarch64: &minimal_raw_partition_table_aarch64 aarch64: &minimal_raw_partition_table_aarch64
uuid: "0xc1748067" uuid: "0xc1748067"
type: "dos" type: "dos"
start_offset: 8_388_608 # 8 * datasizes.MebiByte start_offset: "8 MiB"
partitions: partitions:
- <<: *default_partition_table_part_efi - <<: *default_partition_table_part_efi
bootable: true bootable: true
@ -426,7 +443,7 @@
x86_64: &iot_base_partition_table_x86_64 x86_64: &iot_base_partition_table_x86_64
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0" uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt" type: "gpt"
start_offset: 8_388_608 # 8 * datasizes.MebiByte start_offset: "8 MiB"
partitions: partitions:
- *iot_base_partition_table_part_efi - *iot_base_partition_table_part_efi
- *iot_base_partition_table_part_boot - *iot_base_partition_table_part_boot
@ -434,7 +451,7 @@
aarch64: &iot_base_partition_table_aarch64 aarch64: &iot_base_partition_table_aarch64
uuid: "0xc1748067" uuid: "0xc1748067"
type: "dos" type: "dos"
start_offset: 8_388_608 # 8 * datasizes.MebiByte start_offset: "8 MiB"
partitions: partitions:
- *iot_base_partition_table_part_efi_aarch64 - *iot_base_partition_table_part_efi_aarch64
- *iot_base_partition_table_part_boot_aarch64 - *iot_base_partition_table_part_boot_aarch64
@ -446,7 +463,7 @@
type: "gpt" type: "gpt"
partitions: partitions:
- *iot_base_partition_table_part_efi - *iot_base_partition_table_part_efi
- size: 1_073_741_824 # 1 * datasizes.GibiByte, - size: "1 GiB"
type: *xboot_ldr_partition_guid type: *xboot_ldr_partition_guid
uuid: *data_partition_uuid uuid: *data_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -477,7 +494,7 @@
name: "rootvg" name: "rootvg"
description: "built with lvm2 and osbuild" description: "built with lvm2 and osbuild"
logical_volumes: logical_volumes:
- size: 8_589_934_592 # 8 * datasizes.GibiByte, - size: "8 GiB"
name: "rootlv" name: "rootlv"
payload_type: "filesystem" payload_type: "filesystem"
payload: payload:
@ -500,6 +517,48 @@ image_config:
timezone: "UTC" timezone: "UTC"
image_types: image_types:
"server-vagrant-libvirt": &server_vagrant_libvirt
filename: "vagrant-libvirt.box"
mime_type: "application/x-tar"
environment: *kvm_env
bootable: true
default_size: 5_368_709_120 # 5 * datasizes.GibiByte
image_func: "disk"
build_pipelines: ["build"]
payload_pipelines: ["os", "image", "vagrant", "archive"]
exports: ["archive"]
required_partition_sizes: *default_required_dir_sizes
image_config: &image_config_vagrant
default_target: "multi-user.target"
kernel_options: *cloud_kernel_options
users:
# yamllint disable rule:line-length
- name: "vagrant"
# yamllint disable rule:line-length
key: |
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN1YdxBpNlzxDqfJyw/QKow1F+wvG9hXGoqiysfJOn5Y vagrant insecure public key
# yamllint enable rule:line-length
files:
- path: "/etc/sudoers.d/vagrant"
user: "root"
group: "root"
mode: 440
data: |
vagrant ALL=(ALL) NOPASSWD: ALL
partition_table:
<<: *default_partition_tables
package_sets:
os:
- *cloud_base_pkgset
- include:
- "qemu-guest-agent"
platforms:
- <<: *x86_64_bios_platform
image_format: "vagrant_libvirt"
- <<: *aarch64_platform
image_format: "vagrant_libvirt"
"server-qcow2": &server_qcow2 "server-qcow2": &server_qcow2
name_aliases: ["qcow2"] name_aliases: ["qcow2"]
filename: "disk.qcow2" filename: "disk.qcow2"
@ -811,6 +870,9 @@ image_types:
rpm_ostree: true rpm_ostree: true
bootable: true bootable: true
image_func: "iot" image_func: "iot"
ostree:
name: "fedora"
remote: "fedora-iot"
build_pipelines: ["build"] build_pipelines: ["build"]
payload_pipelines: ["ostree-deployment", "image", "xz"] payload_pipelines: ["ostree-deployment", "image", "xz"]
exports: ["xz"] exports: ["xz"]
@ -885,6 +947,9 @@ image_types:
rpm_ostree: true rpm_ostree: true
bootable: true bootable: true
image_func: "iot" image_func: "iot"
ostree:
name: "fedora"
remote: "fedora-iot"
build_pipelines: ["build"] build_pipelines: ["build"]
payload_pipelines: ["ostree-deployment", "image", "qcow2"] payload_pipelines: ["ostree-deployment", "image", "qcow2"]
exports: ["qcow2"] exports: ["qcow2"]
@ -1069,6 +1134,7 @@ image_types:
- "uboot-images-armv8" - "uboot-images-armv8"
boot_files: boot_files:
- ["/usr/share/uboot/rpi_arm64/u-boot.bin", "/boot/efi/rpi-u-boot.bin"] - ["/usr/share/uboot/rpi_arm64/u-boot.bin", "/boot/efi/rpi-u-boot.bin"]
bootloader: "grub2"
- *riscv64_uefi_platform - *riscv64_uefi_platform
image_config: image_config:
# NOTE: temporary workaround for a bug in initial-setup that # NOTE: temporary workaround for a bug in initial-setup that
@ -1328,6 +1394,9 @@ image_types:
boot_iso: true boot_iso: true
image_func: "iot_installer" image_func: "iot_installer"
iso_label: "IoT" iso_label: "IoT"
ostree:
name: "fedora-iot"
remote: "fedora-iot"
build_pipelines: ["build"] build_pipelines: ["build"]
payload_pipelines: payload_pipelines:
- "anaconda-tree" - "anaconda-tree"
@ -1407,7 +1476,9 @@ image_types:
- "sdubby" - "sdubby"
condition: condition:
version_greater_or_equal: version_greater_or_equal:
VERSION_RAWHIDE: # XXX: this was VERSION_RAWHIDE, if we need this again lets add
# "alias" to defs.DistroYAML
43:
include: include:
- "anaconda-webui" - "anaconda-webui"
platforms: platforms:
@ -1424,6 +1495,8 @@ image_types:
image_func: "image_installer" image_func: "image_installer"
# We don't know the variant of the OS pipeline being installed # We don't know the variant of the OS pipeline being installed
iso_label: "Unknown" iso_label: "Unknown"
# We don't know the variant that goes into the OS pipeline that gets installed
variant: "Unknown"
build_pipelines: ["build"] build_pipelines: ["build"]
payload_pipelines: payload_pipelines:
- "anaconda-tree" - "anaconda-tree"
@ -1433,6 +1506,22 @@ image_types:
- "bootiso" - "bootiso"
exports: ["bootiso"] exports: ["bootiso"]
required_partition_sizes: *default_required_dir_sizes required_partition_sizes: *default_required_dir_sizes
installer_config:
additional_dracut_modules:
- "net-lib"
- "dbus-broker"
squashfs_rootfs: true
condition:
# on match the config is fully replaced
version_less_than:
"41":
additional_dracut_modules: &additional_dracut_f41
- "ifcfg"
- "dbus-broker"
squashfs_rootfs: false
"42":
additional_dracut_modules: *additional_dracut_f41
squashfs_rootfs: true
image_config: image_config:
locale: "en_US.UTF-8" locale: "en_US.UTF-8"
iso_rootfs_type: "squashfs" iso_rootfs_type: "squashfs"
@ -1605,6 +1694,9 @@ image_types:
default_size: 10_737_418_240 # 10 * datasizes.GibiByte default_size: 10_737_418_240 # 10 * datasizes.GibiByte
image_func: "iot_simplified_installer" image_func: "iot_simplified_installer"
iso_label: "IoT" iso_label: "IoT"
ostree:
name: "fedora"
remote: "fedora-iot"
build_pipelines: ["build"] build_pipelines: ["build"]
payload_pipelines: payload_pipelines:
- "ostree-deployment" - "ostree-deployment"
@ -1616,7 +1708,8 @@ image_types:
- "bootiso" - "bootiso"
exports: ["bootiso"] exports: ["bootiso"]
required_partition_sizes: *default_required_dir_sizes required_partition_sizes: *default_required_dir_sizes
installer_config: *default_installer_config installer_config:
<<: *default_installer_config
image_config: image_config:
<<: *image_config_iot <<: *image_config_iot
ignition_platform: "metal" ignition_platform: "metal"

View file

@ -2,28 +2,36 @@
package defs package defs
import ( import (
"bytes"
"crypto/sha256"
"embed" "embed"
"errors" "errors"
"fmt" "fmt"
"io"
"io/fs" "io/fs"
"os" "os"
"path/filepath" "path/filepath"
"slices" "slices"
"sort" "sort"
"strings" "sync"
"text/template"
"github.com/gobwas/glob"
"github.com/hashicorp/go-version" "github.com/hashicorp/go-version"
"golang.org/x/exp/maps" "golang.org/x/exp/maps"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v3"
"github.com/osbuild/images/internal/common" "github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment" "github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/pkg/arch"
"github.com/osbuild/images/pkg/customizations/oscap"
"github.com/osbuild/images/pkg/disk" "github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro" "github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/experimentalflags" "github.com/osbuild/images/pkg/experimentalflags"
"github.com/osbuild/images/pkg/olog" "github.com/osbuild/images/pkg/olog"
"github.com/osbuild/images/pkg/platform" "github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd" "github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/runner"
) )
var ( var (
@ -32,12 +40,146 @@ var (
ErrNoPartitionTableForArch = errors.New("no partition table for arch") ErrNoPartitionTableForArch = errors.New("no partition table for arch")
) )
//go:embed */*.yaml //go:embed *.yaml */*.yaml
var data embed.FS var data embed.FS
var DataFS fs.FS = data var defaultDataFS fs.FS = data
type toplevelYAML struct { // distrosYAML defines all supported YAML based distributions
type distrosYAML struct {
Distros []DistroYAML
}
func dataFS() fs.FS {
// XXX: this is a short term measure, pass a set of
// searchPaths down the stack instead
var dataFS fs.FS = defaultDataFS
if overrideDir := experimentalflags.String("yamldir"); overrideDir != "" {
olog.Printf("WARNING: using experimental override dir %q", overrideDir)
dataFS = os.DirFS(overrideDir)
}
return dataFS
}
type DistroYAML struct {
// Match can be used to match multiple versions via a
// fnmatch/glob style expression. We could also use a
// regex and do something like:
// rhel-(?P<major>[0-9]+)\.(?P<minor>[0-9]+)
// if we need to be more precise in the future, but for
// now every match will be split into "$distroname-$major.$minor"
// (with minor being optional)
Match string `yaml:"match"`
// The distro metadata, can contain go text template strings
// for {{.Major}}, {{.Minor}} which will be expanded by the
// upper layers.
Name string `yaml:"name"`
Codename string `yaml:"codename"`
Vendor string `yaml:"vendor"`
Preview bool `yaml:"preview"`
OsVersion string `yaml:"os_version"`
ReleaseVersion string `yaml:"release_version"`
ModulePlatformID string `yaml:"module_platform_id"`
Product string `yaml:"product"`
OSTreeRefTmpl string `yaml:"ostree_ref_tmpl"`
Runner runner.RunnerConf `yaml:"runner"`
// ISOLabelTmpl can contain {{.Product}},{{.OsVersion}},{{.Arch}},{{.ImgTypeLabel}}
ISOLabelTmpl string `yaml:"iso_label_tmpl"`
DefaultFSType disk.FSType `yaml:"default_fs_type"`
// directory with the actual image definitions, we separate that
// so that we can point the "centos-10" distro to the "./rhel-10"
// image types file/directory.
DefsPath string `yaml:"defs_path"`
BootstrapContainers map[arch.Arch]string `yaml:"bootstrap_containers"`
OscapProfilesAllowList []oscap.Profile `yaml:"oscap_profiles_allowlist"`
}
func executeTemplates(d *DistroYAML, nameVer string) error {
id, err := distro.ParseID(nameVer)
if err != nil {
return err
}
var errs []error
subs := func(inp string) string {
var buf bytes.Buffer
templ, err := template.New("").Parse(inp)
if err != nil {
errs = append(errs, err)
return inp
}
if err := templ.Execute(&buf, id); err != nil {
errs = append(errs, err)
return inp
}
return buf.String()
}
d.Name = subs(d.Name)
d.OsVersion = subs(d.OsVersion)
d.ReleaseVersion = subs(d.ReleaseVersion)
d.OSTreeRefTmpl = subs(d.OSTreeRefTmpl)
d.ModulePlatformID = subs(d.ModulePlatformID)
d.Runner.Name = subs(d.Runner.Name)
for a := range d.BootstrapContainers {
d.BootstrapContainers[a] = subs(d.BootstrapContainers[a])
}
return errors.Join(errs...)
}
// Distro returns the given distro or nil if the distro is not
// found. This mimics the distrofactory.GetDistro() interface.
//
// Note that eventually we want something like "Distros()" instead
// that returns all known distros but for now we keep compatibility
// with the way distrofactory/reporegistry work which is by defining
// distros via repository files.
func Distro(nameVer string) (*DistroYAML, error) {
f, err := dataFS().Open("distros.yaml")
if err != nil {
return nil, err
}
defer f.Close()
decoder := yaml.NewDecoder(f)
decoder.KnownFields(true)
var distros distrosYAML
if err := decoder.Decode(&distros); err != nil {
return nil, err
}
for _, distro := range distros.Distros {
if distro.Name == nameVer {
return &distro, nil
}
pat, err := glob.Compile(distro.Match)
if err != nil {
return nil, err
}
if pat.Match(nameVer) {
if err := executeTemplates(&distro, nameVer); err != nil {
return nil, err
}
return &distro, nil
}
}
return nil, nil
}
// imageTypesYAML describes the image types for a given distribution
// family. Note that multiple distros may use the same image types,
// e.g. centos/rhel
type imageTypesYAML struct {
ImageConfig distroImageConfig `yaml:"image_config,omitempty"` ImageConfig distroImageConfig `yaml:"image_config,omitempty"`
ImageTypes map[string]imageType `yaml:"image_types"` ImageTypes map[string]imageType `yaml:"image_types"`
Common map[string]any `yaml:".common,omitempty"` Common map[string]any `yaml:".common,omitempty"`
@ -85,8 +227,16 @@ type imageType struct {
BootISO bool `yaml:"boot_iso"` BootISO bool `yaml:"boot_iso"`
ISOLabel string `yaml:"iso_label"` ISOLabel string `yaml:"iso_label"`
// XXX: or iso_variant?
Variant string `yaml:"variant"`
RPMOSTree bool `yaml:"rpm_ostree"` RPMOSTree bool `yaml:"rpm_ostree"`
OSTree struct {
Name string `yaml:"name"`
Remote string `yaml:"remote"`
} `yaml:"ostree"`
DefaultSize uint64 `yaml:"default_size"` DefaultSize uint64 `yaml:"default_size"`
// the image func name: disk,container,live-installer,... // the image func name: disk,container,live-installer,...
Image string `yaml:"image_func"` Image string `yaml:"image_func"`
@ -195,11 +345,14 @@ func DistroImageConfig(distroNameVer string) (*distro.ImageConfig, error) {
cond := toplevel.ImageConfig.Condition cond := toplevel.ImageConfig.Condition
if cond != nil { if cond != nil {
distroName, _ := splitDistroNameVer(distroNameVer) id, err := distro.ParseID(distroNameVer)
if err != nil {
return nil, err
}
// XXX: we shoudl probably use a similar pattern like // XXX: we shoudl probably use a similar pattern like
// for the partition table overrides (via // for the partition table overrides (via
// findElementIndexByJSONTag) but this if fine for now // findElementIndexByJSONTag) but this if fine for now
if distroNameCnf, ok := cond.DistroName[distroName]; ok { if distroNameCnf, ok := cond.DistroName[id.Name]; ok {
imgConfig = distroNameCnf.InheritFrom(imgConfig) imgConfig = distroNameCnf.InheritFrom(imgConfig)
} }
} }
@ -209,14 +362,17 @@ func DistroImageConfig(distroNameVer string) (*distro.ImageConfig, error) {
// PackageSets loads the PackageSets from the yaml source file // PackageSets loads the PackageSets from the yaml source file
// discovered via the imagetype. // discovered via the imagetype.
func PackageSets(it distro.ImageType, replacements map[string]string) (map[string]rpmmd.PackageSet, error) { func PackageSets(it distro.ImageType) (map[string]rpmmd.PackageSet, error) {
typeName := it.Name() typeName := it.Name()
arch := it.Arch() arch := it.Arch()
archName := arch.Name() archName := arch.Name()
distribution := arch.Distro() distribution := arch.Distro()
distroNameVer := distribution.Name() distroNameVer := distribution.Name()
distroName, distroVersion := splitDistroNameVer(distroNameVer) id, err := distro.ParseID(distroNameVer)
if err != nil {
return nil, err
}
// each imagetype can have multiple package sets, so that we can // each imagetype can have multiple package sets, so that we can
// use yaml aliases/anchors to de-duplicate them // use yaml aliases/anchors to de-duplicate them
@ -247,7 +403,7 @@ func PackageSets(it distro.ImageType, replacements map[string]string) (map[strin
Exclude: archSet.Exclude, Exclude: archSet.Exclude,
}) })
} }
if distroNameSet, ok := pkgSet.Condition.DistroName[distroName]; ok { if distroNameSet, ok := pkgSet.Condition.DistroName[id.Name]; ok {
rpmmdPkgSet = rpmmdPkgSet.Append(rpmmd.PackageSet{ rpmmdPkgSet = rpmmdPkgSet.Append(rpmmd.PackageSet{
Include: distroNameSet.Include, Include: distroNameSet.Include,
Exclude: distroNameSet.Exclude, Exclude: distroNameSet.Exclude,
@ -257,10 +413,7 @@ func PackageSets(it distro.ImageType, replacements map[string]string) (map[strin
// packageSets are strictly additive the order // packageSets are strictly additive the order
// is irrelevant // is irrelevant
for ltVer, ltSet := range pkgSet.Condition.VersionLessThan { for ltVer, ltSet := range pkgSet.Condition.VersionLessThan {
if r, ok := replacements[ltVer]; ok { if common.VersionLessThan(id.VersionString(), ltVer) {
ltVer = r
}
if common.VersionLessThan(distroVersion, ltVer) {
rpmmdPkgSet = rpmmdPkgSet.Append(rpmmd.PackageSet{ rpmmdPkgSet = rpmmdPkgSet.Append(rpmmd.PackageSet{
Include: ltSet.Include, Include: ltSet.Include,
Exclude: ltSet.Exclude, Exclude: ltSet.Exclude,
@ -269,10 +422,7 @@ func PackageSets(it distro.ImageType, replacements map[string]string) (map[strin
} }
for gteqVer, gteqSet := range pkgSet.Condition.VersionGreaterOrEqual { for gteqVer, gteqSet := range pkgSet.Condition.VersionGreaterOrEqual {
if r, ok := replacements[gteqVer]; ok { if common.VersionGreaterThanOrEqual(id.VersionString(), gteqVer) {
gteqVer = r
}
if common.VersionGreaterThanOrEqual(distroVersion, gteqVer) {
rpmmdPkgSet = rpmmdPkgSet.Append(rpmmd.PackageSet{ rpmmdPkgSet = rpmmdPkgSet.Append(rpmmd.PackageSet{
Include: gteqSet.Include, Include: gteqSet.Include,
Exclude: gteqSet.Exclude, Exclude: gteqSet.Exclude,
@ -291,7 +441,7 @@ func PackageSets(it distro.ImageType, replacements map[string]string) (map[strin
} }
// PartitionTable returns the partionTable for the given distro/imgType. // PartitionTable returns the partionTable for the given distro/imgType.
func PartitionTable(it distro.ImageType, replacements map[string]string) (*disk.PartitionTable, error) { func PartitionTable(it distro.ImageType) (*disk.PartitionTable, error) {
distroNameVer := it.Arch().Distro().Name() distroNameVer := it.Arch().Distro().Name()
toplevel, err := load(distroNameVer) toplevel, err := load(distroNameVer)
@ -309,118 +459,150 @@ func PartitionTable(it distro.ImageType, replacements map[string]string) (*disk.
arch := it.Arch() arch := it.Arch()
archName := arch.Name() archName := arch.Name()
pt, ok := imgType.PartitionTables[archName]
if !ok {
return nil, fmt.Errorf("%w (%q): %q", ErrNoPartitionTableForArch, it.Name(), archName)
}
if imgType.PartitionTablesOverrides != nil { if imgType.PartitionTablesOverrides != nil {
cond := imgType.PartitionTablesOverrides.Condition cond := imgType.PartitionTablesOverrides.Condition
distroName, distroVersion := splitDistroNameVer(it.Arch().Distro().Name()) id, err := distro.ParseID(it.Arch().Distro().Name())
if err != nil {
return nil, err
}
for _, ltVer := range versionLessThanSortedKeys(cond.VersionLessThan) { for _, ltVer := range versionLessThanSortedKeys(cond.VersionLessThan) {
ltOverrides := cond.VersionLessThan[ltVer] ltOverrides := cond.VersionLessThan[ltVer]
if r, ok := replacements[ltVer]; ok { if common.VersionLessThan(id.VersionString(), ltVer) {
ltVer = r if newPt, ok := ltOverrides[archName]; ok {
} pt = newPt
if common.VersionLessThan(distroVersion, ltVer) {
for arch, overridePt := range ltOverrides {
imgType.PartitionTables[arch] = overridePt
} }
} }
} }
for _, gteqVer := range backward(versionLessThanSortedKeys(cond.VersionGreaterOrEqual)) { for _, gteqVer := range backward(versionLessThanSortedKeys(cond.VersionGreaterOrEqual)) {
geOverrides := cond.VersionGreaterOrEqual[gteqVer] geOverrides := cond.VersionGreaterOrEqual[gteqVer]
if r, ok := replacements[gteqVer]; ok { if common.VersionGreaterThanOrEqual(id.VersionString(), gteqVer) {
gteqVer = r if newPt, ok := geOverrides[archName]; ok {
} pt = newPt
if common.VersionGreaterThanOrEqual(distroVersion, gteqVer) {
for arch, overridePt := range geOverrides {
imgType.PartitionTables[arch] = overridePt
} }
} }
} }
if distroNameOverrides, ok := cond.DistroName[distroName]; ok { if distroNameOverrides, ok := cond.DistroName[id.Name]; ok {
for arch, overridePt := range distroNameOverrides { if newPt, ok := distroNameOverrides[archName]; ok {
imgType.PartitionTables[arch] = overridePt pt = newPt
} }
} }
} }
pt, ok := imgType.PartitionTables[archName]
if !ok {
return nil, fmt.Errorf("%w (%q): %q", ErrNoPartitionTableForArch, it.Name(), archName)
}
return pt, nil return pt, nil
} }
func splitDistroNameVer(distroNameVer string) (string, string) { // Cache the toplevel structure, loading/parsing YAML is quite
// we need to split from the right for "centos-stream-10" like // expensive. This can all be removed in the future where there
// distro names, sadly go has no rsplit() so we do it manually // is a single load for each distroNameVer. Right now the various
// XXX: we cannot use distroidparser here because of import cycles // helpers (like ParititonTable(), ImageConfig() are called a
idx := strings.LastIndex(distroNameVer, "-") // gazillion times. However once we move into the "generic" distro
return distroNameVer[:idx], distroNameVer[idx+1:] // the distro will do a single load/parse of all image types and
// just reuse them and this can go.
type imageTypesCache struct {
cache map[string]*imageTypesYAML
mu sync.Mutex
} }
func load(distroNameVer string) (*toplevelYAML, error) { func newImageTypesCache() *imageTypesCache {
// we need to split from the right for "centos-stream-10" like return &imageTypesCache{cache: make(map[string]*imageTypesYAML)}
// distro names, sadly go has no rsplit() so we do it manually }
// XXX: we cannot use distroidparser here because of import cycles
distroName, distroVersion := splitDistroNameVer(distroNameVer)
distroNameMajorVer := strings.SplitN(distroNameVer, ".", 2)[0]
distroMajorVer := strings.SplitN(distroVersion, ".", 2)[0]
-// XXX: this is a short term measure, pass a set of
+func (i *imageTypesCache) Get(hash string) *imageTypesYAML {
-// searchPaths down the stack instead
+i.mu.Lock()
-var dataFS fs.FS = DataFS
+defer i.mu.Unlock()
if overrideDir := experimentalflags.String("yamldir"); overrideDir != "" {
-olog.Printf("WARNING: using experimental override dir %q", overrideDir)
+return i.cache[hash]
-dataFS = os.DirFS(overrideDir)
+}
func (i *imageTypesCache) Set(hash string, ity *imageTypesYAML) {
i.mu.Lock()
defer i.mu.Unlock()
i.cache[hash] = ity
}
var (
itCache = newImageTypesCache()
)
func load(distroNameVer string) (*imageTypesYAML, error) {
id, err := distro.ParseID(distroNameVer)
if err != nil {
return nil, err
}
// XXX: this is only needed temporary until we have a "distros.yaml"
// that describes some high-level properties of each distro
// (like their yaml dirs)
var baseDir string
-switch distroName {
+switch id.Name {
-case "rhel":
+case "rhel", "almalinux", "centos", "almalinux_kitten":
// rhel yaml files are under ./rhel-$majorVer
baseDir = distroNameMajorVer
case "almalinux":
// almalinux yaml is just rhel, we take only its major version
baseDir = fmt.Sprintf("rhel-%s", distroMajorVer)
case "centos", "almalinux_kitten":
// centos and kitten yaml is just rhel but we have (sadly) no
// symlinks in "go:embed" so we have to have this slightly ugly
// workaround
-baseDir = fmt.Sprintf("rhel-%s", distroVersion)
+baseDir = fmt.Sprintf("rhel-%v", id.MajorVersion)
-case "fedora", "test-distro":
+case "test-distro":
// our other distros just have a single yaml dir per distro
// and use condition.version_gt etc
-baseDir = distroName
+baseDir = id.Name
default:
return nil, fmt.Errorf("unsupported distro in loader %q (add to loader.go)", distroName)
}
-f, err := dataFS.Open(filepath.Join(baseDir, "distro.yaml"))
+// take the base path from the distros.yaml
distro, err := Distro(distroNameVer)
if err != nil && !os.IsNotExist(err) {
return nil, err
}
if distro != nil && distro.DefsPath != "" {
baseDir = distro.DefsPath
}
f, err := dataFS().Open(filepath.Join(baseDir, "distro.yaml"))
if err != nil {
return nil, err
}
defer f.Close()
-decoder := yaml.NewDecoder(f)
+// XXX: this is currently needed because rhel distros call
-decoder.KnownFields(true)
+// ImageType() and ParitionTable() a gazillion times and
// each time the full yaml is loaded. Once things move to
// the "generic" distro this will no longer be the case and
// this cache can be removed and below we can decode directly
// from "f" again instead of wasting memory with "buf"
var buf bytes.Buffer
h := sha256.New()
if _, err := io.Copy(io.MultiWriter(&buf, h), f); err != nil {
return nil, fmt.Errorf("cannot read from %s: %w", baseDir, err)
}
inputHash := string(h.Sum(nil))
if cached := itCache.Get(inputHash); cached != nil {
return cached, nil
}
-// each imagetype can have multiple package sets, so that we can
+var toplevel imageTypesYAML
-// use yaml aliases/anchors to de-duplicate them
+decoder := yaml.NewDecoder(&buf)
-var toplevel toplevelYAML
+decoder.KnownFields(true)
if err := decoder.Decode(&toplevel); err != nil {
return nil, err
}
// XXX: remove once we no longer need caching
itCache.Set(inputHash, &toplevel)
return &toplevel, nil
}
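The cache above exists because, as the comment notes, the RHEL distros re-load the same YAML definitions many times: load() now reads the file once into a buffer, keys the decoded result by the SHA-256 of the raw bytes, and answers later calls from the cache. A minimal sketch of that pattern; the type and function names here are illustrative, not the package's own:

    // Sketch: cache a decoded result keyed by the SHA-256 of the raw input,
    // so repeated loads of the same bytes decode only once.
    package sketch

    import (
        "bytes"
        "crypto/sha256"
        "io"
        "sync"
    )

    type cache struct {
        mu sync.Mutex
        m  map[string][]byte // decoded result, simplified to raw bytes here
    }

    func (c *cache) loadOnce(r io.Reader, decode func([]byte) []byte) ([]byte, error) {
        var buf bytes.Buffer
        h := sha256.New()
        // read once, feeding both the buffer (for decoding) and the hash (for the key)
        if _, err := io.Copy(io.MultiWriter(&buf, h), r); err != nil {
            return nil, err
        }
        key := string(h.Sum(nil))
        c.mu.Lock()
        defer c.mu.Unlock()
        if v, ok := c.m[key]; ok {
            return v, nil
        }
        v := decode(buf.Bytes())
        if c.m == nil {
            c.m = map[string][]byte{}
        }
        c.m[key] = v
        return v, nil
    }

Because the key is the content hash rather than the file path, an overridden definitions directory that ships identical YAML still lands on the same cache entry.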
// ImageConfig returns the image type specific ImageConfig
-func ImageConfig(distroNameVer, archName, typeName string, replacements map[string]string) (*distro.ImageConfig, error) {
+func ImageConfig(distroNameVer, archName, typeName string) (*distro.ImageConfig, error) {
toplevel, err := load(distroNameVer)
if err != nil {
return nil, err
@ -432,20 +614,21 @@ func ImageConfig(distroNameVer, archName, typeName string, replacements map[stri
imgConfig := imgType.ImageConfig.ImageConfig
cond := imgType.ImageConfig.Condition
if cond != nil {
-distroName, distroVersion := splitDistroNameVer(distroNameVer)
+id, err := distro.ParseID(distroNameVer)
if err != nil {
return nil, err
}
-if distroNameCnf, ok := cond.DistroName[distroName]; ok {
+if distroNameCnf, ok := cond.DistroName[id.Name]; ok {
imgConfig = distroNameCnf.InheritFrom(imgConfig)
}
if archCnf, ok := cond.Architecture[archName]; ok {
imgConfig = archCnf.InheritFrom(imgConfig)
}
-for ltVer, ltConf := range cond.VersionLessThan {
+for _, ltVer := range versionLessThanSortedKeys(cond.VersionLessThan) {
-if r, ok := replacements[ltVer]; ok {
+ltOverrides := cond.VersionLessThan[ltVer]
-ltVer = r
+if common.VersionLessThan(id.VersionString(), ltVer) {
-}
+imgConfig = ltOverrides.InheritFrom(imgConfig)
if common.VersionLessThan(distroVersion, ltVer) {
imgConfig = ltConf.InheritFrom(imgConfig)
}
}
}
@ -468,7 +651,7 @@ func nNonEmpty[K comparable, V any](maps ...map[K]V) int {
// InstallerConfig returns the InstallerConfig for the given imgType
// Note that on conditions the InstallerConfig is fully replaced, do
// any merging in YAML
-func InstallerConfig(distroNameVer, archName, typeName string, replacements map[string]string) (*distro.InstallerConfig, error) {
+func InstallerConfig(distroNameVer, archName, typeName string) (*distro.InstallerConfig, error) {
toplevel, err := load(distroNameVer)
if err != nil {
return nil, err
@ -484,20 +667,21 @@ func InstallerConfig(distroNameVer, archName, typeName string, replacements map[
return nil, fmt.Errorf("only a single conditional allowed in installer config for %v", typeName)
}
-distroName, distroVersion := splitDistroNameVer(distroNameVer)
+id, err := distro.ParseID(distroNameVer)
if err != nil {
return nil, err
}
-if distroNameCnf, ok := cond.DistroName[distroName]; ok {
+if distroNameCnf, ok := cond.DistroName[id.Name]; ok {
installerConfig = distroNameCnf
}
if archCnf, ok := cond.Architecture[archName]; ok {
installerConfig = archCnf
}
-for ltVer, ltConf := range cond.VersionLessThan {
+for _, ltVer := range versionLessThanSortedKeys(cond.VersionLessThan) {
-if r, ok := replacements[ltVer]; ok {
+ltOverrides := cond.VersionLessThan[ltVer]
-ltVer = r
+if common.VersionLessThan(id.VersionString(), ltVer) {
-}
+installerConfig = ltOverrides
if common.VersionLessThan(distroVersion, ltVer) {
installerConfig = ltConf
}
}
}
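Both ImageConfig and InstallerConfig drop the replacements parameter and walk the version_less_than conditions through versionLessThanSortedKeys, so overrides apply in a stable order instead of Go's randomized map iteration. A rough sketch of such a helper; the real one presumably compares versions rather than plain strings, which is all this illustrates:

    // Sketch: iterate a map of version-keyed overrides in a deterministic order.
    // Plain lexical sort is shown only to illustrate the idea.
    package sketch

    import "sort"

    func sortedKeys[V any](m map[string]V) []string {
        keys := make([]string, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        return keys
    }

Iterating as for _, ltVer := range sortedKeys(cond.VersionLessThan) means that when more than one version_less_than entry matches, successive runs apply the overrides in the same sequence.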

---

@ -86,7 +86,8 @@
value: "4194304" value: "4194304"
- key: "vm.max_map_count" - key: "vm.max_map_count"
value: "2147483647" value: "2147483647"
dnf_set_release_ver_var: true dnf_config:
set_release_ver_var: true
sap_pkgset: &sap_pkgset sap_pkgset: &sap_pkgset
include: include:
@ -224,6 +225,7 @@
- &filesystem_data_guid "0FC63DAF-8483-4772-8E79-3D69D8477DE4" - &filesystem_data_guid "0FC63DAF-8483-4772-8E79-3D69D8477DE4"
- &xboot_ldr_partition_guid "BC13C2FF-59E6-4262-A352-B275FD6F7172" - &xboot_ldr_partition_guid "BC13C2FF-59E6-4262-A352-B275FD6F7172"
- &lvm_partition_guid "E6D6D379-F507-44C2-A23C-238F2A3DF928" - &lvm_partition_guid "E6D6D379-F507-44C2-A23C-238F2A3DF928"
- &root_partition_x86_64_guid "4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709"
# static UUIDs for partitions and filesystems # static UUIDs for partitions and filesystems
# NOTE(akoutsou): These are unnecessary and have stuck around since the # NOTE(akoutsou): These are unnecessary and have stuck around since the
# beginning where (I believe) the goal was to have predictable, # beginning where (I believe) the goal was to have predictable,
@ -242,12 +244,12 @@
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0" uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt" type: "gpt"
partitions: partitions:
- size: 1_048_576 # 1 MiB - size: "1 MiB"
bootable: true bootable: true
type: *bios_boot_partition_guid type: *bios_boot_partition_guid
uuid: *bios_boot_partition_uuid uuid: *bios_boot_partition_uuid
- &default_partition_table_part_efi - &default_partition_table_part_efi
size: 209_715_200 # 200 MiB size: "200 MiB"
type: *efi_system_partition_guid type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid uuid: *efi_system_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -260,7 +262,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 2 fstab_passno: 2
- &default_partition_table_part_root - &default_partition_table_part_root
size: 2_147_483_648 # 2 * datasizes.GibiByte, size: "2 GiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *root_partition_uuid uuid: *root_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -281,11 +283,11 @@
uuid: "0x14fc63d2" uuid: "0x14fc63d2"
type: "dos" type: "dos"
partitions: partitions:
- size: 4_194_304 # 4 MiB - size: "4 MiB"
bootable: true bootable: true
type: *prep_partition_dosid type: *prep_partition_dosid
- &default_partition_table_part_root_ppc64le - &default_partition_table_part_root_ppc64le
size: 2_147_483_648 # 2 * datasizes.GibiByte, size: "2 GiB"
payload_type: "filesystem" payload_type: "filesystem"
payload: payload:
<<: *default_partition_table_part_root_payload <<: *default_partition_table_part_root_payload
@ -297,6 +299,193 @@
- <<: *default_partition_table_part_root_ppc64le - <<: *default_partition_table_part_root_ppc64le
bootable: true bootable: true
azure_image_config: &azure_image_config
# from CreateAzureDatalossWarningScriptAndUnit
files:
- path: &dataloss_script "/usr/local/sbin/temp-disk-dataloss-warning"
mode: 0755
data: |
#!/bin/sh
# /usr/local/sbin/temp-disk-dataloss-warning
# Write dataloss warning file on mounted Azure resource disk
AZURE_RESOURCE_DISK_PART1="/dev/disk/cloud/azure_resource-part1"
MOUNTPATH=$(grep "$AZURE_RESOURCE_DISK_PART1" /etc/fstab | tr '\t' ' ' | cut -d' ' -f2)
if [ -z "$MOUNTPATH" ]; then
echo "There is no mountpoint of $AZURE_RESOURCE_DISK_PART1 in /etc/fstab"
exit 0
fi
if [ "$MOUNTPATH" = "none" ]; then
echo "Mountpoint of $AZURE_RESOURCE_DISK_PART1 is not a path"
exit 1
fi
if ! mountpoint -q "$MOUNTPATH"; then
echo "$AZURE_RESOURCE_DISK_PART1 is not mounted at $MOUNTPATH"
exit 1
fi
echo "Creating a dataloss warning file at ${MOUNTPATH}/DATALOSS_WARNING_README.txt"
cat <<'EOF' > "${MOUNTPATH}/DATALOSS_WARNING_README.txt"
WARNING: THIS IS A TEMPORARY DISK.
Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT.
Please do not use this disk for storing any personal or application data.
EOF
systemd_unit:
- filename: &dataloss_systemd_unit_filename "temp-disk-dataloss-warning.service"
"unit-type": "system"
"unit-path": "etc"
config:
"Unit":
Description: "Azure temporary resource disk dataloss warning file creation"
After: ["multi-user.target", "cloud-final.service"]
"Service":
Type: "oneshot"
ExecStart: [*dataloss_script]
StandardOutput: "journal+console"
"Install":
WantedBy: ["default.target"]
keyboard:
keymap: "us"
"x11-keymap":
layouts: ["us"]
update_default_kernel: true
default_kernel: "kernel-core"
sysconfig:
networking: true
no_zero_conf: true
enabled_services:
- "firewalld"
- "nm-cloud-setup.service"
- "nm-cloud-setup.timer"
- "sshd"
- "waagent"
- *dataloss_systemd_unit_filename
sshd_config:
config:
ClientAliveInterval: 180
modprobe:
- filename: "blacklist-amdgpu.conf"
commands:
- command: blacklist
modulename: "amdgpu"
- filename: "blacklist-intel-cstate.conf"
commands:
- command: blacklist
modulename: "intel_cstate"
- filename: "blacklist-floppy.conf"
commands:
- command: blacklist
modulename: "floppy"
- filename: "blacklist-nouveau.conf"
commands:
- command: blacklist
modulename: "nouveau"
- command: blacklist
modulename: "lbm-nouveau"
- filename: "blacklist-skylake-edac.conf"
commands:
- command: blacklist
modulename: "skx_edac"
- filename: "blacklist-intel_uncore.conf"
commands:
- command: blacklist
modulename: "intel_uncore"
- filename: "blacklist-acpi_cpufreq.conf"
commands:
- command: blacklist
modulename: "acpi_cpufreq"
pwquality:
config:
minlen: 6
minclass: 3
dcredit: 0
ucredit: 0
lcredit: 0
ocredit: 0
waagent_config:
config:
"ResourceDisk.Format": false
"ResourceDisk.EnableSwap": false
"Provisioning.UseCloudInit": true
"Provisioning.Enabled": false
grub2_config:
disable_recovery: true
disable_submenu: true
distributor: "$(sed 's, release .*$,,g' /etc/system-release)"
terminal: ["serial", "console"]
serial: "serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
timeout: 10
timeout_style: "countdown"
udev_rules:
filename: "/etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules"
rules:
- comment:
- "Accelerated Networking on Azure exposes a new SRIOV interface to the VM."
- "This interface is transparently bonded to the synthetic interface,"
- "so NetworkManager should just ignore any SRIOV interfaces."
- rule:
- K: "SUBSYSTEM"
O: "=="
V: "net"
- K: "DRIVERS"
O: "=="
V: "hv_pci"
- K: "ACTION"
O: "=="
V: "add"
- K: "ENV"
A: "NM_UNMANAGED"
O: "="
V: "1"
systemd_dropin:
- unit: "nm-cloud-setup.service"
dropin: "10-rh-enable-for-azure.conf"
config:
service:
environment:
- key: "NM_CLOUD_SETUP_AZURE"
value: "yes"
default_target: "multi-user.target"
network_manager:
path: "/etc/NetworkManager/conf.d/99-azure-unmanaged-devices.conf"
settings:
keyfile:
"unmanaged-devices":
- "driver:mlx4_core"
- "driver:mlx5_core"
condition:
distro_name:
rhel:
gpgkey_files:
- "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
architecture:
x86_64:
kernel_options:
# common
- "ro"
- "loglevel=3"
- "nvme_core.io_timeout=240"
# x86
- "console=tty1"
- "console=ttyS0"
- "earlyprintk=ttyS0"
- "rootdelay=300"
aarch64:
kernel_options:
# common
- "ro"
- "loglevel=3"
- "nvme_core.io_timeout=240"
# aarch64
- "console=ttyAMA0"
image_config:
default:
default_kernel: "kernel"
@ -450,107 +639,15 @@ image_types:
vhd: &vhd
# based on https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_microsoft_azure/assembly_deploying-a-rhel-image-as-a-virtual-machine-on-microsoft-azure_cloud-content-azure#making-configuration-changes_configure-the-image-azure
image_config: &image_config_vhd
-# from CreateAzureDatalossWarningScriptAndUnit
+<<: *azure_image_config
-files:
+time_synchronization:
-- path: &dataloss_script "/usr/local/sbin/temp-disk-dataloss-warning"
+refclocks:
-mode: 0755
+- driver:
-data: |
+name: "PHC"
-#!/bin/sh
+path: "/dev/ptp_hyperv"
-# /usr/local/sbin/temp-disk-dataloss-warning
+poll: 3
-# Write dataloss warning file on mounted Azure resource disk
+dpoll: -2
offset: 0.0
AZURE_RESOURCE_DISK_PART1="/dev/disk/cloud/azure_resource-part1"
MOUNTPATH=$(grep "$AZURE_RESOURCE_DISK_PART1" /etc/fstab | tr '\t' ' ' | cut -d' ' -f2)
if [ -z "$MOUNTPATH" ]; then
echo "There is no mountpoint of $AZURE_RESOURCE_DISK_PART1 in /etc/fstab"
exit 0
fi
if [ "$MOUNTPATH" = "none" ]; then
echo "Mountpoint of $AZURE_RESOURCE_DISK_PART1 is not a path"
exit 1
fi
if ! mountpoint -q "$MOUNTPATH"; then
echo "$AZURE_RESOURCE_DISK_PART1 is not mounted at $MOUNTPATH"
exit 1
fi
echo "Creating a dataloss warning file at ${MOUNTPATH}/DATALOSS_WARNING_README.txt"
cat <<'EOF' > "${MOUNTPATH}/DATALOSS_WARNING_README.txt"
WARNING: THIS IS A TEMPORARY DISK.
Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT.
Please do not use this disk for storing any personal or application data.
EOF
systemd_unit:
- filename: &dataloss_systemd_unit_filename "temp-disk-dataloss-warning.service"
"unit-type": "system"
"unit-path": "etc"
config:
"Unit":
Description: "Azure temporary resource disk dataloss warning file creation"
After: ["multi-user.target", "cloud-final.service"]
"Service":
Type: "oneshot"
ExecStart: [*dataloss_script]
StandardOutput: "journal+console"
"Install":
WantedBy: ["default.target"]
keyboard:
keymap: "us"
"x11-keymap":
layouts: ["us"]
update_default_kernel: true
default_kernel: "kernel-core"
sysconfig:
networking: true
no_zero_conf: true
enabled_services:
- "firewalld"
- "nm-cloud-setup.service"
- "nm-cloud-setup.timer"
- "sshd"
- "waagent"
- *dataloss_systemd_unit_filename
sshd_config:
config:
ClientAliveInterval: 180
modprobe:
- filename: "blacklist-amdgpu.conf"
commands:
- command: blacklist
modulename: "amdgpu"
- filename: "blacklist-intel-cstate.conf"
commands:
- command: blacklist
modulename: "intel_cstate"
- filename: "blacklist-floppy.conf"
commands:
- command: blacklist
modulename: "floppy"
- filename: "blacklist-nouveau.conf"
commands:
- command: blacklist
modulename: "nouveau"
- command: blacklist
modulename: "lbm-nouveau"
- filename: "blacklist-skylake-edac.conf"
commands:
- command: blacklist
modulename: "skx_edac"
- filename: "blacklist-intel_uncore.conf"
commands:
- command: blacklist
modulename: "intel_uncore"
- filename: "blacklist-acpi_cpufreq.conf"
commands:
- command: blacklist
modulename: "acpi_cpufreq"
cloud_init:
- filename: "10-azure-kvp.cfg"
config:
@ -566,98 +663,6 @@ image_types:
apply_network_config: false
datasource_list:
- "Azure"
pwquality:
config:
minlen: 6
minclass: 3
dcredit: 0
ucredit: 0
lcredit: 0
ocredit: 0
waagent_config:
config:
"ResourceDisk.Format": false
"ResourceDisk.EnableSwap": false
"Provisioning.UseCloudInit": true
"Provisioning.Enabled": false
grub2_config:
disable_recovery: true
disable_submenu: true
distributor: "$(sed 's, release .*$,,g' /etc/system-release)"
terminal: ["serial", "console"]
serial: "serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
timeout: 10
timeout_style: "countdown"
udev_rules:
filename: "/etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules"
rules:
- comment:
- "Accelerated Networking on Azure exposes a new SRIOV interface to the VM."
- "This interface is transparently bonded to the synthetic interface,"
- "so NetworkManager should just ignore any SRIOV interfaces."
- rule:
- K: "SUBSYSTEM"
O: "=="
V: "net"
- K: "DRIVERS"
O: "=="
V: "hv_pci"
- K: "ACTION"
O: "=="
V: "add"
- K: "ENV"
A: "NM_UNMANAGED"
O: "="
V: "1"
systemd_dropin:
- unit: "nm-cloud-setup.service"
dropin: "10-rh-enable-for-azure.conf"
config:
service:
environment:
- key: "NM_CLOUD_SETUP_AZURE"
value: "yes"
default_target: "multi-user.target"
time_synchronization:
refclocks:
- driver:
name: "PHC"
path: "/dev/ptp_hyperv"
poll: 3
dpoll: -2
offset: 0.0
network_manager:
path: "/etc/NetworkManager/conf.d/99-azure-unmanaged-devices.conf"
settings:
keyfile:
"unmanaged-devices":
- "driver:mlx4_core"
- "driver:mlx5_core"
condition:
distro_name:
rhel:
gpgkey_files:
- "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
architecture:
x86_64:
kernel_options:
# common
- "ro"
- "loglevel=3"
- "nvme_core.io_timeout=240"
# x86
- "console=tty1"
- "console=ttyS0"
- "earlyprintk=ttyS0"
- "rootdelay=300"
aarch64:
kernel_options:
# common
- "ro"
- "loglevel=3"
- "nvme_core.io_timeout=240"
# aarch64
- "console=ttyAMA0"
partition_table:
<<: *default_partition_tables
package_sets:
@ -746,10 +751,10 @@ image_types:
x86_64:
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt"
-size: 68_719_476_736 # 64 * datasizes.GibiByte
+size: "64 GiB"
partitions:
- &azure_rhui_part_boot_efi
-size: 524_288_000 # 500 * datasizes.MebiByte
+size: "500 MiB"
type: *efi_system_partition_guid
UUID: *efi_system_partition_uuid
payload_type: "filesystem"
@ -762,7 +767,7 @@ image_types:
fstab_passno: 2
# NB: we currently don't support /boot on LVM
- &azure_rhui_part_boot
-size: 1_073_741_824 # 1 * datasizes.GibiByte
+size: "1 GiB"
type: *filesystem_data_guid
uuid: *data_partition_uuid
payload_type: "filesystem"
@ -772,7 +777,7 @@ image_types:
fstab_options: "defaults"
fstab_freq: 0
fstab_passno: 0
-- size: 2_097_152 # 2 * datasizes.MebiByte
+- size: "2 MiB"
bootable: true
type: *bios_boot_partition_guid
uuid: *bios_boot_partition_uuid
@ -784,7 +789,7 @@ image_types:
name: "rootvg"
description: "built with lvm2 and osbuild"
logical_volumes:
-- size: 1_073_741_824 # 1 * datasizes.GibiByte
+- size: "1 GiB"
name: "homelv"
payload_type: "filesystem"
payload:
@ -792,7 +797,7 @@ image_types:
label: "home"
mountpoint: "/home"
fstab_options: "defaults"
-- size: 2_147_483_648 # 2 * datasizes.GibiByte
+- size: "2 GiB"
name: "rootlv"
payload_type: "filesystem"
payload:
@ -800,7 +805,7 @@ image_types:
label: "root"
mountpoint: "/"
fstab_options: "defaults"
-- size: 2_147_483_648 # 2 * datasizes.GibiByte
+- size: "2 GiB"
name: "tmplv"
payload_type: "filesystem"
payload:
@ -808,7 +813,7 @@ image_types:
label: "tmp"
mountpoint: "/tmp"
fstab_options: "defaults"
-- size: 10_737_418_240 # 10 * datasizes.GibiByte
+- size: "10 GiB"
name: "usrlv"
payload_type: "filesystem"
payload:
@ -816,7 +821,7 @@ image_types:
label: "usr"
mountpoint: "/usr"
fstab_options: "defaults"
-- size: 10_737_418_240 # 10 * datasizes.GibiByte
+- size: "10 GiB"
name: "varlv"
payload_type: "filesystem"
payload:
@ -827,7 +832,7 @@ image_types:
aarch64:
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt"
-size: 68_719_476_736 # 64 * datasizes.GibiByte
+size: "64 GiB"
partitions:
- *azure_rhui_part_boot_efi
# NB: we currently don't support /boot on LVM
@ -1283,6 +1288,7 @@ image_types:
keyboard:
keymap: "us"
dnf_config:
options:
- config:
main:
ipresolve: "4"
@ -1415,3 +1421,69 @@ image_types:
rhel:
include:
- "insights-client"
"azure-cvm":
image_config:
<<: *azure_image_config
default_kernel: "kernel-uki-virt"
default_kernel_name: "kernel-uki-virt"
no_bls: true
cloud_init:
- filename: "91-azure_datasource.cfg"
config:
datasource:
azure:
apply_network_config: false
datasource_list:
- "Azure"
package_sets:
os:
- include:
- "@minimal-environment"
- "chrony"
- "cloud-init"
- "cloud-utils-growpart"
- "cryptsetup"
- "NetworkManager-cloud-setup"
- "openssh-server"
- "redhat-cloud-client-configuration"
- "redhat-release"
- "tpm2-tools"
- "WALinuxAgent"
- exclude:
- "dracut-config-rescue"
- "grubby"
- "iwl*"
# In EL9 we exclude linux-firmware* (note the asterisk).
# In EL10, packages in the minimal-environment group require
# linux-firmware-whence, so we only exclude the linux-firmware
# package here.
- "linux-firmware"
- "os-prober"
partition_table:
x86_64:
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt"
partitions:
- size: 264_241_152 # 252 MiB
type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid
payload_type: "filesystem"
payload:
type: vfat
uuid: *efi_filesystem_uuid
mountpoint: "/boot/efi"
label: "ESP"
fstab_options: "defaults,uid=0,gid=0,umask=077,shortname=winnt"
fstab_freq: 0
fstab_passno: 2
- size: 5_368_709_120 # 5 * datasizes.GibiByte,
type: *root_partition_x86_64_guid
payload_type: "filesystem"
payload:
type: "ext4"
label: "root"
mountpoint: "/"
fstab_options: "defaults"
fstab_freq: 0
fstab_passno: 0
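The new azure-cvm image type (osbuild/images#1318) still spells its partition sizes as raw byte counts, while the partition tables above now accept strings with units such as "1 MiB" or "2 GiB" (osbuild/images#1579). A minimal sketch of parsing such a value; the unit table and the parseSize name are assumptions, not the library's actual API:

    // Sketch: parse "200 MiB" / "2 GiB" style size strings into bytes.
    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
    )

    var units = map[string]uint64{
        "B":   1,
        "KiB": 1 << 10,
        "MiB": 1 << 20,
        "GiB": 1 << 30,
        "TiB": 1 << 40,
    }

    func parseSize(s string) (uint64, error) {
        fields := strings.Fields(s)
        if len(fields) != 2 {
            return 0, fmt.Errorf("expected \"<number> <unit>\", got %q", s)
        }
        n, err := strconv.ParseUint(fields[0], 10, 64)
        if err != nil {
            return 0, err
        }
        mult, ok := units[fields[1]]
        if !ok {
            return 0, fmt.Errorf("unknown unit %q in %q", fields[1], s)
        }
        return n * mult, nil
    }

For example, parseSize("200 MiB") yields 209715200, the literal the YAML used to spell out by hand.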

---

@ -72,12 +72,12 @@
uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0" uuid: "D209C89E-EA5E-4FBD-B161-B461CCE297E0"
type: "gpt" type: "gpt"
partitions: partitions:
- size: 1_048_576 # 1 MiB - size: "1 MiB"
bootable: true bootable: true
type: *bios_boot_partition_guid type: *bios_boot_partition_guid
uuid: *bios_boot_partition_uuid uuid: *bios_boot_partition_uuid
- &default_partition_table_part_efi - &default_partition_table_part_efi
size: 209_715_200 # 200 MiB size: "200 MiB"
type: *efi_system_partition_guid type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid uuid: *efi_system_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -90,7 +90,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 2 fstab_passno: 2
- &default_partition_table_part_boot - &default_partition_table_part_boot
size: 524_288_000 # 500 * MiB size: "500 MiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *data_partition_uuid uuid: *data_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"
@ -102,7 +102,7 @@
fstab_freq: 0 fstab_freq: 0
fstab_passno: 0 fstab_passno: 0
- &default_partition_table_part_root - &default_partition_table_part_root
size: 2_147_483_648 # 2 * datasizes.GibiByte, size: "2 GiB"
type: *filesystem_data_guid type: *filesystem_data_guid
uuid: *root_partition_uuid uuid: *root_partition_uuid
payload_type: "filesystem" payload_type: "filesystem"

---

@ -555,12 +555,12 @@
partitions:
- &default_partition_table_part_bios
-size: 1_048_576 # 1 MiB
+size: "1 MiB"
bootable: true
type: *bios_boot_partition_guid
uuid: *bios_boot_partition_uuid
- &default_partition_table_part_efi
-size: 104_857_600 # 100 MiB
+size: "100 MiB"
type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid
payload_type: "filesystem"
@ -573,7 +573,7 @@
fstab_freq: 0
fstab_passno: 2
- &default_partition_table_part_root
-size: 2_147_483_648 # 2 * datasizes.GibiByte,
+size: "2 GiB"
type: *filesystem_data_guid
uuid: *root_partition_uuid
payload_type: "filesystem"
@ -586,7 +586,7 @@
fstab_passno: 0
# ec2
- &ec2_partition_table_part_boot
-size: 1_073_741_824 # 1 GiB
+size: "1 GiB"
type: *filesystem_data_guid
uuid: *data_partition_uuid
payload_type: "filesystem"
@ -598,7 +598,7 @@
fstab_passno: 0
- &ec2_partition_table_part_boot512
<<: *ec2_partition_table_part_boot
-size: 536_870_912 # 512MiB
+size: "512 MiB"
default_partition_tables: &default_partition_tables
x86_64:
@ -618,11 +618,11 @@
uuid: "0x14fc63d2"
type: "dos"
partitions:
-- size: 4_194_304 # 4 MiB
+- size: "4 MiB"
bootable: true
type: *prep_partition_dosid
- &default_partition_table_part_root_ppc64le
-size: 2_147_483_648 # 2 * datasizes.GibiByte,
+size: "2 GiB"
payload_type: "filesystem"
payload:
<<: *default_partition_table_part_root_payload
@ -641,7 +641,7 @@
partitions:
- *default_partition_table_part_bios
- &edge_base_partition_table_part_efi
-size: 133_169_152 # 127 MiB
+size: "127 MiB"
type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid
payload_type: "filesystem"
@ -654,7 +654,7 @@
fstab_freq: 0
fstab_passno: 2
- &edge_base_partition_table_part_boot
-size: 402_653_184 # 384 * MiB
+size: "384 MiB"
type: *filesystem_data_guid
uuid: *data_partition_uuid
payload_type: "filesystem"
@ -666,7 +666,7 @@
fstab_freq: 1
fstab_passno: 1
- &edge_base_partition_table_part_root
-size: 2_147_483_648 # 2 * datasizes.GibiByte,
+size: "2 GiB"
type: *filesystem_data_guid
uuid: *root_partition_uuid
payload_type: "luks"
@ -707,7 +707,7 @@
partitions:
- *default_partition_table_part_bios
- &ec2_partition_table_part_efi
-size: 209_715_200 # 200 MiB
+size: "200 MiB"
type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid
payload_type: "filesystem"
@ -720,7 +720,7 @@
fstab_freq: 0
fstab_passno: 2
- &ec2_partition_table_part_root
-size: 2_147_483_648 # 2 * datasizes.GibiByte,
+size: "2 GiB"
type: *filesystem_data_guid
uuid: *root_partition_uuid
payload_type: "filesystem"

---

@ -391,12 +391,12 @@
partitions:
- &default_partition_table_part_bios
-size: 1_048_576 # 1 MiB
+size: "1 MiB"
bootable: true
type: *bios_boot_partition_guid
uuid: *bios_boot_partition_uuid
- &default_partition_table_part_efi
-size: 209_715_200 # 200 MiB
+size: "200 MiB"
type: *efi_system_partition_guid
uuid: *efi_system_partition_uuid
payload_type: "filesystem"
@ -409,7 +409,7 @@
fstab_freq: 0
fstab_passno: 2
- &default_partition_table_part_boot
-size: 1_073_741_824 # 1 GiB
+size: "1 GiB"
type: *xboot_ldr_partition_guid
uuid: *data_partition_uuid
payload_type: "filesystem"
@ -421,7 +421,7 @@
fstab_freq: 0
fstab_passno: 0
- &default_partition_table_part_root
-size: 2_147_483_648 # 2 * datasizes.GibiByte,
+size: "2 GiB"
type: *filesystem_data_guid
uuid: *root_partition_uuid
payload_type: "filesystem"
@ -434,22 +434,22 @@
fstab_passno: 0
# ppc64
- &default_partition_table_part_bios_ppc64le
-size: 4_194_304 # 4 MiB
+size: "4 MiB"
bootable: true
type: *prep_partition_dosid
- &default_partition_table_part_boot_ppc64le
-size: 1_073_741_824 # 1 GiB
+size: "1 GiB"
payload_type: "filesystem"
payload:
<<: *default_partition_table_part_boot_payload
- &default_partition_table_part_boot512_ppc64le
<<: *default_partition_table_part_boot_ppc64le
-size: 524_288_000 # 500 MiB
+size: "500 MiB"
- &default_partition_table_part_boot600_ppc64le
<<: *default_partition_table_part_boot_ppc64le
-size: 629_145_600 # 600 MiB
+size: "600 MiB"
- &default_partition_table_part_root_ppc64le
-size: 2_147_483_648 # 2 * datasizes.GibiByte,
+size: "2 GiB"
payload_type: "filesystem"
payload:
<<: *default_partition_table_part_root_payload
@ -1182,3 +1182,25 @@ image_types:
*edge_commit_x86_64_pkgset
aarch64:
*edge_commit_aarch64_pkgset
"azure-cvm":
package_sets:
os:
- include:
- "@minimal-environment"
- "chrony"
- "cloud-init"
- "cloud-utils-growpart"
- "cryptsetup"
- "NetworkManager-cloud-setup"
- "openssh-server"
- "redhat-cloud-client-configuration"
- "redhat-release"
- "tpm2-tools"
- "WALinuxAgent"
- exclude:
- "dracut-config-rescue"
- "iwl*"
- "linux-firmware*"
- "grubby"
- "os-prober"

---

@ -1,290 +0,0 @@
package fedora
import (
"errors"
"fmt"
"sort"
"strconv"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/arch"
"github.com/osbuild/images/pkg/customizations/oscap"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distro/defs"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/runner"
)
const (
// package set names
// main/common os image package set name
osPkgsKey = "os"
// container package set name
containerPkgsKey = "container"
// installer package set name
installerPkgsKey = "installer"
// blueprint package set name
blueprintPkgsKey = "blueprint"
)
var (
oscapProfileAllowList = []oscap.Profile{
oscap.Ospp,
oscap.PciDss,
oscap.Standard,
}
)
type distribution struct {
name string
product string
osVersion string
releaseVersion string
modulePlatformID string
ostreeRefTmpl string
runner runner.Runner
arches map[string]distro.Arch
defaultImageConfig *distro.ImageConfig
}
func getISOLabelFunc(variant string) isoLabelFunc {
const ISO_LABEL = "%s-%s-%s-%s"
return func(t *imageType) string {
return fmt.Sprintf(ISO_LABEL, t.Arch().Distro().Product(), t.Arch().Distro().OsVersion(), variant, t.Arch().Name())
}
}
func getDistro(version int) distribution {
if version < 0 {
panic("Invalid Fedora version (must be positive)")
}
nameVer := fmt.Sprintf("fedora-%d", version)
return distribution{
name: nameVer,
product: "Fedora",
osVersion: strconv.Itoa(version),
releaseVersion: strconv.Itoa(version),
modulePlatformID: fmt.Sprintf("platform:f%d", version),
ostreeRefTmpl: fmt.Sprintf("fedora/%d/%%s/iot", version),
runner: &runner.Fedora{Version: uint64(version)},
defaultImageConfig: common.Must(defs.DistroImageConfig(nameVer)),
}
}
func (d *distribution) Name() string {
return d.name
}
func (d *distribution) Codename() string {
return "" // Fedora does not use distro codename
}
func (d *distribution) Releasever() string {
return d.releaseVersion
}
func (d *distribution) OsVersion() string {
return d.releaseVersion
}
func (d *distribution) Product() string {
return d.product
}
func (d *distribution) ModulePlatformID() string {
return d.modulePlatformID
}
func (d *distribution) OSTreeRef() string {
return d.ostreeRefTmpl
}
func (d *distribution) ListArches() []string {
archNames := make([]string, 0, len(d.arches))
for name := range d.arches {
archNames = append(archNames, name)
}
sort.Strings(archNames)
return archNames
}
func (d *distribution) GetArch(name string) (distro.Arch, error) {
arch, exists := d.arches[name]
if !exists {
return nil, errors.New("invalid architecture: " + name)
}
return arch, nil
}
func (d *distribution) addArches(arches ...architecture) {
if d.arches == nil {
d.arches = map[string]distro.Arch{}
}
// Do not make copies of architectures, as opposed to image types,
// because architecture definitions are not used by more than a single
// distro definition.
for idx := range arches {
d.arches[arches[idx].name] = &arches[idx]
}
}
func (d *distribution) getDefaultImageConfig() *distro.ImageConfig {
return d.defaultImageConfig
}
type architecture struct {
distro *distribution
name string
imageTypes map[string]distro.ImageType
imageTypeAliases map[string]string
}
func (a *architecture) Name() string {
return a.name
}
func (a *architecture) ListImageTypes() []string {
itNames := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
itNames = append(itNames, name)
}
sort.Strings(itNames)
return itNames
}
func (a *architecture) GetImageType(name string) (distro.ImageType, error) {
t, exists := a.imageTypes[name]
if !exists {
aliasForName, exists := a.imageTypeAliases[name]
if !exists {
return nil, errors.New("invalid image type: " + name)
}
t, exists = a.imageTypes[aliasForName]
if !exists {
panic(fmt.Sprintf("image type '%s' is an alias to a non-existing image type '%s'", name, aliasForName))
}
}
return t, nil
}
func (a *architecture) addImageTypes(platform platform.Platform, imageTypes ...imageType) {
if a.imageTypes == nil {
a.imageTypes = map[string]distro.ImageType{}
}
for idx := range imageTypes {
it := imageTypes[idx]
it.arch = a
it.platform = platform
a.imageTypes[it.name] = &it
for _, alias := range it.nameAliases {
if a.imageTypeAliases == nil {
a.imageTypeAliases = map[string]string{}
}
if existingAliasFor, exists := a.imageTypeAliases[alias]; exists {
panic(fmt.Sprintf("image type alias '%s' for '%s' is already defined for another image type '%s'", alias, it.name, existingAliasFor))
}
a.imageTypeAliases[alias] = it.name
}
}
}
func (a *architecture) Distro() distro.Distro {
return a.distro
}
func newDistro(version int) distro.Distro {
rd := getDistro(version)
// XXX: generate architecture automatically from the imgType yaml
x86_64 := architecture{
name: arch.ARCH_X86_64.String(),
distro: &rd,
}
aarch64 := architecture{
name: arch.ARCH_AARCH64.String(),
distro: &rd,
}
ppc64le := architecture{
distro: &rd,
name: arch.ARCH_PPC64LE.String(),
}
s390x := architecture{
distro: &rd,
name: arch.ARCH_S390X.String(),
}
riscv64 := architecture{
name: arch.ARCH_RISCV64.String(),
distro: &rd,
}
// XXX: move all image types should to YAML
its, err := defs.ImageTypes(rd.name)
if err != nil {
panic(err)
}
for _, imgTypeYAML := range its {
// use as marker for images that are not converted to
// YAML yet
if imgTypeYAML.Filename == "" {
continue
}
it := newImageTypeFrom(rd, imgTypeYAML)
for _, pl := range imgTypeYAML.Platforms {
switch pl.Arch {
case arch.ARCH_X86_64:
x86_64.addImageTypes(&pl, it)
case arch.ARCH_AARCH64:
aarch64.addImageTypes(&pl, it)
case arch.ARCH_PPC64LE:
ppc64le.addImageTypes(&pl, it)
case arch.ARCH_S390X:
s390x.addImageTypes(&pl, it)
case arch.ARCH_RISCV64:
riscv64.addImageTypes(&pl, it)
default:
err := fmt.Errorf("unsupported arch: %v", pl.Arch)
panic(err)
}
}
}
rd.addArches(x86_64, aarch64, ppc64le, s390x, riscv64)
return &rd
}
func ParseID(idStr string) (*distro.ID, error) {
id, err := distro.ParseID(idStr)
if err != nil {
return nil, err
}
if id.Name != "fedora" {
return nil, fmt.Errorf("invalid distro name: %s", id.Name)
}
if id.MinorVersion != -1 {
return nil, fmt.Errorf("fedora distro does not support minor versions")
}
return id, nil
}
func DistroFactory(idStr string) distro.Distro {
id, err := ParseID(idStr)
if err != nil {
return nil
}
return newDistro(id.MajorVersion)
}

---

@ -1,80 +0,0 @@
package fedora
import (
"fmt"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distro/defs"
"github.com/osbuild/images/pkg/rpmmd"
)
func packageSetLoader(t *imageType) (map[string]rpmmd.PackageSet, error) {
return defs.PackageSets(t, VersionReplacements())
}
func imageConfig(d distribution, imageType string) *distro.ImageConfig {
// arch is currently not used in fedora
arch := ""
return common.Must(defs.ImageConfig(d.name, arch, imageType, VersionReplacements()))
}
func installerConfig(d distribution, imageType string) *distro.InstallerConfig {
// arch is currently not used in fedora
arch := ""
return common.Must(defs.InstallerConfig(d.name, arch, imageType, VersionReplacements()))
}
func newImageTypeFrom(d distribution, imgYAML defs.ImageTypeYAML) imageType {
it := imageType{
name: imgYAML.Name(),
nameAliases: imgYAML.NameAliases,
filename: imgYAML.Filename,
compression: imgYAML.Compression,
mimeType: imgYAML.MimeType,
bootable: imgYAML.Bootable,
bootISO: imgYAML.BootISO,
rpmOstree: imgYAML.RPMOSTree,
isoLabel: getISOLabelFunc(imgYAML.ISOLabel),
defaultSize: imgYAML.DefaultSize,
buildPipelines: imgYAML.BuildPipelines,
payloadPipelines: imgYAML.PayloadPipelines,
exports: imgYAML.Exports,
requiredPartitionSizes: imgYAML.RequiredPartitionSizes,
environment: &imgYAML.Environment,
}
// XXX: make this a helper on imgYAML()
it.defaultImageConfig = imageConfig(d, imgYAML.Name())
it.defaultInstallerConfig = installerConfig(d, imgYAML.Name())
it.packageSets = packageSetLoader
switch imgYAML.Image {
case "disk":
it.image = diskImage
case "container":
it.image = containerImage
case "image_installer":
it.image = imageInstallerImage
case "live_installer":
it.image = liveInstallerImage
case "bootable_container":
it.image = bootableContainerImage
case "iot":
it.image = iotImage
case "iot_commit":
it.image = iotCommitImage
case "iot_container":
it.image = iotContainerImage
case "iot_installer":
it.image = iotInstallerImage
case "iot_simplified_installer":
it.image = iotSimplifiedInstallerImage
case "tar":
it.image = tarImage
default:
err := fmt.Errorf("unknown image func: %v for %v", imgYAML.Image, imgYAML.Name())
panic(err)
}
return it
}

---

@ -1,15 +0,0 @@
package fedora
const VERSION_BRANCHED = "43"
const VERSION_RAWHIDE = "43"
// Fedora 43 and later we reset the machine-id file to align ourselves with the
// other Fedora variants.
const VERSION_FIRSTBOOT = "43"
func VersionReplacements() map[string]string {
return map[string]string{
"VERSION_BRANCHED": VERSION_BRANCHED,
"VERSION_RAWHIDE": VERSION_RAWHIDE,
}
}

---

@ -0,0 +1,238 @@
package generic
import (
"bytes"
"errors"
"fmt"
"sort"
"text/template"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distro/defs"
"github.com/osbuild/images/pkg/platform"
)
const (
// package set names
// main/common os image package set name
osPkgsKey = "os"
// container package set name
containerPkgsKey = "container"
// installer package set name
installerPkgsKey = "installer"
// blueprint package set name
blueprintPkgsKey = "blueprint"
)
var (
ErrDistroNotFound = errors.New("distribution not found")
)
// distribution implements the distro.Distro interface
var _ = distro.Distro(&distribution{})
type distribution struct {
defs.DistroYAML
arches map[string]*architecture
// XXX: move into defs.DistroYAML? the downside of doing this
// is that we would have to duplicate the default image config
// accross the centos/alma/rhel distros.yaml, otherwise we
// just load it from the imagetypes file/dir and it is natually
// "in-sync"
defaultImageConfig *distro.ImageConfig
}
func (d *distribution) getISOLabelFunc(isoLabel string) isoLabelFunc {
return func(t *imageType) string {
type inputs struct {
Product string
OsVersion string
Arch string
ImgTypeLabel string
}
templ := common.Must(template.New("iso-label").Parse(d.DistroYAML.ISOLabelTmpl))
var buf bytes.Buffer
err := templ.Execute(&buf, inputs{
Product: t.Arch().Distro().Product(),
OsVersion: t.Arch().Distro().OsVersion(),
Arch: t.Arch().Name(),
ImgTypeLabel: isoLabel,
})
if err != nil {
// XXX: cleanup isoLabelFunc to allow error
panic(err)
}
return buf.String()
}
}
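getISOLabelFunc above fills the distro's ISOLabelTmpl with text/template. A standalone sketch of the same rendering step, using the field names from the inputs struct; the template string and values are made up for illustration:

    // Sketch: render an ISO label template the way getISOLabelFunc does.
    package main

    import (
        "bytes"
        "fmt"
        "text/template"
    )

    func main() {
        // hypothetical template; real ones come from distros.yaml (ISOLabelTmpl)
        const tmpl = "{{.Product}}-{{.OsVersion}}-{{.ImgTypeLabel}}-{{.Arch}}"

        type inputs struct {
            Product, OsVersion, Arch, ImgTypeLabel string
        }

        t := template.Must(template.New("iso-label").Parse(tmpl))
        var buf bytes.Buffer
        if err := t.Execute(&buf, inputs{
            Product: "Fedora", OsVersion: "41", Arch: "x86_64", ImgTypeLabel: "BaseOS",
        }); err != nil {
            panic(err)
        }
        fmt.Println(buf.String()) // Fedora-41-BaseOS-x86_64
    }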
func newDistro(nameVer string) (distro.Distro, error) {
distroYAML, err := defs.Distro(nameVer)
if err != nil {
return nil, err
}
if distroYAML == nil {
return nil, nil
}
rd := &distribution{
DistroYAML: *distroYAML,
defaultImageConfig: common.Must(defs.DistroImageConfig(nameVer)),
arches: make(map[string]*architecture),
}
its, err := defs.ImageTypes(rd.Name())
if err != nil {
return nil, err
}
for _, imgTypeYAML := range its {
// use as marker for images that are not converted to
// YAML yet
if imgTypeYAML.Filename == "" {
continue
}
for _, pl := range imgTypeYAML.Platforms {
ar, ok := rd.arches[pl.Arch.String()]
if !ok {
ar = newArchitecture(rd, pl.Arch.String())
rd.arches[pl.Arch.String()] = ar
}
it := newImageTypeFrom(rd, ar, imgTypeYAML)
if err := ar.addImageType(&pl, it); err != nil {
return nil, err
}
}
}
return rd, nil
}
func (d *distribution) Name() string {
return d.DistroYAML.Name
}
func (d *distribution) Codename() string {
return d.DistroYAML.Codename
}
func (d *distribution) Releasever() string {
return d.DistroYAML.ReleaseVersion
}
func (d *distribution) OsVersion() string {
return d.DistroYAML.OsVersion
}
func (d *distribution) Product() string {
return d.DistroYAML.Product
}
func (d *distribution) ModulePlatformID() string {
return d.DistroYAML.ModulePlatformID
}
func (d *distribution) OSTreeRef() string {
return d.DistroYAML.OSTreeRefTmpl
}
func (d *distribution) ListArches() []string {
archNames := make([]string, 0, len(d.arches))
for name := range d.arches {
archNames = append(archNames, name)
}
sort.Strings(archNames)
return archNames
}
func (d *distribution) GetArch(name string) (distro.Arch, error) {
arch, exists := d.arches[name]
if !exists {
return nil, fmt.Errorf("invalid architecture: %v", name)
}
return arch, nil
}
// architecture implements the distro.Arch interface
var _ = distro.Arch(&architecture{})
type architecture struct {
distro *distribution
name string
imageTypes map[string]distro.ImageType
imageTypeAliases map[string]string
}
func newArchitecture(rd *distribution, name string) *architecture {
return &architecture{
distro: rd,
name: name,
imageTypes: make(map[string]distro.ImageType),
imageTypeAliases: make(map[string]string),
}
}
func (a *architecture) Name() string {
return a.name
}
func (a *architecture) ListImageTypes() []string {
itNames := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
itNames = append(itNames, name)
}
sort.Strings(itNames)
return itNames
}
func (a *architecture) GetImageType(name string) (distro.ImageType, error) {
t, exists := a.imageTypes[name]
if !exists {
aliasForName, exists := a.imageTypeAliases[name]
if !exists {
return nil, fmt.Errorf("invalid image type: %v", name)
}
t, exists = a.imageTypes[aliasForName]
if !exists {
panic(fmt.Sprintf("image type '%s' is an alias to a non-existing image type '%s'", name, aliasForName))
}
}
return t, nil
}
func (a *architecture) addImageType(platform platform.Platform, it imageType) error {
it.arch = a
it.platform = platform
a.imageTypes[it.Name()] = &it
for _, alias := range it.ImageTypeYAML.NameAliases {
if a.imageTypeAliases == nil {
a.imageTypeAliases = map[string]string{}
}
if existingAliasFor, exists := a.imageTypeAliases[alias]; exists {
return fmt.Errorf("image type alias '%s' for '%s' is already defined for another image type '%s'", alias, it.Name(), existingAliasFor)
}
a.imageTypeAliases[alias] = it.Name()
}
return nil
}
func (a *architecture) Distro() distro.Distro {
return a.distro
}
func DistroFactory(idStr string) distro.Distro {
distro, err := newDistro(idStr)
if errors.Is(err, ErrDistroNotFound) {
return nil
}
if err != nil {
panic(err)
}
return distro
}
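With the generic package in place, callers resolve a distro name-version string through DistroFactory and then walk the distro.Distro, Arch, and ImageType interfaces shown above. A hedged usage sketch; the import path is assumed to be pkg/distro/generic, and the distro, arch, and image type names are examples with error handling kept minimal:

    // Sketch: resolve a distro via the generic factory and look up an image type.
    // Assumes the osbuild/images module is on the import path.
    package main

    import (
        "fmt"

        "github.com/osbuild/images/pkg/distro/generic"
    )

    func main() {
        d := generic.DistroFactory("fedora-41") // example name-version string
        if d == nil {
            panic("unknown distro")
        }
        a, err := d.GetArch("x86_64")
        if err != nil {
            panic(err)
        }
        fmt.Println(a.ListImageTypes()) // sorted list of image type names
        it, err := a.GetImageType("qcow2") // example image type
        if err != nil {
            panic(err)
        }
        fmt.Println(it.Name())
    }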

---

@ -0,0 +1,48 @@
package generic
import (
"fmt"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro/defs"
)
func newImageTypeFrom(d *distribution, ar *architecture, imgYAML defs.ImageTypeYAML) imageType {
typName := imgYAML.Name()
it := imageType{
ImageTypeYAML: imgYAML,
isoLabel: d.getISOLabelFunc(imgYAML.ISOLabel),
}
it.defaultImageConfig = common.Must(defs.ImageConfig(d.Name(), ar.name, typName))
it.defaultInstallerConfig = common.Must(defs.InstallerConfig(d.Name(), ar.name, typName))
switch imgYAML.Image {
case "disk":
it.image = diskImage
case "container":
it.image = containerImage
case "image_installer":
it.image = imageInstallerImage
case "live_installer":
it.image = liveInstallerImage
case "bootable_container":
it.image = bootableContainerImage
case "iot":
it.image = iotImage
case "iot_commit":
it.image = iotCommitImage
case "iot_container":
it.image = iotContainerImage
case "iot_installer":
it.image = iotInstallerImage
case "iot_simplified_installer":
it.image = iotSimplifiedInstallerImage
case "tar":
it.image = tarImage
default:
err := fmt.Errorf("unknown image func: %v for %v", imgYAML.Image, imgYAML.Name())
panic(err)
}
return it
}
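newImageTypeFrom dispatches on the image: field of the YAML definition with a switch. The same mapping could be kept in a lookup table, which concentrates the unknown-name failure in one place; a sketch of that alternative, not the code the package actually uses:

    // Sketch: table-driven alternative to the image-func switch above.
    package sketch

    // imageFunc stands in for the real image-constructor signature.
    type imageFunc func()

    var imageFuncs = map[string]imageFunc{
        "disk":      func() { /* diskImage */ },
        "container": func() { /* containerImage */ },
        "tar":       func() { /* tarImage */ },
        // remaining image kinds elided
    }

    func lookupImageFunc(name string) (imageFunc, bool) {
        f, ok := imageFuncs[name]
        return f, ok
    }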

---

@ -1,10 +1,9 @@
-package fedora
+package generic
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/arch"
"github.com/osbuild/images/pkg/blueprint"
@ -25,19 +24,12 @@ import (
"github.com/osbuild/images/pkg/rpmmd"
)
// HELPERS func osCustomizations(t *imageType, osPackageSet rpmmd.PackageSet, containers []container.SourceSpec, c *blueprint.Customizations) (manifest.OSCustomizations, error) {
func osCustomizations(
t *imageType,
osPackageSet rpmmd.PackageSet,
containers []container.SourceSpec,
c *blueprint.Customizations) (manifest.OSCustomizations, error) {
imageConfig := t.getDefaultImageConfig() imageConfig := t.getDefaultImageConfig()
osc := manifest.OSCustomizations{} osc := manifest.OSCustomizations{}
if t.bootable || t.rpmOstree { if t.ImageTypeYAML.Bootable || t.ImageTypeYAML.RPMOSTree {
osc.KernelName = c.GetKernel().Name osc.KernelName = c.GetKernel().Name
var kernelOptions []string var kernelOptions []string
@ -68,11 +60,13 @@ func osCustomizations(
osc.ExcludeDocs = *imageConfig.ExcludeDocs osc.ExcludeDocs = *imageConfig.ExcludeDocs
} }
if !t.bootISO { if !t.ImageTypeYAML.BootISO {
// don't put users and groups in the payload of an installer // don't put users and groups in the payload of an installer
// add them via kickstart instead // add them via kickstart instead
osc.Groups = users.GroupsFromBP(c.GetGroups()) osc.Groups = users.GroupsFromBP(c.GetGroups())
osc.Users = users.UsersFromBP(c.GetUsers()) osc.Users = users.UsersFromBP(c.GetUsers())
osc.Users = append(osc.Users, imageConfig.Users...)
} }
osc.EnabledServices = imageConfig.EnabledServices osc.EnabledServices = imageConfig.EnabledServices
@ -159,7 +153,7 @@ func osCustomizations(
// deployment, rather than the commit. Therefore the containers need to be // deployment, rather than the commit. Therefore the containers need to be
// stored in a different location, like `/usr/share`, and the container // stored in a different location, like `/usr/share`, and the container
// storage engine configured accordingly. // storage engine configured accordingly.
if t.rpmOstree && len(containers) > 0 { if t.ImageTypeYAML.RPMOSTree && len(containers) > 0 {
storagePath := "/usr/share/containers/storage" storagePath := "/usr/share/containers/storage"
osc.ContainersStorage = &storagePath osc.ContainersStorage = &storagePath
} }
@ -194,7 +188,7 @@ func osCustomizations(
} }
if oscapConfig := c.GetOpenSCAP(); oscapConfig != nil { if oscapConfig := c.GetOpenSCAP(); oscapConfig != nil {
if t.rpmOstree { if t.ImageTypeYAML.RPMOSTree {
panic("unexpected oscap options for ostree image type") panic("unexpected oscap options for ostree image type")
} }
@ -228,7 +222,7 @@ func osCustomizations(
osc.Tmpfilesd = imageConfig.Tmpfilesd osc.Tmpfilesd = imageConfig.Tmpfilesd
osc.PamLimitsConf = imageConfig.PamLimitsConf osc.PamLimitsConf = imageConfig.PamLimitsConf
osc.Sysctld = imageConfig.Sysctld osc.Sysctld = imageConfig.Sysctld
osc.DNFConfig = imageConfig.DNFConfigOptions(t.arch.distro.osVersion) osc.DNFConfig = imageConfig.DNFConfigOptions(t.arch.distro.OsVersion())
osc.SshdConfig = imageConfig.SshdConfig osc.SshdConfig = imageConfig.SshdConfig
osc.AuthConfig = imageConfig.Authconfig osc.AuthConfig = imageConfig.Authconfig
osc.PwQuality = imageConfig.PwQuality osc.PwQuality = imageConfig.PwQuality
@ -261,7 +255,7 @@ func ostreeDeploymentCustomizations(
t *imageType, t *imageType,
c *blueprint.Customizations) (manifest.OSTreeDeploymentCustomizations, error) { c *blueprint.Customizations) (manifest.OSTreeDeploymentCustomizations, error) {
if !t.rpmOstree || !t.bootable { if !t.ImageTypeYAML.RPMOSTree || !t.ImageTypeYAML.Bootable {
return manifest.OSTreeDeploymentCustomizations{}, fmt.Errorf("ostree deployment customizations are only supported for bootable rpm-ostree images") return manifest.OSTreeDeploymentCustomizations{}, fmt.Errorf("ostree deployment customizations are only supported for bootable rpm-ostree images")
} }
@ -349,9 +343,9 @@ func diskImage(workload workload.Workload,
return nil, err return nil, err
} }
img.Environment = t.environment img.Environment = &t.ImageTypeYAML.Environment
img.Workload = workload img.Workload = workload
img.Compression = t.compression img.Compression = t.ImageTypeYAML.Compression
if bp.Minimal { if bp.Minimal {
// Disable weak dependencies if the 'minimal' option is enabled // Disable weak dependencies if the 'minimal' option is enabled
img.OSCustomizations.InstallWeakDeps = false img.OSCustomizations.InstallWeakDeps = false
@ -385,7 +379,7 @@ func tarImage(workload workload.Workload,
return nil, err return nil, err
} }
img.Environment = t.environment img.Environment = &t.ImageTypeYAML.Environment
img.Workload = workload img.Workload = workload
img.Filename = t.Filename() img.Filename = t.Filename()
@ -410,7 +404,7 @@ func containerImage(workload workload.Workload,
return nil, err return nil, err
} }
img.Environment = t.environment img.Environment = &t.ImageTypeYAML.Environment
img.Workload = workload img.Workload = workload
img.Filename = t.Filename() img.Filename = t.Filename()
@ -434,11 +428,11 @@ func liveInstallerImage(workload workload.Workload,
d := t.arch.distro d := t.arch.distro
img.Product = d.product img.Product = d.Product()
img.Variant = "Workstation" img.Variant = "Workstation"
img.OSVersion = d.osVersion img.OSVersion = d.OsVersion()
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion) img.Release = fmt.Sprintf("%s %s", d.DistroYAML.Product, d.OsVersion())
img.Preview = common.VersionGreaterThanOrEqual(img.OSVersion, VERSION_BRANCHED) img.Preview = d.DistroYAML.Preview
var err error var err error
img.ISOLabel, err = t.ISOLabel() img.ISOLabel, err = t.ISOLabel()
@ -461,10 +455,12 @@ func liveInstallerImage(workload workload.Workload,
if err != nil { if err != nil {
return nil, err return nil, err
} }
if installerConfig != nil { if installerConfig != nil {
img.AdditionalDracutModules = append(img.AdditionalDracutModules, installerConfig.AdditionalDracutModules...) img.AdditionalDracutModules = append(img.AdditionalDracutModules, installerConfig.AdditionalDracutModules...)
img.AdditionalDrivers = append(img.AdditionalDrivers, installerConfig.AdditionalDrivers...) img.AdditionalDrivers = append(img.AdditionalDrivers, installerConfig.AdditionalDrivers...)
if installerConfig.SquashfsRootfs != nil && *installerConfig.SquashfsRootfs {
img.RootfsType = manifest.SquashfsRootfs
}
} }
imgConfig := t.getDefaultImageConfig() imgConfig := t.getDefaultImageConfig()
@ -534,22 +530,19 @@ func imageInstallerImage(workload workload.Workload,
if installerConfig != nil { if installerConfig != nil {
img.AdditionalDracutModules = append(img.AdditionalDracutModules, installerConfig.AdditionalDracutModules...) img.AdditionalDracutModules = append(img.AdditionalDracutModules, installerConfig.AdditionalDracutModules...)
img.AdditionalDrivers = append(img.AdditionalDrivers, installerConfig.AdditionalDrivers...) img.AdditionalDrivers = append(img.AdditionalDrivers, installerConfig.AdditionalDrivers...)
if installerConfig.SquashfsRootfs != nil && *installerConfig.SquashfsRootfs {
img.RootfsType = manifest.SquashfsRootfs
}
} }
// On Fedora anaconda needs dbus-broker, but isn't added when dracut runs.
img.AdditionalDracutModules = append(img.AdditionalDracutModules, "dbus-broker")
d := t.arch.distro d := t.arch.distro
img.Product = d.product img.Product = d.DistroYAML.Product
// We don't know the variant that goes into the OS pipeline that gets installed img.OSVersion = d.OsVersion()
img.Variant = "Unknown" img.Release = fmt.Sprintf("%s %s", d.DistroYAML.Product, d.OsVersion())
img.Variant = t.Variant
img.OSVersion = d.osVersion img.Preview = d.DistroYAML.Preview
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Preview = common.VersionGreaterThanOrEqual(img.OSVersion, VERSION_BRANCHED)
img.ISOLabel, err = t.ISOLabel() img.ISOLabel, err = t.ISOLabel()
if err != nil { if err != nil {
@ -609,10 +602,10 @@ func iotCommitImage(workload workload.Workload,
}, },
} }
img.Environment = t.environment img.Environment = &t.ImageTypeYAML.Environment
img.Workload = workload img.Workload = workload
img.OSTreeParent = parentCommit img.OSTreeParent = parentCommit
img.OSVersion = d.osVersion img.OSVersion = d.OsVersion()
img.Filename = t.Filename() img.Filename = t.Filename()
img.InstallWeakDeps = false img.InstallWeakDeps = false
@ -640,15 +633,19 @@ func bootableContainerImage(workload workload.Workload,
return nil, err return nil, err
} }
img.Environment = t.environment img.Environment = &t.ImageTypeYAML.Environment
img.Workload = workload img.Workload = workload
img.OSTreeParent = parentCommit img.OSTreeParent = parentCommit
img.OSVersion = d.osVersion img.OSVersion = d.OsVersion()
img.Filename = t.Filename() img.Filename = t.Filename()
img.InstallWeakDeps = false img.InstallWeakDeps = false
img.BootContainer = true img.BootContainer = true
id, err := distro.ParseID(d.Name())
if err != nil {
return nil, err
}
img.BootcConfig = &bootc.Config{ img.BootcConfig = &bootc.Config{
Filename: "20-fedora.toml", Filename: fmt.Sprintf("20-%s.toml", id.Name),
RootFilesystemType: "ext4", RootFilesystemType: "ext4",
} }
@ -691,10 +688,10 @@ func iotContainerImage(workload workload.Workload,
} }
img.ContainerLanguage = img.OSCustomizations.Language img.ContainerLanguage = img.OSCustomizations.Language
img.Environment = t.environment img.Environment = &t.ImageTypeYAML.Environment
img.Workload = workload img.Workload = workload
img.OSTreeParent = parentCommit img.OSTreeParent = parentCommit
img.OSVersion = d.osVersion img.OSVersion = d.OsVersion()
img.ExtraContainerPackages = packageSets[containerPkgsKey] img.ExtraContainerPackages = packageSets[containerPkgsKey]
img.Filename = t.Filename() img.Filename = t.Filename()
@ -728,8 +725,8 @@ func iotInstallerImage(workload workload.Workload,
return nil, err return nil, err
} }
img.Kickstart.OSTree = &kickstart.OSTree{ img.Kickstart.OSTree = &kickstart.OSTree{
OSName: "fedora-iot", OSName: t.OSTree.Name,
Remote: "fedora-iot", Remote: t.OSTree.Remote,
} }
img.Kickstart.Path = osbuild.KickstartPathOSBuild img.Kickstart.Path = osbuild.KickstartPathOSBuild
img.Kickstart.Language, img.Kickstart.Keyboard = customizations.GetPrimaryLocale() img.Kickstart.Language, img.Kickstart.Keyboard = customizations.GetPrimaryLocale()
@ -760,16 +757,19 @@ func iotInstallerImage(workload workload.Workload,
 	if installerConfig != nil {
 		img.AdditionalDracutModules = append(img.AdditionalDracutModules, installerConfig.AdditionalDracutModules...)
 		img.AdditionalDrivers = append(img.AdditionalDrivers, installerConfig.AdditionalDrivers...)
+		if installerConfig.SquashfsRootfs != nil && *installerConfig.SquashfsRootfs {
+			img.RootfsType = manifest.SquashfsRootfs
+		}
 	}
 	// On Fedora anaconda needs dbus-broker, but isn't added when dracut runs.
 	img.AdditionalDracutModules = append(img.AdditionalDracutModules, "dbus-broker")
-	img.Product = d.product
+	img.Product = d.DistroYAML.Product
 	img.Variant = "IoT"
-	img.OSVersion = d.osVersion
-	img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
-	img.Preview = common.VersionGreaterThanOrEqual(img.OSVersion, VERSION_BRANCHED)
+	img.OSVersion = d.OsVersion()
+	img.Release = fmt.Sprintf("%s %s", d.DistroYAML.Product, d.OsVersion())
+	img.Preview = d.DistroYAML.Preview
 	img.ISOLabel, err = t.ISOLabel()
 	if err != nil {
@ -821,9 +821,9 @@ func iotImage(workload workload.Workload,
 	img.Workload = workload
 	img.Remote = ostree.Remote{
-		Name: "fedora-iot",
+		Name: t.OSTree.Remote,
 	}
-	img.OSName = "fedora-iot"
+	img.OSName = t.OSTree.Remote
 	// TODO: move generation into LiveImage
 	pt, err := t.getPartitionTable(customizations, options, rng)
@ -833,7 +833,7 @@ func iotImage(workload workload.Workload,
 	img.PartitionTable = pt
 	img.Filename = t.Filename()
-	img.Compression = t.compression
+	img.Compression = t.ImageTypeYAML.Compression
 	return img, nil
 }
@ -862,9 +862,9 @@ func iotSimplifiedInstallerImage(workload workload.Workload,
 	rawImg.Platform = t.platform
 	rawImg.Workload = workload
 	rawImg.Remote = ostree.Remote{
-		Name: "fedora-iot",
+		Name: t.OSTree.Remote,
 	}
-	rawImg.OSName = "fedora"
+	rawImg.OSName = t.OSTree.Name
 	// TODO: move generation into LiveImage
 	pt, err := t.getPartitionTable(customizations, options, rng)
@ -907,10 +907,10 @@ func iotSimplifiedInstallerImage(workload workload.Workload,
 	img.AdditionalDracutModules = append(img.AdditionalDracutModules, "dbus-broker")
 	d := t.arch.distro
-	img.Product = d.product
+	img.Product = d.DistroYAML.Product
 	img.Variant = "IoT"
-	img.OSName = "fedora"
-	img.OSVersion = d.osVersion
+	img.OSName = t.OSTree.Name
+	img.OSVersion = d.OsVersion()
 	img.ISOLabel, err = t.ISOLabel()
 	if err != nil {

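A note on the hunks above: the installer metadata no longer comes from hardcoded fedora fields (d.product, d.osVersion, the VERSION_BRANCHED comparison) but from the YAML-backed definitions (d.DistroYAML.Product, d.OsVersion(), d.DistroYAML.Preview, t.Variant, t.OSTree). A minimal standalone sketch of how those values compose; the types and the "Fedora"/"43" values below are illustrative stand-ins, not the real osbuild/images definitions:

    package main

    import "fmt"

    // Illustrative stand-ins for the YAML-backed distro and image-type definitions.
    type distroYAML struct {
        Product   string
        OSVersion string
        Preview   bool
    }

    type imageTypeYAML struct {
        Variant string
    }

    func main() {
        d := distroYAML{Product: "Fedora", OSVersion: "43", Preview: true}
        t := imageTypeYAML{Variant: "IoT"}

        // Mirrors the assignments in the installer image functions above:
        // Product, OSVersion, Release and Preview come straight from the
        // distro definition, and Variant from the image type definition.
        release := fmt.Sprintf("%s %s", d.Product, d.OSVersion)

        fmt.Println(d.Product, d.OSVersion, release, d.Preview, t.Variant)
    }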

@ -1,4 +1,4 @@
-package fedora
+package generic

 import (
 	"errors"
@ -9,8 +9,8 @@ import (
 	"slices"

 	"github.com/osbuild/images/internal/common"
-	"github.com/osbuild/images/internal/environment"
 	"github.com/osbuild/images/internal/workload"
+	"github.com/osbuild/images/pkg/arch"
 	"github.com/osbuild/images/pkg/blueprint"
 	"github.com/osbuild/images/pkg/container"
 	"github.com/osbuild/images/pkg/customizations/oscap"
@ -28,41 +28,29 @@ import (
 type imageFunc func(workload workload.Workload, t *imageType, bp *blueprint.Blueprint, options distro.ImageOptions, packageSets map[string]rpmmd.PackageSet, containers []container.SourceSpec, rng *rand.Rand) (image.ImageKind, error)

-type packageSetFunc func(t *imageType) (map[string]rpmmd.PackageSet, error)
-
 type isoLabelFunc func(t *imageType) string

+// imageType implements the distro.ImageType interface
+var _ = distro.ImageType(&imageType{})
+
 type imageType struct {
+	defs.ImageTypeYAML
 	arch     *architecture
 	platform platform.Platform
-	environment environment.Environment
+	// XXX: make definable via YAML
 	workload workload.Workload
-	name        string
-	nameAliases []string
-	filename    string
-	compression string
-	mimeType    string
-	packageSets packageSetFunc
+	// XXX: make member function ImageTypeYAML
 	defaultImageConfig     *distro.ImageConfig
 	defaultInstallerConfig *distro.InstallerConfig
-	defaultSize      uint64
-	buildPipelines   []string
-	payloadPipelines []string
-	exports          []string
 	image    imageFunc
 	isoLabel isoLabelFunc
-	// bootISO: installable ISO
-	bootISO bool
-	// rpmOstree: iot/ostree
-	rpmOstree bool
-	// bootable image
-	bootable bool
-	requiredPartitionSizes map[string]uint64
 }

 func (t *imageType) Name() string {
-	return t.name
+	return t.ImageTypeYAML.Name()
 }
func (t *imageType) Arch() distro.Arch { func (t *imageType) Arch() distro.Arch {
@ -70,24 +58,24 @@ func (t *imageType) Arch() distro.Arch {
 }
 func (t *imageType) Filename() string {
-	return t.filename
+	return t.ImageTypeYAML.Filename
 }
 func (t *imageType) MIMEType() string {
-	return t.mimeType
+	return t.ImageTypeYAML.MimeType
 }
 func (t *imageType) OSTreeRef() string {
 	d := t.arch.distro
-	if t.rpmOstree {
-		return fmt.Sprintf(d.ostreeRefTmpl, t.arch.Name())
+	if t.ImageTypeYAML.RPMOSTree {
+		return fmt.Sprintf(d.OSTreeRef(), t.arch.Name())
 	}
 	return ""
 }
 func (t *imageType) ISOLabel() (string, error) {
-	if !t.bootISO {
-		return "", fmt.Errorf("image type %q is not an ISO", t.name)
+	if !t.ImageTypeYAML.BootISO {
+		return "", fmt.Errorf("image type %q is not an ISO", t.Name())
 	}
 	if t.isoLabel != nil {
@ -99,21 +87,21 @@ func (t *imageType) ISOLabel() (string, error) {
 func (t *imageType) Size(size uint64) uint64 {
 	// Microsoft Azure requires vhd images to be rounded up to the nearest MB
-	if t.name == "vhd" && size%datasizes.MebiByte != 0 {
+	if t.ImageTypeYAML.Name() == "vhd" && size%datasizes.MebiByte != 0 {
 		size = (size/datasizes.MebiByte + 1) * datasizes.MebiByte
 	}
 	if size == 0 {
-		size = t.defaultSize
+		size = t.ImageTypeYAML.DefaultSize
 	}
 	return size
 }
 func (t *imageType) BuildPipelines() []string {
-	return t.buildPipelines
+	return t.ImageTypeYAML.BuildPipelines
 }
 func (t *imageType) PayloadPipelines() []string {
-	return t.payloadPipelines
+	return t.ImageTypeYAML.PayloadPipelines
 }
 func (t *imageType) PayloadPackageSets() []string {
@ -121,8 +109,8 @@ func (t *imageType) PayloadPackageSets() []string {
 }
 func (t *imageType) Exports() []string {
-	if len(t.exports) > 0 {
-		return t.exports
+	if len(t.ImageTypeYAML.Exports) > 0 {
+		return t.ImageTypeYAML.Exports
 	}
 	return []string{"assembler"}
 }
@ -139,14 +127,10 @@ func (t *imageType) BootMode() platform.BootMode {
 }
 func (t *imageType) BasePartitionTable() (*disk.PartitionTable, error) {
-	return defs.PartitionTable(t, VersionReplacements())
+	return defs.PartitionTable(t)
 }
-func (t *imageType) getPartitionTable(
-	customizations *blueprint.Customizations,
-	options distro.ImageOptions,
-	rng *rand.Rand,
-) (*disk.PartitionTable, error) {
+func (t *imageType) getPartitionTable(customizations *blueprint.Customizations, options distro.ImageOptions, rng *rand.Rand) (*disk.PartitionTable, error) {
 	basePartitionTable, err := t.BasePartitionTable()
 	if err != nil {
 		return nil, err
@ -169,15 +153,15 @@ func (t *imageType) getPartitionTable(
 	partOptions := &disk.CustomPartitionTableOptions{
 		PartitionTableType: basePartitionTable.Type, // PT type is not customizable, it is determined by the base PT for an image type or architecture
 		BootMode:           t.BootMode(),
-		DefaultFSType:      disk.FS_EXT4, // default fs type for Fedora
-		RequiredMinSizes:   t.requiredPartitionSizes,
+		DefaultFSType:      t.arch.distro.DefaultFSType,
+		RequiredMinSizes:   t.ImageTypeYAML.RequiredPartitionSizes,
 		Architecture:       t.platform.GetArch(),
 	}
 	return disk.NewCustomPartitionTable(partitioning, partOptions, rng)
 	}
 	partitioningMode := options.PartitioningMode
-	if t.rpmOstree {
+	if t.ImageTypeYAML.RPMOSTree {
 		// IoT supports only LVM, force it.
 		// Raw is not supported, return an error if it is requested
 		// TODO Need a central location for logic like this
@ -188,7 +172,7 @@ func (t *imageType) getPartitionTable(
 	}
 	mountpoints := customizations.GetFilesystems()
-	return disk.NewPartitionTable(basePartitionTable, mountpoints, imageSize, partitioningMode, t.platform.GetArch(), t.requiredPartitionSizes, rng)
+	return disk.NewPartitionTable(basePartitionTable, mountpoints, imageSize, partitioningMode, t.platform.GetArch(), t.ImageTypeYAML.RequiredPartitionSizes, rng)
 }
 func (t *imageType) getDefaultImageConfig() *distro.ImageConfig {
@ -197,13 +181,13 @@ func (t *imageType) getDefaultImageConfig() *distro.ImageConfig {
 	if imageConfig == nil {
 		imageConfig = &distro.ImageConfig{}
 	}
-	return imageConfig.InheritFrom(t.arch.distro.getDefaultImageConfig())
+	return imageConfig.InheritFrom(t.arch.distro.defaultImageConfig)
 }
 func (t *imageType) getDefaultInstallerConfig() (*distro.InstallerConfig, error) {
-	if !t.bootISO {
-		return nil, fmt.Errorf("image type %q is not an ISO", t.name)
+	if !t.ImageTypeYAML.BootISO {
+		return nil, fmt.Errorf("image type %q is not an ISO", t.Name())
 	}
 	return t.defaultInstallerConfig, nil
@ -237,8 +221,8 @@ func (t *imageType) Manifest(bp *blueprint.Blueprint,
 	staticPackageSets := make(map[string]rpmmd.PackageSet)
 	// don't add any static packages if Minimal was selected
-	if !bp.Minimal && t.packageSets != nil {
-		pkgSets, err := t.packageSets(t)
+	if !bp.Minimal {
+		pkgSets, err := defs.PackageSets(t)
 		if err != nil {
 			return nil, nil, err
 		}
@ -324,7 +308,7 @@ func (t *imageType) Manifest(bp *blueprint.Blueprint,
 	if options.UseBootstrapContainer {
 		mf.DistroBootstrapRef = bootstrapContainerFor(t)
 	}
-	_, err = img.InstantiateManifest(&mf, repos, t.arch.distro.runner, rng)
+	_, err = img.InstantiateManifest(&mf, repos, &t.arch.distro.DistroYAML.Runner, rng)
 	if err != nil {
 		return nil, nil, err
 	}
@ -339,13 +323,13 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
var warnings []string var warnings []string
if !t.rpmOstree && options.OSTree != nil { if !t.ImageTypeYAML.RPMOSTree && options.OSTree != nil {
return warnings, fmt.Errorf("OSTree is not supported for %q", t.Name()) return warnings, fmt.Errorf("OSTree is not supported for %q", t.Name())
} }
// we do not support embedding containers on ostree-derived images, only on commits themselves // we do not support embedding containers on ostree-derived images, only on commits themselves
if len(bp.Containers) > 0 && t.rpmOstree && (t.name != "iot-commit" && t.name != "iot-container") { if len(bp.Containers) > 0 && t.ImageTypeYAML.RPMOSTree && (t.Name() != "iot-commit" && t.Name() != "iot-container") {
return warnings, fmt.Errorf("embedding containers is not supported for %s on %s", t.name, t.arch.distro.name) return warnings, fmt.Errorf("embedding containers is not supported for %s on %s", t.Name(), t.arch.distro.Name())
} }
if options.OSTree != nil { if options.OSTree != nil {
@ -354,37 +338,37 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
} }
} }
if t.bootISO && t.rpmOstree { if t.ImageTypeYAML.BootISO && t.ImageTypeYAML.RPMOSTree {
// ostree-based ISOs require a URL from which to pull a payload commit // ostree-based ISOs require a URL from which to pull a payload commit
if options.OSTree == nil || options.OSTree.URL == "" { if options.OSTree == nil || options.OSTree.URL == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying a URL from which to retrieve the OSTree commit", t.name) return warnings, fmt.Errorf("boot ISO image type %q requires specifying a URL from which to retrieve the OSTree commit", t.Name())
} }
} }
if t.name == "iot-raw-xz" || t.name == "iot-qcow2" { if t.Name() == "iot-raw-xz" || t.Name() == "iot-qcow2" {
allowed := []string{"User", "Group", "Directories", "Files", "Services", "FIPS"} allowed := []string{"User", "Group", "Directories", "Files", "Services", "FIPS"}
if err := customizations.CheckAllowed(allowed...); err != nil { if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf(distro.UnsupportedCustomizationError, t.name, strings.Join(allowed, ", ")) return warnings, fmt.Errorf(distro.UnsupportedCustomizationError, t.Name(), strings.Join(allowed, ", "))
} }
// TODO: consider additional checks, such as those in "edge-simplified-installer" in RHEL distros // TODO: consider additional checks, such as those in "edge-simplified-installer" in RHEL distros
} }
// BootISOs have limited support for customizations. // BootISOs have limited support for customizations.
// TODO: Support kernel name selection for image-installer // TODO: Support kernel name selection for image-installer
if t.bootISO { if t.ImageTypeYAML.BootISO {
if t.name == "iot-simplified-installer" { if t.Name() == "iot-simplified-installer" {
allowed := []string{"InstallationDevice", "FDO", "Ignition", "Kernel", "User", "Group", "FIPS"} allowed := []string{"InstallationDevice", "FDO", "Ignition", "Kernel", "User", "Group", "FIPS"}
if err := customizations.CheckAllowed(allowed...); err != nil { if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf(distro.UnsupportedCustomizationError, t.name, strings.Join(allowed, ", ")) return warnings, fmt.Errorf(distro.UnsupportedCustomizationError, t.Name(), strings.Join(allowed, ", "))
} }
if customizations.GetInstallationDevice() == "" { if customizations.GetInstallationDevice() == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying an installation device to install to", t.name) return warnings, fmt.Errorf("boot ISO image type %q requires specifying an installation device to install to", t.Name())
} }
// FDO is optional, but when specified has some restrictions // FDO is optional, but when specified has some restrictions
if customizations.GetFDO() != nil { if customizations.GetFDO() != nil {
if customizations.GetFDO().ManufacturingServerURL == "" { if customizations.GetFDO().ManufacturingServerURL == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying FDO.ManufacturingServerURL configuration to install to when using FDO", t.name) return warnings, fmt.Errorf("boot ISO image type %q requires specifying FDO.ManufacturingServerURL configuration to install to when using FDO", t.Name())
} }
var diunSet int var diunSet int
if customizations.GetFDO().DiunPubKeyHash != "" { if customizations.GetFDO().DiunPubKeyHash != "" {
@ -397,7 +381,7 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
diunSet++ diunSet++
} }
if diunSet != 1 { if diunSet != 1 {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying one of [FDO.DiunPubKeyHash,FDO.DiunPubKeyInsecure,FDO.DiunPubKeyRootCerts] configuration to install to when using FDO", t.name) return warnings, fmt.Errorf("boot ISO image type %q requires specifying one of [FDO.DiunPubKeyHash,FDO.DiunPubKeyInsecure,FDO.DiunPubKeyRootCerts] configuration to install to when using FDO", t.Name())
} }
} }
@ -410,21 +394,21 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
return warnings, fmt.Errorf("ignition.firstboot requires a provisioning url") return warnings, fmt.Errorf("ignition.firstboot requires a provisioning url")
} }
} }
} else if t.name == "iot-installer" || t.name == "minimal-installer" { } else if t.Name() == "iot-installer" || t.Name() == "minimal-installer" {
// "Installer" is actually not allowed for image-installer right now, but this is checked at the end // "Installer" is actually not allowed for image-installer right now, but this is checked at the end
allowed := []string{"User", "Group", "FIPS", "Installer", "Timezone", "Locale"} allowed := []string{"User", "Group", "FIPS", "Installer", "Timezone", "Locale"}
if err := customizations.CheckAllowed(allowed...); err != nil { if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf(distro.UnsupportedCustomizationError, t.name, strings.Join(allowed, ", ")) return warnings, fmt.Errorf(distro.UnsupportedCustomizationError, t.Name(), strings.Join(allowed, ", "))
} }
} else if t.name == "workstation-live-installer" { } else if t.Name() == "workstation-live-installer" {
allowed := []string{"Installer"} allowed := []string{"Installer"}
if err := customizations.CheckAllowed(allowed...); err != nil { if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf(distro.NoCustomizationsAllowedError, t.name) return warnings, fmt.Errorf(distro.NoCustomizationsAllowedError, t.Name())
} }
} }
} }
if kernelOpts := customizations.GetKernel(); kernelOpts.Append != "" && t.rpmOstree { if kernelOpts := customizations.GetKernel(); kernelOpts.Append != "" && t.ImageTypeYAML.RPMOSTree {
return warnings, fmt.Errorf("kernel boot parameter customizations are not supported for ostree types") return warnings, fmt.Errorf("kernel boot parameter customizations are not supported for ostree types")
} }
@ -433,7 +417,7 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
if err != nil { if err != nil {
return warnings, err return warnings, err
} }
if (len(mountpoints) > 0 || partitioning != nil) && t.rpmOstree { if (len(mountpoints) > 0 || partitioning != nil) && t.ImageTypeYAML.RPMOSTree {
return warnings, fmt.Errorf("Custom mountpoints and partitioning are not supported for ostree types") return warnings, fmt.Errorf("Custom mountpoints and partitioning are not supported for ostree types")
} }
if len(mountpoints) > 0 && partitioning != nil { if len(mountpoints) > 0 && partitioning != nil {
@ -451,11 +435,11 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
} }
if osc := customizations.GetOpenSCAP(); osc != nil { if osc := customizations.GetOpenSCAP(); osc != nil {
supported := oscap.IsProfileAllowed(osc.ProfileID, oscapProfileAllowList) supported := oscap.IsProfileAllowed(osc.ProfileID, t.arch.distro.DistroYAML.OscapProfilesAllowList)
if !supported { if !supported {
return warnings, fmt.Errorf("OpenSCAP unsupported profile: %s", osc.ProfileID) return warnings, fmt.Errorf("OpenSCAP unsupported profile: %s", osc.ProfileID)
} }
if t.rpmOstree { if t.ImageTypeYAML.RPMOSTree {
return warnings, fmt.Errorf("OpenSCAP customizations are not supported for ostree types") return warnings, fmt.Errorf("OpenSCAP customizations are not supported for ostree types")
} }
if osc.ProfileID == "" { if osc.ProfileID == "" {
@ -475,7 +459,7 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
dcp := policies.CustomDirectoriesPolicies dcp := policies.CustomDirectoriesPolicies
fcp := policies.CustomFilesPolicies fcp := policies.CustomFilesPolicies
if t.rpmOstree { if t.ImageTypeYAML.RPMOSTree {
dcp = policies.OstreeCustomDirectoriesPolicies dcp = policies.OstreeCustomDirectoriesPolicies
fcp = policies.OstreeCustomFilesPolicies fcp = policies.OstreeCustomFilesPolicies
} }
@ -506,7 +490,7 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
} }
if instCust != nil { if instCust != nil {
// only supported by the Anaconda installer // only supported by the Anaconda installer
if slices.Index([]string{"iot-installer"}, t.name) == -1 { if slices.Index([]string{"iot-installer"}, t.Name()) == -1 {
return warnings, fmt.Errorf("installer customizations are not supported for %q", t.Name()) return warnings, fmt.Errorf("installer customizations are not supported for %q", t.Name())
} }
@ -525,18 +509,7 @@ func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOp
return warnings, nil return warnings, nil
} }
-// XXX: this will become part of the yaml distro definitions, i.e.
-// the yaml will have a "bootstrap_ref" key for each distro/arch
 func bootstrapContainerFor(t *imageType) string {
-	arch := t.arch.Name()
-	distro := t.arch.distro
-	// XXX: remove once fedora containers are part of the upstream
-	// fedora registry (and can be validated via tls)
-	if arch == "riscv64" {
-		return "ghcr.io/mvo5/fedora-buildroot:" + distro.OsVersion()
-	}
-	// we need fedora-toolbox to get python3
-	return "registry.fedoraproject.org/fedora-toolbox:" + distro.OsVersion()
+	a := common.Must(arch.FromString(t.arch.name))
+	return t.arch.distro.DistroYAML.BootstrapContainers[a]
 }

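One detail from the imagetype.go hunks above worth keeping in mind: Size() still rounds "vhd" images up to the next full MiB for Azure, only the field lookups moved into ImageTypeYAML. A standalone sketch of that rounding (mebiByte stands in for datasizes.MebiByte; the sample sizes are made up):

    package main

    import "fmt"

    const mebiByte = 1024 * 1024 // stands in for datasizes.MebiByte

    // roundUpToMiB mirrors the vhd branch of imageType.Size() above: any size
    // that is not already MiB-aligned is rounded up to the next full MiB.
    func roundUpToMiB(size uint64) uint64 {
        if size%mebiByte != 0 {
            size = (size/mebiByte + 1) * mebiByte
        }
        return size
    }

    func main() {
        fmt.Println(roundUpToMiB(10*mebiByte + 1)) // 11534336 (11 MiB)
        fmt.Println(roundUpToMiB(10 * mebiByte))   // 10485760 (unchanged)
    }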

@ -16,7 +16,7 @@ type ID struct {
 	MinorVersion int
 }
-func (id ID) versionString() string {
+func (id ID) VersionString() string {
 	if id.MinorVersion == -1 {
 		return fmt.Sprintf("%d", id.MajorVersion)
 	} else {
@ -25,11 +25,11 @@ func (id ID) versionString() string {
 }
 func (id ID) String() string {
-	return fmt.Sprintf("%s-%s", id.Name, id.versionString())
+	return fmt.Sprintf("%s-%s", id.Name, id.VersionString())
 }
 func (id ID) Version() (*version.Version, error) {
-	return version.NewVersion(id.versionString())
+	return version.NewVersion(id.VersionString())
 }
type ParseError struct { type ParseError struct {

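versionString() is promoted to the exported VersionString() so other packages (for example the new generic distro) can reuse it. A standalone sketch of the behaviour, assuming the elided else-branch keeps the usual major.minor formatting:

    package main

    import "fmt"

    // ID mirrors the exported shape shown in the diff above; only the fields
    // needed for VersionString() are included here.
    type ID struct {
        Name         string
        MajorVersion int
        MinorVersion int
    }

    func (id ID) VersionString() string {
        if id.MinorVersion == -1 {
            return fmt.Sprintf("%d", id.MajorVersion)
        }
        return fmt.Sprintf("%d.%d", id.MajorVersion, id.MinorVersion)
    }

    func (id ID) String() string {
        return fmt.Sprintf("%s-%s", id.Name, id.VersionString())
    }

    func main() {
        fmt.Println(ID{Name: "rhel", MajorVersion: 10, MinorVersion: -1}) // rhel-10
        fmt.Println(ID{Name: "rhel", MajorVersion: 9, MinorVersion: 6})   // rhel-9.6
    }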

@ -8,6 +8,7 @@ import (
 	"github.com/osbuild/images/pkg/customizations/fsnode"
 	"github.com/osbuild/images/pkg/customizations/shell"
 	"github.com/osbuild/images/pkg/customizations/subscription"
+	"github.com/osbuild/images/pkg/customizations/users"
 	"github.com/osbuild/images/pkg/manifest"
 	"github.com/osbuild/images/pkg/osbuild"
 )
@ -29,6 +30,17 @@ type ImageConfig struct {
 	UpdateDefaultKernel *bool    `yaml:"update_default_kernel,omitempty"`
 	KernelOptions       []string `yaml:"kernel_options,omitempty"`
+	// The name of the default kernel to use for the image type.
+	// NOTE: Currently this overrides the kernel named in the blueprint. The
+	// only image type that uses it is the azure-cvm, which doesn't allow
+	// kernel selection. The option should generally be a fallback for when the
+	// blueprint doesn't specify a kernel.
+	//
+	// This option has no effect on the DefaultKernel option under Sysconfig.
+	// If both options are set, they should have the same value.
+	// These two options should be unified.
+	DefaultKernelName *string `yaml:"default_kernel_name"`
 	// List of files from which to import GPG keys into the RPM database
 	GPGKeyFiles []string `yaml:"gpgkey_files,omitempty"`
@ -60,8 +72,7 @@ type ImageConfig struct {
 	PamLimitsConf []*osbuild.PamLimitsConfStageOptions `yaml:"pam_limits_conf,omitempty"`
 	Sysctld       []*osbuild.SysctldStageOptions
 	// Do not use DNFConfig directly, call "DNFConfigOptions()"
-	DNFConfig           []*osbuild.DNFConfigStageOptions `yaml:"dnf_config,omitempty"`
-	DNFSetReleaseVerVar *bool                            `yaml:"dnf_set_release_ver_var,omitempty"`
+	DNFConfig  *DNFConfig                      `yaml:"dnf_config"`
 	SshdConfig *osbuild.SshdConfigStageOptions `yaml:"sshd_config"`
 	Authconfig *osbuild.AuthconfigStageOptions
 	PwQuality  *osbuild.PwqualityConfStageOptions
@ -77,6 +88,8 @@ type ImageConfig struct {
 	WSLConfig *WSLConfig `yaml:"wsl_config,omitempty"`
+	Users []users.User
 	Files       []*fsnode.File
 	Directories []*fsnode.Directory
@ -125,6 +138,11 @@ type ImageConfig struct {
 	IsoRootfsType *manifest.RootfsType `yaml:"iso_rootfs_type,omitempty"`
 }
+type DNFConfig struct {
+	Options          []*osbuild.DNFConfigStageOptions
+	SetReleaseVerVar *bool `yaml:"set_release_ver_var"`
+}
 type WSLConfig struct {
 	BootSystemd bool `yaml:"boot_systemd,omitempty"`
 }
@ -155,8 +173,11 @@ func (c *ImageConfig) InheritFrom(parentConfig *ImageConfig) *ImageConfig {
 }
 func (c *ImageConfig) DNFConfigOptions(osVersion string) []*osbuild.DNFConfigStageOptions {
-	if c.DNFSetReleaseVerVar == nil || !*c.DNFSetReleaseVerVar {
-		return c.DNFConfig
+	if c.DNFConfig == nil {
+		return nil
+	}
+	if c.DNFConfig.SetReleaseVerVar == nil || !*c.DNFConfig.SetReleaseVerVar {
+		return c.DNFConfig.Options
 	}
 	// We currently have no use-case where we set both a custom
@ -166,7 +187,7 @@ func (c *ImageConfig) DNFConfigOptions(osVersion string) []*osbuild.DNFConfigSta
 	// existing once (exactly once) and we need to consider what to
 	// do about potentially conflicting (manually set) "releasever"
 	// values by the user.
-	if c.DNFConfig != nil {
+	if c.DNFConfig.SetReleaseVerVar != nil && c.DNFConfig.Options != nil {
 		err := fmt.Errorf("internal error: currently DNFConfig and DNFSetReleaseVerVar cannot be used together, please reporting this as a feature request")
 		panic(err)
 	}

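The dnf_config key changes shape above: instead of a bare list of DNF stage options plus a separate dnf_set_release_ver_var flag, both now live in one DNFConfig struct and DNFConfigOptions() resolves them together. A standalone mirror of that resolution using local stand-in types; the hunk does not show what is produced when set_release_ver_var is true, so the releasever-generating branch below is an assumption:

    package main

    import "fmt"

    // Stand-ins for osbuild.DNFConfigStageOptions and distro.DNFConfig.
    type dnfStageOptions struct{ ReleaseVer string }

    type dnfConfig struct {
        Options          []*dnfStageOptions
        SetReleaseVerVar *bool
    }

    // dnfConfigOptions mirrors ImageConfig.DNFConfigOptions() above: nil config
    // yields nothing; set_release_ver_var unset or false passes Options through;
    // setting both at once is currently treated as an internal error.
    func dnfConfigOptions(c *dnfConfig, osVersion string) ([]*dnfStageOptions, error) {
        if c == nil {
            return nil, nil
        }
        if c.SetReleaseVerVar == nil || !*c.SetReleaseVerVar {
            return c.Options, nil
        }
        if c.Options != nil {
            return nil, fmt.Errorf("DNF options and set_release_ver_var cannot be combined yet")
        }
        // assumption: generate a single entry that pins the releasever variable
        return []*dnfStageOptions{{ReleaseVer: osVersion}}, nil
    }

    func main() {
        t := true
        opts, err := dnfConfigOptions(&dnfConfig{SetReleaseVerVar: &t}, "9.4")
        fmt.Println(opts[0].ReleaseVer, err) // 9.4 <nil>
    }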

@ -6,4 +6,7 @@ type InstallerConfig struct {
 	// Additional dracut modules and drivers to enable
 	AdditionalDracutModules []string `yaml:"additional_dracut_modules"`
 	AdditionalDrivers       []string `yaml:"additional_drivers"`
+	// SquashfsRootfs will set SquashfsRootfs as rootfs in the iso image
+	SquashfsRootfs *bool `yaml:"squashfs_rootfs"`
 }

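squashfs_rootfs is a *bool, so "unset" stays distinguishable from an explicit false, and only an explicit true switches the ISO to a squashfs rootfs (see the img.RootfsType assignments in the installer functions earlier in this diff). A tiny sketch of that tri-state check:

    package main

    import "fmt"

    // pick mirrors the checks added in the installer image functions above:
    // the squashfs rootfs is only selected when the option is set *and* true.
    func pick(squashfsRootfs *bool) string {
        if squashfsRootfs != nil && *squashfsRootfs {
            return "squashfs"
        }
        return "default"
    }

    func main() {
        t, f := true, false
        fmt.Println(pick(nil)) // default
        fmt.Println(pick(&f))  // default
        fmt.Println(pick(&t))  // squashfs
    }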

@ -37,7 +37,16 @@ func osCustomizations(
 	osc := manifest.OSCustomizations{}
 	if t.Bootable || t.RPMOSTree {
+		// TODO: for now the only image types that define a default kernel are
+		// ones that use UKIs and don't allow overriding, so this works.
+		// However, if we ever need to specify default kernels for image types
+		// that allow overriding, we will need to change c.GetKernel() to take
+		// an argument as fallback or make it not return the standard "kernel"
+		// when it's unset.
 		osc.KernelName = c.GetKernel().Name
+		if imageConfig.DefaultKernelName != nil {
+			osc.KernelName = *imageConfig.DefaultKernelName
+		}
 		var kernelOptions []string
 		// XXX: keep in sync with the identical copy in fedora/images.go
@ -74,7 +83,9 @@ func osCustomizations(
 		// don't put users and groups in the payload of an installer
 		// add them via kickstart instead
 		osc.Groups = users.GroupsFromBP(c.GetGroups())
 		osc.Users = users.UsersFromBP(c.GetUsers())
+		osc.Users = append(osc.Users, imageConfig.Users...)
 	}
 	osc.EnabledServices = imageConfig.EnabledServices

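Two behavioural points from the hunk above: DefaultKernelName currently overrides the blueprint kernel (the TODO explains why that is acceptable while only UKI image types use it), and users from the image config are appended after the blueprint users. A standalone sketch of the kernel-name resolution as written above:

    package main

    import "fmt"

    // resolveKernelName mirrors the logic in osCustomizations() above: start
    // from the blueprint kernel (which defaults to "kernel" when unset) and let
    // the image config's DefaultKernelName override it.
    func resolveKernelName(blueprintKernel string, defaultKernelName *string) string {
        name := blueprintKernel
        if defaultKernelName != nil {
            name = *defaultKernelName
        }
        return name
    }

    func main() {
        uki := "kernel-uki-virt"
        fmt.Println(resolveKernelName("kernel", nil))  // kernel
        fmt.Println(resolveKernelName("kernel", &uki)) // kernel-uki-virt
    }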

@ -69,3 +69,24 @@ func mkAzureSapInternalImgType(rd *rhel.Distribution, a arch.Arch) *rhel.ImageTy
return it return it
} }
// Azure Confidential VM
func mkAzureCVMImgType(rd *rhel.Distribution) *rhel.ImageType {
it := rhel.NewImageType(
"azure-cvm",
"disk.vhd",
"application/x-vhd",
packageSetLoader,
rhel.DiskImage,
[]string{"build"},
[]string{"os", "image", "vpc"},
[]string{"vpc"},
)
it.Bootable = true
it.DefaultSize = 32 * datasizes.GibiByte
it.DefaultImageConfig = imageConfig(rd, "x86_64", "azure-cvm")
it.BasePartitionTables = defaultBasePartitionTables
return it
}


@ -253,6 +253,18 @@ func newDistro(name string, major, minor int) *rhel.Distribution {
x86_64.AddImageTypes(ec2X86Platform, mkEc2ImgTypeX86_64(rd), mkEc2HaImgTypeX86_64(rd), mkEC2SapImgTypeX86_64(rd)) x86_64.AddImageTypes(ec2X86Platform, mkEc2ImgTypeX86_64(rd), mkEc2HaImgTypeX86_64(rd), mkEC2SapImgTypeX86_64(rd))
aarch64.AddImageTypes(ec2Aarch64Platform, mkEC2ImgTypeAarch64(rd)) aarch64.AddImageTypes(ec2Aarch64Platform, mkEC2ImgTypeAarch64(rd))
azureX64CVMPlatform := &platform.X86{
UEFIVendor: rd.Vendor(),
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
Bootloader: platform.BOOTLOADER_UKI,
}
x86_64.AddImageTypes(
azureX64CVMPlatform,
mkAzureCVMImgType(rd),
)
} }
rd.AddArches(x86_64, aarch64, ppc64le, s390x) rd.AddArches(x86_64, aarch64, ppc64le, s390x)

View file

@ -11,9 +11,9 @@ import (
) )
func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) { func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) {
return defs.PackageSets(t, nil) return defs.PackageSets(t)
} }
func imageConfig(d *rhel.Distribution, archName, imageType string) *distro.ImageConfig { func imageConfig(d *rhel.Distribution, archName, imageType string) *distro.ImageConfig {
return common.Must(defs.ImageConfig(d.Name(), archName, imageType, nil)) return common.Must(defs.ImageConfig(d.Name(), archName, imageType))
} }


@ -9,7 +9,7 @@ import (
) )
func defaultBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) { func defaultBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) {
partitionTable, err := defs.PartitionTable(t, nil) partitionTable, err := defs.PartitionTable(t)
if errors.Is(err, defs.ErrNoPartitionTableForImgType) { if errors.Is(err, defs.ErrNoPartitionTableForImgType) {
return disk.PartitionTable{}, false return disk.PartitionTable{}, false
} }


@ -7,5 +7,5 @@ import (
) )
func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) { func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) {
return defs.PackageSets(t, nil) return defs.PackageSets(t)
} }


@ -9,7 +9,7 @@ import (
) )
func defaultBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) { func defaultBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) {
partitionTable, err := defs.PartitionTable(t, nil) partitionTable, err := defs.PartitionTable(t)
if errors.Is(err, defs.ErrNoPartitionTableForImgType) { if errors.Is(err, defs.ErrNoPartitionTableForImgType) {
return disk.PartitionTable{}, false return disk.PartitionTable{}, false
} }


@ -76,7 +76,8 @@ func defaultGceByosImageConfig(rd distro.Distro) *distro.ImageConfig {
Keyboard: &osbuild.KeymapStageOptions{ Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us", Keymap: "us",
}, },
DNFConfig: []*osbuild.DNFConfigStageOptions{ DNFConfig: &distro.DNFConfig{
Options: []*osbuild.DNFConfigStageOptions{
{ {
Config: &osbuild.DNFConfig{ Config: &osbuild.DNFConfig{
Main: &osbuild.DNFConfigMain{ Main: &osbuild.DNFConfigMain{
@ -85,6 +86,7 @@ func defaultGceByosImageConfig(rd distro.Distro) *distro.ImageConfig {
}, },
}, },
}, },
},
DNFAutomaticConfig: &osbuild.DNFAutomaticConfigStageOptions{ DNFAutomaticConfig: &osbuild.DNFAutomaticConfigStageOptions{
Config: &osbuild.DNFAutomaticConfig{ Config: &osbuild.DNFAutomaticConfig{
Commands: &osbuild.DNFAutomaticConfigCommands{ Commands: &osbuild.DNFAutomaticConfigCommands{


@ -9,5 +9,5 @@ import (
) )
func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) { func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) {
return defs.PackageSets(t, nil) return defs.PackageSets(t)
} }


@ -9,7 +9,7 @@ import (
) )
func partitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) { func partitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) {
partitionTable, err := defs.PartitionTable(t, nil) partitionTable, err := defs.PartitionTable(t)
if errors.Is(err, defs.ErrNoPartitionTableForImgType) { if errors.Is(err, defs.ErrNoPartitionTableForImgType) {
return disk.PartitionTable{}, false return disk.PartitionTable{}, false
} }


@ -107,7 +107,10 @@ func sapImageConfig(rd distro.Distro) *distro.ImageConfig {
 	if common.VersionLessThan(rd.OsVersion(), "8.10") {
 		// E4S/EUS
-		ic.DNFSetReleaseVerVar = common.ToPtr(true)
+		if ic.DNFConfig == nil {
+			ic.DNFConfig = &distro.DNFConfig{}
+		}
+		ic.DNFConfig.SetReleaseVerVar = common.ToPtr(true)
 	}
 	return ic

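Because DNFConfig is now a pointer to a struct, the SAP config above has to allocate it before it can flip SetReleaseVerVar. The same lazy-initialisation pattern in isolation (toPtr stands in for common.ToPtr):

    package main

    import "fmt"

    type dnfConfig struct{ SetReleaseVerVar *bool }

    type imageConfig struct{ DNFConfig *dnfConfig }

    func toPtr[T any](v T) *T { return &v } // stands in for common.ToPtr

    func main() {
        ic := &imageConfig{}

        // Mirrors the E4S/EUS branch above: allocate the nested struct only
        // when it is missing, then set the one field we care about.
        if ic.DNFConfig == nil {
            ic.DNFConfig = &dnfConfig{}
        }
        ic.DNFConfig.SetReleaseVerVar = toPtr(true)

        fmt.Println(*ic.DNFConfig.SetReleaseVerVar) // true
    }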

@ -77,6 +77,27 @@ func mkAzureSapInternalImgType(rd *rhel.Distribution, a arch.Arch) *rhel.ImageTy
return it return it
} }
// Azure Confidential VM
func mkAzureCVMImgType(rd *rhel.Distribution) *rhel.ImageType {
it := rhel.NewImageType(
"azure-cvm",
"disk.vhd",
"application/x-vhd",
packageSetLoader,
rhel.DiskImage,
[]string{"build"},
[]string{"os", "image", "vpc"},
[]string{"vpc"},
)
it.Bootable = true
it.DefaultSize = 32 * datasizes.GibiByte
it.DefaultImageConfig = azureCVMImageConfig(rd)
it.BasePartitionTables = azureCVMPartitionTables
return it
}
// PARTITION TABLES // PARTITION TABLES
func azureInternalBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) { func azureInternalBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) {
var bootSize uint64 var bootSize uint64
@ -327,8 +348,9 @@ func defaultAzureKernelOptions(rd *rhel.Distribution, a arch.Arch) []string {
return kargs return kargs
} }
// based on https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_microsoft_azure/assembly_deploying-a-rhel-image-as-a-virtual-machine-on-microsoft-azure_cloud-content-azure#making-configuration-changes_configure-the-image-azure // Base ImageConfig for Azure images. Should not be used directly since the
func defaultAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig { // default ImageConfig adds a few extra bits.
func baseAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
ic := &distro.ImageConfig{ ic := &distro.ImageConfig{
Timezone: common.ToPtr("Etc/UTC"), Timezone: common.ToPtr("Etc/UTC"),
Locale: common.ToPtr("en_US.UTF-8"), Locale: common.ToPtr("en_US.UTF-8"),
@ -389,34 +411,6 @@ func defaultAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
}, },
}, },
}, },
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "10-azure-kvp.cfg",
Config: osbuild.CloudInitConfigFile{
Reporting: &osbuild.CloudInitConfigReporting{
Logging: &osbuild.CloudInitConfigReportingHandlers{
Type: "log",
},
Telemetry: &osbuild.CloudInitConfigReportingHandlers{
Type: "hyperv",
},
},
},
},
{
Filename: "91-azure_datasource.cfg",
Config: osbuild.CloudInitConfigFile{
Datasource: &osbuild.CloudInitConfigDatasource{
Azure: &osbuild.CloudInitConfigDatasourceAzure{
ApplyNetworkConfig: false,
},
},
DatasourceList: []string{
"Azure",
},
},
},
},
PwQuality: &osbuild.PwqualityConfStageOptions{ PwQuality: &osbuild.PwqualityConfStageOptions{
Config: osbuild.PwqualityConfConfig{ Config: osbuild.PwqualityConfConfig{
Minlen: common.ToPtr(6), Minlen: common.ToPtr(6),
@ -498,17 +492,6 @@ func defaultAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
ic.WAAgentConfig.Config.ProvisioningUseCloudInit = common.ToPtr(true) ic.WAAgentConfig.Config.ProvisioningUseCloudInit = common.ToPtr(true)
ic.WAAgentConfig.Config.ProvisioningEnabled = common.ToPtr(false) ic.WAAgentConfig.Config.ProvisioningEnabled = common.ToPtr(false)
ic.TimeSynchronization = &osbuild.ChronyStageOptions{
Refclocks: []osbuild.ChronyConfigRefclock{
{
Driver: osbuild.NewChronyDriverPHC("/dev/ptp_hyperv"),
Poll: common.ToPtr(3),
Dpoll: common.ToPtr(-2),
Offset: common.ToPtr(0.0),
},
},
}
datalossWarningScript, datalossSystemdUnit, err := rhel.CreateAzureDatalossWarningScriptAndUnit() datalossWarningScript, datalossSystemdUnit, err := rhel.CreateAzureDatalossWarningScriptAndUnit()
if err != nil { if err != nil {
panic(err) panic(err)
@ -533,6 +516,118 @@ func defaultAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
return ic return ic
} }
// based on https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_microsoft_azure/assembly_deploying-a-rhel-image-as-a-virtual-machine-on-microsoft-azure_cloud-content-azure#making-configuration-changes_configure-the-image-azure
func defaultAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
ic := &distro.ImageConfig{
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "10-azure-kvp.cfg",
Config: osbuild.CloudInitConfigFile{
Reporting: &osbuild.CloudInitConfigReporting{
Logging: &osbuild.CloudInitConfigReportingHandlers{
Type: "log",
},
Telemetry: &osbuild.CloudInitConfigReportingHandlers{
Type: "hyperv",
},
},
},
},
{
Filename: "91-azure_datasource.cfg",
Config: osbuild.CloudInitConfigFile{
Datasource: &osbuild.CloudInitConfigDatasource{
Azure: &osbuild.CloudInitConfigDatasourceAzure{
ApplyNetworkConfig: false,
},
},
DatasourceList: []string{
"Azure",
},
},
},
},
}
if rd.IsRHEL() && common.VersionGreaterThanOrEqual(rd.OsVersion(), "9.6") {
ic.TimeSynchronization = &osbuild.ChronyStageOptions{
Refclocks: []osbuild.ChronyConfigRefclock{
{
Driver: osbuild.NewChronyDriverPHC("/dev/ptp_hyperv"),
Poll: common.ToPtr(3),
Dpoll: common.ToPtr(-2),
Offset: common.ToPtr(0.0),
},
},
}
}
return ic.InheritFrom(baseAzureImageConfig(rd))
}
func sapAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig { func sapAzureImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
return sapImageConfig(rd.OsVersion()).InheritFrom(defaultAzureImageConfig(rd)) return sapImageConfig(rd.OsVersion()).InheritFrom(defaultAzureImageConfig(rd))
} }
func azureCVMImageConfig(rd *rhel.Distribution) *distro.ImageConfig {
ic := &distro.ImageConfig{
DefaultKernelName: common.ToPtr("kernel-uki-virt"),
NoBLS: common.ToPtr(true),
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "91-azure_datasource.cfg",
Config: osbuild.CloudInitConfigFile{
Datasource: &osbuild.CloudInitConfigDatasource{
Azure: &osbuild.CloudInitConfigDatasourceAzure{
ApplyNetworkConfig: false,
},
},
DatasourceList: []string{
"Azure",
},
},
},
},
}
return ic.InheritFrom(baseAzureImageConfig(rd))
}
func azureCVMPartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) {
switch t.Arch().Name() {
case arch.ARCH_X86_64.String():
return disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: disk.PT_GPT,
Partitions: []disk.Partition{
{
Size: 252 * datasizes.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 5 * datasizes.GibiByte,
Type: disk.RootPartitionX86_64GUID,
Payload: &disk.Filesystem{
Type: "ext4",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
}, true
default:
return disk.PartitionTable{}, false
}
}

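The Azure refactor above splits the old defaultAzureImageConfig into a shared baseAzureImageConfig plus per-flavour configs (default, SAP, CVM) that layer themselves on top via InheritFrom. A standalone sketch of that layering, assuming InheritFrom fills in only the fields the child left unset:

    package main

    import "fmt"

    // Minimal stand-in for distro.ImageConfig with just two fields.
    type imageConfig struct {
        Timezone          *string
        DefaultKernelName *string
    }

    func toPtr[T any](v T) *T { return &v }

    // inheritFrom mirrors the assumed InheritFrom semantics: values set on the
    // child win, anything left nil is taken from the parent.
    func (c *imageConfig) inheritFrom(parent *imageConfig) *imageConfig {
        out := *c
        if out.Timezone == nil {
            out.Timezone = parent.Timezone
        }
        if out.DefaultKernelName == nil {
            out.DefaultKernelName = parent.DefaultKernelName
        }
        return &out
    }

    func main() {
        base := &imageConfig{Timezone: toPtr("Etc/UTC")}
        cvm := (&imageConfig{DefaultKernelName: toPtr("kernel-uki-virt")}).inheritFrom(base)
        fmt.Println(*cvm.Timezone, *cvm.DefaultKernelName) // Etc/UTC kernel-uki-virt
    }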

@ -346,6 +346,22 @@ func newDistro(name string, major, minor int) *rhel.Distribution {
}, },
mkEC2ImgTypeAarch64(), mkEC2ImgTypeAarch64(),
) )
// CVM is only available starting from 9.6
if common.VersionGreaterThanOrEqual(rd.OsVersion(), "9.6") {
azureX64CVMPlatform := &platform.X86{
UEFIVendor: rd.Vendor(),
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
Bootloader: platform.BOOTLOADER_UKI,
}
x86_64.AddImageTypes(
azureX64CVMPlatform,
mkAzureCVMImgType(rd),
)
}
} }
rd.AddArches(x86_64, aarch64, ppc64le, s390x) rd.AddArches(x86_64, aarch64, ppc64le, s390x)


@ -60,7 +60,8 @@ func baseGCEImageConfig() *distro.ImageConfig {
Keyboard: &osbuild.KeymapStageOptions{ Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us", Keymap: "us",
}, },
DNFConfig: []*osbuild.DNFConfigStageOptions{ DNFConfig: &distro.DNFConfig{
Options: []*osbuild.DNFConfigStageOptions{
{ {
Config: &osbuild.DNFConfig{ Config: &osbuild.DNFConfig{
Main: &osbuild.DNFConfigMain{ Main: &osbuild.DNFConfigMain{
@ -69,6 +70,7 @@ func baseGCEImageConfig() *distro.ImageConfig {
}, },
}, },
}, },
},
DNFAutomaticConfig: &osbuild.DNFAutomaticConfigStageOptions{ DNFAutomaticConfig: &osbuild.DNFAutomaticConfigStageOptions{
Config: &osbuild.DNFAutomaticConfig{ Config: &osbuild.DNFAutomaticConfig{
Commands: &osbuild.DNFAutomaticConfigCommands{ Commands: &osbuild.DNFAutomaticConfigCommands{


@ -208,5 +208,15 @@ func checkOptions(t *rhel.ImageType, bp *blueprint.Blueprint, options distro.Ima
} }
} }
// don't support setting any kernel customizations for image types with
// UKIs
// NOTE: this is very ugly and stupid, it should not be based on the image
// type name, but we want to redo this whole function anyway
// NOTE: we can't use customizations.GetKernel() because it returns
// 'Name: "kernel"' when unset.
if t.Name() == "azure-cvm" && customizations != nil && customizations.Kernel != nil {
return warnings, fmt.Errorf("kernel customizations are not supported for %q", t.Name())
}
return warnings, nil return warnings, nil
} }


@ -7,5 +7,5 @@ import (
) )
func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) { func packageSetLoader(t *rhel.ImageType) (map[string]rpmmd.PackageSet, error) {
return defs.PackageSets(t, nil) return defs.PackageSets(t)
} }


@ -7,7 +7,7 @@ import (
) )
func defaultBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) { func defaultBasePartitionTables(t *rhel.ImageType) (disk.PartitionTable, bool) {
partitionTable, err := defs.PartitionTable(t, nil) partitionTable, err := defs.PartitionTable(t)
if err != nil { if err != nil {
// XXX: have a check to differenciate ErrNoEnt and else // XXX: have a check to differenciate ErrNoEnt and else
return disk.PartitionTable{}, false return disk.PartitionTable{}, false


@ -104,6 +104,8 @@ func sapImageConfig(osVersion string) *distro.ImageConfig {
 		),
 	},
 	// E4S/EUS
-	DNFSetReleaseVerVar: common.ToPtr(true),
+	DNFConfig: &distro.DNFConfig{
+		SetReleaseVerVar: common.ToPtr(true),
+	},
 	}
 }


@ -5,7 +5,7 @@ import (
"sort" "sort"
"github.com/osbuild/images/pkg/distro" "github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distro/fedora" "github.com/osbuild/images/pkg/distro/generic"
"github.com/osbuild/images/pkg/distro/rhel/rhel10" "github.com/osbuild/images/pkg/distro/rhel/rhel10"
"github.com/osbuild/images/pkg/distro/rhel/rhel7" "github.com/osbuild/images/pkg/distro/rhel/rhel7"
"github.com/osbuild/images/pkg/distro/rhel/rhel8" "github.com/osbuild/images/pkg/distro/rhel/rhel8"
@ -109,7 +109,7 @@ func New(factories ...FactoryFunc) *Factory {
// distros. // distros.
func NewDefault() *Factory { func NewDefault() *Factory {
return New( return New(
fedora.DistroFactory, generic.DistroFactory,
rhel7.DistroFactory, rhel7.DistroFactory,
rhel8.DistroFactory, rhel8.DistroFactory,
rhel9.DistroFactory, rhel9.DistroFactory,


@ -2,7 +2,6 @@ package distroidparser
import ( import (
"github.com/osbuild/images/pkg/distro" "github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distro/fedora"
"github.com/osbuild/images/pkg/distro/rhel/rhel10" "github.com/osbuild/images/pkg/distro/rhel/rhel10"
"github.com/osbuild/images/pkg/distro/rhel/rhel7" "github.com/osbuild/images/pkg/distro/rhel/rhel7"
"github.com/osbuild/images/pkg/distro/rhel/rhel8" "github.com/osbuild/images/pkg/distro/rhel/rhel8"
@ -63,7 +62,6 @@ func (p *Parser) Standardize(idStr string) (string, error) {
func NewDefaultParser() *Parser { func NewDefaultParser() *Parser {
return New( return New(
fedora.ParseID,
rhel7.ParseID, rhel7.ParseID,
rhel8.ParseID, rhel8.ParseID,
rhel9.ParseID, rhel9.ParseID,


@ -10,6 +10,7 @@ import (
"github.com/osbuild/images/pkg/customizations/anaconda" "github.com/osbuild/images/pkg/customizations/anaconda"
"github.com/osbuild/images/pkg/customizations/kickstart" "github.com/osbuild/images/pkg/customizations/kickstart"
"github.com/osbuild/images/pkg/datasizes" "github.com/osbuild/images/pkg/datasizes"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/manifest" "github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/osbuild" "github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform" "github.com/osbuild/images/pkg/platform"
@ -58,6 +59,9 @@ type AnacondaContainerInstaller struct {
// Locale for the installer. This should be set to the same locale as the // Locale for the installer. This should be set to the same locale as the
// ISO OS payload, if known. // ISO OS payload, if known.
Locale string Locale string
// Filesystem type for the installed system as opposed to that of the ISO.
InstallRootfsType disk.FSType
} }
func NewAnacondaContainerInstaller(container container.SourceSpec, ref string) *AnacondaContainerInstaller { func NewAnacondaContainerInstaller(container container.SourceSpec, ref string) *AnacondaContainerInstaller {
@ -150,6 +154,8 @@ func (img *AnacondaContainerInstaller) InstantiateManifest(m *manifest.Manifest,
isoTreePipeline.KernelOpts = append(isoTreePipeline.KernelOpts, "fips=1") isoTreePipeline.KernelOpts = append(isoTreePipeline.KernelOpts, "fips=1")
} }
isoTreePipeline.InstallRootfsType = img.InstallRootfsType
isoPipeline := manifest.NewISO(buildPipeline, isoTreePipeline, img.ISOLabel) isoPipeline := manifest.NewISO(buildPipeline, isoTreePipeline, img.ISOLabel)
isoPipeline.SetFilename(img.Filename) isoPipeline.SetFilename(img.Filename)
isoPipeline.ISOBoot = img.ISOBoot isoPipeline.ISOBoot = img.ISOBoot

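InstallRootfsType describes the filesystem of the installed system, not of the ISO itself; further down in this diff the ISO tree pipeline turns it into an autopart kickstart line and falls back to ext4 when it is unset. A standalone sketch of that mapping (the fsType constants below stand in for the disk package's FSType values):

    package main

    import "fmt"

    // Stand-ins for disk.FSType and its constants.
    type fsType int

    const (
        fsNone fsType = iota
        fsExt4
        fsXFS
        fsBtrfs
    )

    func (f fsType) String() string {
        return [...]string{"none", "ext4", "xfs", "btrfs"}[f]
    }

    // autopartLine mirrors the kickstart generation added later in this diff:
    // unset falls back to ext4, btrfs uses autopart's btrfs type, everything
    // else becomes a plain partition with the requested filesystem.
    func autopartLine(root fsType) string {
        if root == fsNone {
            root = fsExt4
        }
        if root == fsBtrfs {
            return "autopart --nohome --type=btrfs"
        }
        return fmt.Sprintf("autopart --nohome --type=plain --fstype=%s", root)
    }

    func main() {
        fmt.Println(autopartLine(fsNone))  // autopart --nohome --type=plain --fstype=ext4
        fmt.Println(autopartLine(fsBtrfs)) // autopart --nohome --type=btrfs
    }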

@ -60,12 +60,54 @@ func (img *BootcDiskImage) InstantiateManifestFromContainers(m *manifest.Manifes
if img.BuildSELinux != "" { if img.BuildSELinux != "" {
policy = img.BuildSELinux policy = img.BuildSELinux
} }
var copyFilesFrom map[string][]string
var ensureDirs []*fsnode.Directory
if *img.ContainerSource != *img.BuildContainerSource {
// If we're using a different build container from the target container then we copy
// the bootc customization file directories from the target container. This includes the
// bootc install customization, and /usr/lib/ostree/prepare-root.conf which configures
// e.g. composefs and fs-verity setup.
//
// To ensure that these copies never fail we also create the source and target
// directories as needed.
pipelineName := "target"
// files to copy have slash at end to copy directory contents, not directory itself
copyFiles := []string{"/usr/lib/bootc/install/", "/usr/lib/ostree/"}
ensureDirPaths := []string{"/usr/lib/bootc/install", "/usr/lib/ostree"}
copyFilesFrom = map[string][]string{pipelineName: copyFiles}
for _, path := range ensureDirPaths {
// Note: Mode/User/Group must be nil here to make GenDirectoryNodesStages use dirExistOk
dir, err := fsnode.NewDirectory(path, nil, nil, nil, true)
if err != nil {
return err
}
ensureDirs = append(ensureDirs, dir)
}
targetContainers := []container.SourceSpec{*img.ContainerSource}
targetBuildPipeline := manifest.NewBuildFromContainer(m, runner, targetContainers,
&manifest.BuildOptions{
PipelineName: pipelineName,
ContainerBuildable: true,
SELinuxPolicy: policy,
EnsureDirs: ensureDirs,
})
targetBuildPipeline.Checkpoint()
}
buildContainers := []container.SourceSpec{*img.BuildContainerSource} buildContainers := []container.SourceSpec{*img.BuildContainerSource}
buildPipeline := manifest.NewBuildFromContainer(m, runner, buildContainers, buildPipeline := manifest.NewBuildFromContainer(m, runner, buildContainers,
&manifest.BuildOptions{ &manifest.BuildOptions{
ContainerBuildable: true, ContainerBuildable: true,
SELinuxPolicy: policy, SELinuxPolicy: policy,
CopyFilesFrom: copyFilesFrom,
EnsureDirs: ensureDirs,
}) })
buildPipeline.Checkpoint() buildPipeline.Checkpoint()
// In the bootc flow, we reuse the host container context for tools; // In the bootc flow, we reuse the host container context for tools;

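When the build container differs from the target container, the hunk above adds a checkpointed "target" pipeline and copies /usr/lib/bootc/install/ and /usr/lib/ostree/ out of it, pre-creating both directories so the copy can never fail. A standalone sketch of just the data that feeds those stages (the real code wraps this in manifest.BuildOptions and fsnode.Directory values):

    package main

    import "fmt"

    func main() {
        // Mirrors the shapes built above: which pipeline to copy from, which
        // directory contents to copy (trailing slash = contents, not the
        // directory itself), and which directories must exist on both sides.
        const pipelineName = "target"
        copyFilesFrom := map[string][]string{
            pipelineName: {"/usr/lib/bootc/install/", "/usr/lib/ostree/"},
        }
        ensureDirPaths := []string{"/usr/lib/bootc/install", "/usr/lib/ostree"}

        fmt.Println(copyFilesFrom, ensureDirPaths)
    }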

@ -80,6 +80,17 @@ func (img *DiskImage) InstantiateManifest(m *manifest.Manifest,
qcow2Pipeline := manifest.NewQCOW2(buildPipeline, rawImagePipeline) qcow2Pipeline := manifest.NewQCOW2(buildPipeline, rawImagePipeline)
qcow2Pipeline.Compat = img.Platform.GetQCOW2Compat() qcow2Pipeline.Compat = img.Platform.GetQCOW2Compat()
imagePipeline = qcow2Pipeline imagePipeline = qcow2Pipeline
case platform.FORMAT_VAGRANT_LIBVIRT:
qcow2Pipeline := manifest.NewQCOW2(buildPipeline, rawImagePipeline)
qcow2Pipeline.Compat = img.Platform.GetQCOW2Compat()
vagrantPipeline := manifest.NewVagrant(buildPipeline, qcow2Pipeline)
tarPipeline := manifest.NewTar(buildPipeline, vagrantPipeline, "archive")
tarPipeline.Format = osbuild.TarArchiveFormatUstar
tarPipeline.SetFilename(img.Filename)
imagePipeline = tarPipeline
case platform.FORMAT_VHD: case platform.FORMAT_VHD:
vpcPipeline := manifest.NewVPC(buildPipeline, rawImagePipeline) vpcPipeline := manifest.NewVPC(buildPipeline, rawImagePipeline)
vpcPipeline.ForceSize = img.VPCForceSize vpcPipeline.ForceSize = img.VPCForceSize


@ -105,6 +105,8 @@ type AnacondaInstallerISOTree struct {
// Pipeline object where subscription-related files are created for copying // Pipeline object where subscription-related files are created for copying
// onto the ISO. // onto the ISO.
SubscriptionPipeline *Subscription SubscriptionPipeline *Subscription
InstallRootfsType disk.FSType
} }
func NewAnacondaInstallerISOTree(buildPipeline Build, anacondaPipeline *AnacondaInstaller, rootfsPipeline *ISORootfsImg, bootTreePipeline *EFIBootTree) *AnacondaInstallerISOTree { func NewAnacondaInstallerISOTree(buildPipeline Build, anacondaPipeline *AnacondaInstaller, rootfsPipeline *ISORootfsImg, bootTreePipeline *EFIBootTree) *AnacondaInstallerISOTree {
@ -552,6 +554,19 @@ func (p *AnacondaInstallerISOTree) bootcInstallerKickstartStages() []*osbuild.St
panic(fmt.Sprintf("failed to create kickstart stage options: %v", err)) panic(fmt.Sprintf("failed to create kickstart stage options: %v", err))
} }
// Workaround for lack of --target-imgref in Anaconda, xref https://github.com/osbuild/images/issues/380
kickstartOptions.Post = append(kickstartOptions.Post, osbuild.PostOptions{
ErrorOnFail: true,
Commands: []string{
fmt.Sprintf("bootc switch --mutate-in-place --transport registry %s", p.containerSpec.LocalName),
"# used during automatic image testing as finished marker",
"if [ -c /dev/ttyS0 ]; then",
" # continue on errors here, because we used to omit --erroronfail",
` echo "Install finished" > /dev/ttyS0 || true`,
"fi",
},
})
// kickstart.New() already validates the options but they may have been // kickstart.New() already validates the options but they may have been
// modified since then, so validate them before we create the stages // modified since then, so validate them before we create the stages
if err := p.Kickstart.Validate(); err != nil { if err := p.Kickstart.Validate(); err != nil {
@ -612,33 +627,32 @@ func (p *AnacondaInstallerISOTree) bootcInstallerKickstartStages() []*osbuild.St
 	stages = append(stages, osbuild.NewKickstartStage(kickstartOptions))
-	// and what we can't do in a separate kickstart that we include
-	targetContainerTransport := "registry"
 	// Because osbuild core only supports a subset of options, we append to the
 	// base here with some more hardcoded defaults
 	// that should very likely become configurable.
-	hardcodedKickstartBits := `
-reqpart --add-boot
-part swap --fstype=swap --size=1024
-part / --fstype=ext4 --grow
+	var hardcodedKickstartBits string
+	// using `autopart` because `part / --fstype=btrfs` didn't work
+	rootFsType := p.InstallRootfsType
+	if rootFsType == disk.FS_NONE {
+		// if the rootfs type is not set, we default to ext4
+		rootFsType = disk.FS_EXT4
+	}
+	switch rootFsType {
+	case disk.FS_BTRFS:
+		hardcodedKickstartBits = `
+autopart --nohome --type=btrfs
+`
+	default:
+		hardcodedKickstartBits = fmt.Sprintf(`
+autopart --nohome --type=plain --fstype=%s
+`, rootFsType.String())
+	}
+	hardcodedKickstartBits += `
 reboot --eject
 `
-	// Workaround for lack of --target-imgref in Anaconda, xref https://github.com/osbuild/images/issues/380
-	hardcodedKickstartBits += fmt.Sprintf(`%%post --erroronfail
-bootc switch --mutate-in-place --transport %s %s
-# used during automatic image testing as finished marker
-if [ -c /dev/ttyS0 ]; then
-   # continue on errors here, because we used to omit --erroronfail
-   echo "Install finished" > /dev/ttyS0 || true
-fi
-%%end
-`, targetContainerTransport, p.containerSpec.LocalName)
 	kickstartFile, err := kickstartOptions.IncludeRaw(hardcodedKickstartBits)
 	if err != nil {
 		panic(err)
@ -733,11 +747,11 @@ func (p *AnacondaInstallerISOTree) makeKickstartStages(stageOptions *osbuild.Kic
} }
} }
if sudoersPost := makeKickstartSudoersPost(kickstartOptions.SudoNopasswd); sudoersPost != nil {
stageOptions.Post = append(stageOptions.Post, *sudoersPost)
}
stages = append(stages, osbuild.NewKickstartStage(stageOptions))
hardcodedKickstartBits := ""
hardcodedKickstartBits += makeKickstartSudoersPost(kickstartOptions.SudoNopasswd)
if p.SubscriptionPipeline != nil {
subscriptionPath := "/subscription"
stages = append(stages, osbuild.NewMkdirStage(&osbuild.MkdirStageOptions{Paths: []osbuild.MkdirStagePath{{Path: subscriptionPath, Parents: true, ExistOk: true}}}))
@ -757,7 +771,7 @@ func (p *AnacondaInstallerISOTree) makeKickstartStages(stageOptions *osbuild.Kic
systemPath = "/mnt/sysroot" systemPath = "/mnt/sysroot"
} }
hardcodedKickstartBits += makeKickstartSubscriptionPost(subscriptionPath, systemPath) stageOptions.Post = append(stageOptions.Post, makeKickstartSubscriptionPost(subscriptionPath, systemPath)...)
// include a readme file on the ISO in the subscription path to explain what it's for // include a readme file on the ISO in the subscription path to explain what it's for
subscriptionReadme, err := fsnode.NewFile( subscriptionReadme, err := fsnode.NewFile(
@ -773,16 +787,6 @@ This directory contains files necessary for registering the system on first boot
p.Files = append(p.Files, subscriptionReadme)
}
if hardcodedKickstartBits != "" {
// Because osbuild core only supports a subset of options,
// we append to the base here with hardcoded wheel group with NOPASSWD option
kickstartFile, err := stageOptions.IncludeRaw(hardcodedKickstartBits)
if err != nil {
panic(err)
}
p.Files = append(p.Files, kickstartFile)
}
stages = append(stages, osbuild.GenFileNodesStages(p.Files)...)
return stages
@ -795,44 +799,43 @@ func makeISORootPath(p string) string {
return fmt.Sprintf("file://%s", fullpath)
}
func makeKickstartSudoersPost(names []string) string {
func makeKickstartSudoersPost(names []string) *osbuild.PostOptions {
if len(names) == 0 {
return ""
return nil
}
echoLineFmt := `echo -e "%[1]s\tALL=(ALL)\tNOPASSWD: ALL" > "/etc/sudoers.d/%[1]s"
chmod 0440 /etc/sudoers.d/%[1]s`
echoLineFmt := `echo -e "%[1]s\tALL=(ALL)\tNOPASSWD: ALL" > "/etc/sudoers.d/%[1]s"`
chmodLineFmt := `chmod 0440 /etc/sudoers.d/%[1]s`
filenames := make(map[string]bool)
sort.Strings(names)
entries := make([]string, 0, len(names))
post := &osbuild.PostOptions{}
for _, name := range names {
if filenames[name] {
continue
}
entries = append(entries, fmt.Sprintf(echoLineFmt, name))
post.Commands = append(post.Commands,
fmt.Sprintf(echoLineFmt, name),
fmt.Sprintf(chmodLineFmt, name),
)
filenames[name] = true
}
kickstartSudoersPost := `
%%post
%s
restorecon -rvF /etc/sudoers.d
%%end
`
return fmt.Sprintf(kickstartSudoersPost, strings.Join(entries, "\n"))
post.Commands = append(post.Commands, "restorecon -rvF /etc/sudoers.d")
return post
}
func makeKickstartSubscriptionPost(source, dest string) string {
func makeKickstartSubscriptionPost(source, dest string) []osbuild.PostOptions {
// we need to use --nochroot so the command can access files on the ISO
fullSourcePath := filepath.Join("/run/install/repo", source, "etc/*")
kickstartSubscriptionPost := `
%%post --nochroot
cp -r %s %s
%%end
%%post
systemctl enable osbuild-subscription-register.service
%%end
`
return fmt.Sprintf(kickstartSubscriptionPost, fullSourcePath, dest)
return []osbuild.PostOptions{
{
// we need to use --nochroot so the command can access files on the ISO
NoChroot: true,
Commands: []string{
fmt.Sprintf("cp -r %s %s", fullSourcePath, dest),
},
},
{
Commands: []string{"systemctl enable osbuild-subscription-register.service"},
},
}
}


@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"github.com/osbuild/images/pkg/container" "github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/customizations/fsnode"
"github.com/osbuild/images/pkg/osbuild" "github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd" "github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/runner" "github.com/osbuild/images/pkg/runner"
@ -63,6 +64,15 @@ type BuildOptions struct {
// build pipeline. This is only needed when doing cross-arch
// building
BootstrapPipeline Build
// In some cases we have multiple build pipelines
PipelineName string
// Copy in files from other pipeline
CopyFilesFrom map[string][]string
// Ensure directories exist
EnsureDirs []*fsnode.Directory
}
// policy or default returns the selinuxPolicy or (if unset) the
@ -82,6 +92,9 @@ func NewBuild(m *Manifest, runner runner.Runner, repos []rpmmd.RepoConfig, opts
}
name := "build"
if opts.PipelineName != "" {
name = opts.PipelineName
}
pipeline := &BuildrootFromPackages{ pipeline := &BuildrootFromPackages{
Base: NewBase(name, opts.BootstrapPipeline), Base: NewBase(name, opts.BootstrapPipeline),
runner: runner, runner: runner,
@ -199,6 +212,9 @@ type BuildrootFromContainer struct {
containerBuildable bool
disableSelinux bool
selinuxPolicy string
copyFilesFrom map[string][]string
ensureDirs []*fsnode.Directory
}
// NewBuildFromContainer creates a new build pipeline from the given
@ -209,6 +225,9 @@ func NewBuildFromContainer(m *Manifest, runner runner.Runner, containerSources [
}
name := "build"
if opts.PipelineName != "" {
name = opts.PipelineName
}
pipeline := &BuildrootFromContainer{ pipeline := &BuildrootFromContainer{
Base: NewBase(name, opts.BootstrapPipeline), Base: NewBase(name, opts.BootstrapPipeline),
runner: runner, runner: runner,
@ -218,6 +237,9 @@ func NewBuildFromContainer(m *Manifest, runner runner.Runner, containerSources [
containerBuildable: opts.ContainerBuildable, containerBuildable: opts.ContainerBuildable,
disableSelinux: opts.DisableSELinux, disableSelinux: opts.DisableSELinux,
selinuxPolicy: policyOrDefault(opts.SELinuxPolicy), selinuxPolicy: policyOrDefault(opts.SELinuxPolicy),
copyFilesFrom: opts.CopyFilesFrom,
ensureDirs: opts.EnsureDirs,
}
m.addPipeline(pipeline)
return pipeline
@ -288,6 +310,27 @@ func (p *BuildrootFromContainer) serialize() osbuild.Pipeline {
panic(err)
}
pipeline.AddStage(stage)
for _, stage := range osbuild.GenDirectoryNodesStages(p.ensureDirs) {
pipeline.AddStage(stage)
}
for copyFilesFrom, copyFiles := range p.copyFilesFrom {
inputName := "copy-tree"
paths := []osbuild.CopyStagePath{}
for _, copyPath := range copyFiles {
paths = append(paths, osbuild.CopyStagePath{
From: fmt.Sprintf("input://%s%s", inputName, copyPath),
To: fmt.Sprintf("tree://%s", copyPath),
})
}
pipeline.AddStage(osbuild.NewCopyStageSimple(
&osbuild.CopyStageOptions{Paths: paths},
osbuild.NewPipelineTreeInputs(inputName, copyFilesFrom),
))
}
if !p.disableSelinux {
pipeline.AddStage(osbuild.NewSELinuxStage(
&osbuild.SELinuxStageOptions{


@ -400,6 +400,13 @@ func (p *OS) getBuildPackages(distro Distro) []string {
packages = append(packages, tomlPkgsFor(distro)...)
}
if p.platform.GetBootloader() == platform.BOOTLOADER_UKI {
// Only required if the hmac stage is added, which depends on the
// version of uki-direct. Add it conditioned just on the bootloader
// type for now, until we find a better way to decide.
packages = append(packages, "libkcapi-hmaccalc")
}
return packages
}
@ -495,6 +502,15 @@ func (p *OS) serialize() osbuild.Pipeline {
// https://github.com/osbuild/images/issues/624
rpmOptions.DisableDracut = true
}
if p.platform.GetBootloader() == platform.BOOTLOADER_UKI && p.PartitionTable != nil {
espMountpoint, err := findESPMountpoint(p.PartitionTable)
if err != nil {
panic(err)
}
rpmOptions.KernelInstallEnv = &osbuild.KernelInstallEnv{
BootRoot: espMountpoint,
}
}
pipeline.AddStage(osbuild.NewRPMStage(rpmOptions, osbuild.NewRpmStageSourceFilesInputs(p.packageSpecs)))
if !p.OSCustomizations.NoBLS {
@ -707,70 +723,32 @@ func (p *OS) serialize() osbuild.Pipeline {
}
pipeline.AddStages(fsCfgStages...)
var bootloader *osbuild.Stage
switch p.platform.GetArch() {
case arch.ARCH_S390X:
bootloader = osbuild.NewZiplStage(new(osbuild.ZiplStageOptions))
default:
if p.OSCustomizations.NoBLS {
// BLS entries not supported: use grub2.legacy
id := "76a22bf4-f153-4541-b6c7-0332c0dfaeac"
product := osbuild.GRUB2Product{
Name: p.OSProduct,
Version: p.OSVersion,
Nick: p.OSNick,
}
_, err := rpmmd.GetVerStrFromPackageSpecList(p.packageSpecs, "dracut-config-rescue")
hasRescue := err == nil
bootloader = osbuild.NewGrub2LegacyStage(
osbuild.NewGrub2LegacyStageOptions(
p.OSCustomizations.Grub2Config,
p.PartitionTable,
kernelOptions,
p.platform.GetBIOSPlatform(),
p.platform.GetUEFIVendor(),
osbuild.MakeGrub2MenuEntries(id, p.kernelVer, product, hasRescue),
),
)
} else {
options := osbuild.NewGrub2StageOptions(pt,
strings.Join(kernelOptions, " "),
p.kernelVer,
p.platform.GetUEFIVendor() != "",
p.platform.GetBIOSPlatform(),
p.platform.GetUEFIVendor(), false)
// Avoid a race condition because Grub2Config may be shared when set (yay pointers!)
if p.OSCustomizations.Grub2Config != nil {
// Make a COPY of it
cfg := *p.OSCustomizations.Grub2Config
// TODO: don't store Grub2Config in OSPipeline, making the overrides unnecessary
// grub2.Config.Default is owned and set by `NewGrub2StageOptionsUnified`
// and thus we need to preserve it
if options.Config != nil {
cfg.Default = options.Config.Default
}
// Point to the COPY with the possibly new Default value
options.Config = &cfg
}
if p.OSCustomizations.KernelOptionsBootloader {
options.WriteCmdLine = nil
if options.UEFI != nil {
options.UEFI.Unified = false
}
}
bootloader = osbuild.NewGRUB2Stage(options)
}
}
pipeline.AddStage(bootloader)
if !p.OSCustomizations.KernelOptionsBootloader || p.platform.GetArch() == arch.ARCH_S390X {
switch p.platform.GetBootloader() {
case platform.BOOTLOADER_GRUB2:
pipeline.AddStage(grubStage(p, pt, kernelOptions))
if !p.OSCustomizations.KernelOptionsBootloader {
pipeline = prependKernelCmdlineStage(pipeline, rootUUID, kernelOptions)
}
case platform.BOOTLOADER_ZIPL:
pipeline.AddStage(osbuild.NewZiplStage(new(osbuild.ZiplStageOptions)))
pipeline = prependKernelCmdlineStage(pipeline, rootUUID, kernelOptions)
case platform.BOOTLOADER_UKI:
espMountpoint, err := findESPMountpoint(pt)
if err != nil {
panic(err)
}
csvfile, err := ukiBootCSVfile(espMountpoint, p.platform.GetArch(), p.kernelVer, p.platform.GetUEFIVendor())
if err != nil {
panic(err)
}
p.addInlineDataAndStages(&pipeline, []*fsnode.File{csvfile})
stages, err := maybeAddHMACandDirStage(p.packageSpecs, espMountpoint, p.kernelVer)
if err != nil {
panic(err)
}
pipeline.AddStages(stages...)
}
}
if p.OSCustomizations.RHSMFacts != nil {
@ -992,6 +970,166 @@ func usersFirstBootOptions(users []users.User) *osbuild.FirstBootStageOptions {
return options
}
func grubStage(p *OS, pt *disk.PartitionTable, kernelOptions []string) *osbuild.Stage {
if p.OSCustomizations.NoBLS {
// BLS entries not supported: use grub2.legacy
id := "76a22bf4-f153-4541-b6c7-0332c0dfaeac"
product := osbuild.GRUB2Product{
Name: p.OSProduct,
Version: p.OSVersion,
Nick: p.OSNick,
}
_, err := rpmmd.GetVerStrFromPackageSpecList(p.packageSpecs, "dracut-config-rescue")
hasRescue := err == nil
return osbuild.NewGrub2LegacyStage(
osbuild.NewGrub2LegacyStageOptions(
p.OSCustomizations.Grub2Config,
p.PartitionTable,
kernelOptions,
p.platform.GetBIOSPlatform(),
p.platform.GetUEFIVendor(),
osbuild.MakeGrub2MenuEntries(id, p.kernelVer, product, hasRescue),
),
)
} else {
options := osbuild.NewGrub2StageOptions(pt,
strings.Join(kernelOptions, " "),
p.kernelVer,
p.platform.GetUEFIVendor() != "",
p.platform.GetBIOSPlatform(),
p.platform.GetUEFIVendor(), false)
// Avoid a race condition because Grub2Config may be shared when set (yay pointers!)
if p.OSCustomizations.Grub2Config != nil {
// Make a COPY of it
cfg := *p.OSCustomizations.Grub2Config
// TODO: don't store Grub2Config in OSPipeline, making the overrides unnecessary
// grub2.Config.Default is owned and set by `NewGrub2StageOptionsUnified`
// and thus we need to preserve it
if options.Config != nil {
cfg.Default = options.Config.Default
}
// Point to the COPY with the possibly new Default value
options.Config = &cfg
}
if p.OSCustomizations.KernelOptionsBootloader {
options.WriteCmdLine = nil
if options.UEFI != nil {
options.UEFI.Unified = false
}
}
return osbuild.NewGRUB2Stage(options)
}
}
// ukiBootCSVfile creates a file node for the csv file in the ESP which
// controls the fallback boot to the UKI.
// NOTE: This is a temporary workaround. We expect that the kernel-bootcfg
// command from the python3-virt-firmware package will gain the ability to
// write these files offline during the RHEL 9.7 / 10.1 development cycle.
func ukiBootCSVfile(espMountpoint string, architecture arch.Arch, kernelVer, vendor string) (*fsnode.File, error) {
shortArch := ""
switch architecture {
case arch.ARCH_AARCH64:
shortArch = "aa64"
case arch.ARCH_X86_64:
shortArch = "x64"
default:
return nil, fmt.Errorf("ukiBootCSVfile: UKIs are only supported for x86_64 and aarch64")
}
kernelFilename := fmt.Sprintf("ffffffffffffffffffffffffffffffff-%s.efi", kernelVer)
data := fmt.Sprintf("shim%s.efi,%s,\\EFI\\Linux\\%s ,UKI bootentry\n", shortArch, vendor, kernelFilename)
csvPath := filepath.Join(espMountpoint, "EFI", vendor, fmt.Sprintf("BOOT%s.CSV", strings.ToUpper(shortArch)))
return fsnode.NewFile(csvPath, nil, nil, nil, common.EncodeUTF16le(data))
}
func findESPMountpoint(pt *disk.PartitionTable) (string, error) {
// the ESP in our images is always at /boot/efi, but let's make this more
// flexible and future proof by finding the ESP mountpoint from the
// partition table
espMountpoint := ""
_ = pt.ForEachMountable(func(mnt disk.Mountable, path []disk.Entity) error {
parent := path[1]
if partition, ok := parent.(*disk.Partition); ok {
// ESP filesystem parent must be a plain partition
if partition.Type != disk.EFISystemPartitionGUID {
return nil
}
// found ESP filesystem
espMountpoint = mnt.GetMountpoint()
}
return nil
})
if espMountpoint == "" {
return "", fmt.Errorf("failed to find mountpoint for ESP when generating boot CSV file")
}
return espMountpoint, nil
}
// The 99-uki-uefi-setup.install script, prior to v25.3 of the uki-direct
// package, would run `bootctl -p` to discover the ESP [1]. This doesn't work
// in osbuild because the system isn't booted or live. Since v25.3, the install
// script respects the $BOOT_ROOT env var that we set in osbuild during the
// org.osbuild.rpm stage.
//
// This function takes care of the main issue we have with the older version of
// the script: it generates the .hmac file for the kernel in the ESP.
// A less critical issue is that the .extra.d/ directory for kernel addons
// isn't created, so this function will also return a mkdir stage for that
// directory.
// The updated package is expected to be released in RHEL 9.7 and 10.1 (and
// will possibly be backported to RHEL 9.6 and 10.0).
//
// The function will return nil if the uki-direct package includes the fix
// (v25.3+) or if the uki-direct package is not included in the package list.
//
// [1] https://gitlab.com/kraxel/virt-firmware/-/commit/ca385db4f74a4d542455b9d40c91c8448c7be90c
func maybeAddHMACandDirStage(packages []rpmmd.PackageSpec, espMountpoint, kernelVer string) ([]*osbuild.Stage, error) {
ukiDirectVer, err := rpmmd.GetVerStrFromPackageSpecList(packages, "uki-direct")
if err != nil {
// the uki-direct package isn't in the list: no override necessary
return nil, nil
}
// The GetVerStrFromPackageSpecList function returns
// <version>-<release>.<arch>. For the real package version, this doesn't
// appear to cause any issues with the version parser used by
// VersionLessThan. If a mock depsolver is used this can cause issues
// (Malformed version: 0-8.fk1.x86_64). Make sure we only use the <version>
// component to avoid issues.
ukiDirectVer = strings.SplitN(ukiDirectVer, "-", 2)[0]
if common.VersionLessThan(ukiDirectVer, "25.3") {
// generate hmac file using stage
kernelFilename := fmt.Sprintf("ffffffffffffffffffffffffffffffff-%s.efi", kernelVer)
kernelPath := filepath.Join(espMountpoint, "EFI", "Linux", kernelFilename)
hmacStage := osbuild.NewHMACStage(&osbuild.HMACStageOptions{
Paths: []string{kernelPath},
Algorithm: "sha512",
})
addonsPath := kernelPath + ".extra.d"
addonsDir, err := fsnode.NewDirectory(addonsPath, nil, nil, nil, true)
if err != nil {
return nil, err
}
dirStages := osbuild.GenDirectoryNodesStages([]*fsnode.Directory{addonsDir})
return append([]*osbuild.Stage{hmacStage}, dirStages...), nil
}
// package is new enough
return nil, nil
}
func (p *OS) Platform() platform.Platform {
return p.platform
}


@ -131,6 +131,11 @@ func (p *OSTreeDeployment) getBuildPackages(Distro) []string {
packages := []string{
"rpm-ostree",
}
if len(p.Users) > 0 {
packages = append(packages, "shadow-utils")
}
return packages
}


@ -11,6 +11,7 @@ type Tar struct {
filename string
Format osbuild.TarArchiveFormat
Compression osbuild.TarArchiveCompression
RootNode osbuild.TarRootNode
Paths []string
ACLs *bool
@ -53,6 +54,7 @@ func (p *Tar) serialize() osbuild.Pipeline {
tarOptions := &osbuild.TarStageOptions{ tarOptions := &osbuild.TarStageOptions{
Filename: p.Filename(), Filename: p.Filename(),
Format: p.Format, Format: p.Format,
Compression: p.Compression,
ACLs: p.ACLs, ACLs: p.ACLs,
SELinux: p.SELinux, SELinux: p.SELinux,
Xattrs: p.Xattrs, Xattrs: p.Xattrs,


@ -0,0 +1,58 @@
package manifest
import (
"github.com/osbuild/images/pkg/artifact"
"github.com/osbuild/images/pkg/osbuild"
)
type Vagrant struct {
Base
filename string
imgPipeline FilePipeline
}
func (p Vagrant) Filename() string {
return p.filename
}
func (p *Vagrant) SetFilename(filename string) {
p.filename = filename
}
func NewVagrant(buildPipeline Build, imgPipeline FilePipeline) *Vagrant {
p := &Vagrant{
Base: NewBase("vagrant", buildPipeline),
imgPipeline: imgPipeline,
filename: "image.box",
}
if buildPipeline != nil {
buildPipeline.addDependent(p)
} else {
imgPipeline.Manifest().addPipeline(p)
}
return p
}
func (p *Vagrant) serialize() osbuild.Pipeline {
pipeline := p.Base.serialize()
pipeline.AddStage(osbuild.NewVagrantStage(
osbuild.NewVagrantStageOptions(osbuild.VagrantProviderLibvirt),
osbuild.NewVagrantStagePipelineFilesInputs(p.imgPipeline.Name(), p.imgPipeline.Filename()),
))
return pipeline
}
func (p *Vagrant) getBuildPackages(Distro) []string {
return []string{"qemu-img"}
}
func (p *Vagrant) Export() *artifact.Artifact {
p.Base.export = true
mimeType := "application/x-qemu-disk"
return artifact.New(p.Name(), p.Filename(), &mimeType)
}
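Illustrative sketch only, not part of the vendored diff: wiring the new Vagrant pipeline around an existing file-producing pipeline (buildPipeline and qcow2Pipeline are hypothetical pipelines created elsewhere).

	vagrant := manifest.NewVagrant(buildPipeline, qcow2Pipeline)
	vagrant.SetFilename("image.box")
	boxArtifact := vagrant.Export() // exported with MIME type application/x-qemu-disk
	_ = boxArtifact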


@ -38,6 +38,7 @@ type KickstartStageOptions struct {
AutoPart *AutoPartOptions `json:"autopart,omitempty"`
Network []NetworkOptions `json:"network,omitempty"`
Bootloader *BootloaderOptions `json:"bootloader,omitempty"`
Post []PostOptions `json:"%post,omitempty"`
}
type BootloaderOptions struct {
@ -118,6 +119,14 @@ type RootPasswordOptions struct {
Password string `json:"password,omitempty"`
}
type PostOptions struct {
ErrorOnFail bool `json:"erroronfail,omitempty"`
Interpreter string `json:"interpreter,omitempty"`
Log string `json:"log,omitempty"`
NoChroot bool `json:"nochroot,omitempty"`
Commands []string `json:"commands"`
}
func (KickstartStageOptions) isStageOptions() {}
// Creates an Anaconda kickstart file
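Illustrative sketch only, not part of the vendored diff: expressing a %post section through the new structured Post field instead of raw kickstart text (the command is hypothetical).

	ksOptions := &osbuild.KickstartStageOptions{
		Post: []osbuild.PostOptions{{
			ErrorOnFail: true,
			Commands:    []string{`echo "Install finished" > /dev/ttyS0 || true`},
		}},
	}
	ksStage := osbuild.NewKickstartStage(ksOptions)
	_ = ksStage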


@ -23,6 +23,9 @@ type RPMStageOptions struct {
// Create the '/run/ostree-booted' marker
OSTreeBooted *bool `json:"ostree_booted,omitempty"`
// Set environment variables understood by kernel-install and plugins (kernel-install(8))
KernelInstallEnv *KernelInstallEnv `json:"kernel_install_env,omitempty"`
}
type Exclude struct {
@ -30,6 +33,12 @@ type Exclude struct {
Docs bool `json:"docs,omitempty"`
}
type KernelInstallEnv struct {
// Sets $BOOT_ROOT for kernel-install to override
// $KERNEL_INSTALL_BOOT_ROOT, the installation location for boot entries
BootRoot string `json:"boot_root,omitempty"`
}
// RPMPackage represents one RPM, as referenced by its content hash
// (checksum). The files source must indicate where to fetch the given
// RPM. If CheckGPG is `true` the RPM must be signed with one of the
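Illustrative sketch only, not part of the vendored diff: pointing kernel-install at the ESP during the RPM stage (the mountpoint is an assumption).

	rpmOptions := &osbuild.RPMStageOptions{
		KernelInstallEnv: &osbuild.KernelInstallEnv{BootRoot: "/boot/efi"}, // assumed ESP mountpoint
	}
	_ = rpmOptions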


@ -13,6 +13,18 @@ const (
TarArchiveFormatV7 TarArchiveFormat = "v7"
)
type TarArchiveCompression string
// valid values for the 'compression' Tar stage option
const (
// `auto` means based on filename
TarArchiveCompressionAuto TarArchiveCompression = "auto"
TarArchiveCompressionXz TarArchiveCompression = "xz"
TarArchiveCompressionGzip TarArchiveCompression = "gzip"
TarArchiveCompressionZstd TarArchiveCompression = "zstd"
)
type TarRootNode string
// valid values for the 'root-node' Tar stage option
@ -28,6 +40,9 @@ type TarStageOptions struct {
// Archive format to use
Format TarArchiveFormat `json:"format,omitempty"`
// Compression to use, defaults to "auto" which is based on filename
Compression TarArchiveCompression `json:"compression,omitempty"`
// Enable support for POSIX ACLs
ACLs *bool `json:"acls,omitempty"`
@ -70,6 +85,25 @@ func (o TarStageOptions) validate() error {
}
}
if o.Compression != "" {
allowedArchiveCompressionValues := []TarArchiveCompression{
TarArchiveCompressionAuto,
TarArchiveCompressionXz,
TarArchiveCompressionGzip,
TarArchiveCompressionZstd,
}
valid := false
for _, value := range allowedArchiveCompressionValues {
if o.Compression == value {
valid = true
break
}
}
if !valid {
return fmt.Errorf("'compression' option does not allow %q as a value", o.Compression)
}
}
if o.RootNode != "" { if o.RootNode != "" {
allowedRootNodeValues := []TarRootNode{ allowedRootNodeValues := []TarRootNode{
TarRootNodeInclude, TarRootNodeInclude,
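Illustrative sketch only, not part of the vendored diff: requesting explicit compression instead of relying on the filename (the filename is hypothetical; validate() rejects values outside the list above).

	tarOptions := &osbuild.TarStageOptions{
		Filename:    "rootfs.tar.zst", // hypothetical output name
		Compression: osbuild.TarArchiveCompressionZstd,
	}
	_ = tarOptions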


@ -0,0 +1,54 @@
package osbuild
import (
"fmt"
)
type VagrantProvider string
const (
VagrantProviderLibvirt VagrantProvider = "libvirt"
)
type VagrantStageOptions struct {
Provider VagrantProvider `json:"provider"`
}
func (VagrantStageOptions) isStageOptions() {}
type VagrantStageInputs struct {
Image *FilesInput `json:"image"`
}
func (VagrantStageInputs) isStageInputs() {}
func NewVagrantStage(options *VagrantStageOptions, inputs *VagrantStageInputs) *Stage {
if err := options.validate(); err != nil {
panic(err)
}
return &Stage{
Type: "org.osbuild.vagrant",
Options: options,
Inputs: inputs,
}
}
func NewVagrantStageOptions(provider VagrantProvider) *VagrantStageOptions {
return &VagrantStageOptions{
Provider: provider,
}
}
func (o *VagrantStageOptions) validate() error {
if o.Provider != VagrantProviderLibvirt {
return fmt.Errorf("unknown provider in vagrant stage options %s", o.Provider)
}
return nil
}
func NewVagrantStagePipelineFilesInputs(pipeline, file string) *VagrantStageInputs {
input := NewFilesInput(NewFilesInputPipelineObjectRef(pipeline, file, nil))
return &VagrantStageInputs{Image: input}
}
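Illustrative sketch only, not part of the vendored diff: assembling the new org.osbuild.vagrant stage from a hypothetical qcow2 pipeline.

	vagrantStage := osbuild.NewVagrantStage(
		osbuild.NewVagrantStageOptions(osbuild.VagrantProviderLibvirt),
		osbuild.NewVagrantStagePipelineFilesInputs("qcow2", "disk.qcow2"), // hypothetical pipeline name and file
	)
	_ = vagrantStage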


@ -32,6 +32,10 @@ func (p *Aarch64) GetPackages() []string {
return packages
}
func (p *Aarch64) GetBootloader() Bootloader {
return BOOTLOADER_GRUB2
}
type Aarch64_Fedora struct {
BasePlatform
UEFIVendor string
@ -64,3 +68,7 @@ func (p *Aarch64_Fedora) GetPackages() []string {
func (p *Aarch64_Fedora) GetBootFiles() [][2]string {
return p.BootFiles
}
func (p *Aarch64_Fedora) GetBootloader() Bootloader {
return BOOTLOADER_GRUB2
}


@ -3,6 +3,7 @@ package platform
import (
"encoding/json"
"fmt"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/arch"
@ -19,8 +20,47 @@ const ( // image format enum
FORMAT_VHD
FORMAT_GCE
FORMAT_OVA
FORMAT_VAGRANT_LIBVIRT
)
type Bootloader int
const ( // bootloader enum
BOOTLOADER_NONE Bootloader = iota
BOOTLOADER_GRUB2
BOOTLOADER_ZIPL
BOOTLOADER_UKI
)
func (b *Bootloader) UnmarshalJSON(data []byte) (err error) {
var s string
if err := json.Unmarshal(data, &s); err != nil {
return err
}
*b, err = FromString(s)
return err
}
func (b *Bootloader) UnmarshalYAML(unmarshal func(any) error) error {
return common.UnmarshalYAMLviaJSON(b, unmarshal)
}
func FromString(b string) (Bootloader, error) {
// ignore case
switch strings.ToLower(b) {
case "grub2":
return BOOTLOADER_GRUB2, nil
case "zipl":
return BOOTLOADER_ZIPL, nil
case "uki":
return BOOTLOADER_UKI, nil
case "", "none":
return BOOTLOADER_NONE, nil
default:
return BOOTLOADER_NONE, fmt.Errorf("unsupported bootloader %q", b)
}
}
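Illustrative sketch only, not part of the vendored diff: mapping a distro definition string to the new enum (parsing is case-insensitive; "" and "none" map to BOOTLOADER_NONE).

	bl, err := platform.FromString("uki")
	if err != nil {
		panic(err)
	}
	_ = bl // BOOTLOADER_UKI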
func (f ImageFormat) String() string {
switch f {
case FORMAT_UNSET:
@ -39,6 +79,8 @@ func (f ImageFormat) String() string {
return "gce"
case FORMAT_OVA:
return "ova"
case FORMAT_VAGRANT_LIBVIRT:
return "vagrant_libvirt"
default:
panic(fmt.Errorf("unknown image format %d", f))
}
@ -66,6 +108,8 @@ func (f *ImageFormat) UnmarshalJSON(data []byte) error {
*f = FORMAT_GCE
case "ova":
*f = FORMAT_OVA
case "vagrant_libvirt":
*f = FORMAT_VAGRANT_LIBVIRT
default:
panic(fmt.Errorf("unknown image format %q", s))
}
@ -86,6 +130,7 @@ type Platform interface {
GetPackages() []string
GetBuildPackages() []string
GetBootFiles() [][2]string
GetBootloader() Bootloader
}
type BasePlatform struct {
@ -125,3 +170,7 @@ func (p BasePlatform) GetBuildPackages() []string {
func (p BasePlatform) GetBootFiles() [][2]string {
return [][2]string{}
}
func (p BasePlatform) GetBootloader() Bootloader {
return BOOTLOADER_NONE
}


@ -47,3 +47,7 @@ func (p *PPC64LE) GetBuildPackages() []string {
return packages
}
func (p *PPC64LE) GetBootloader() Bootloader {
return BOOTLOADER_GRUB2
}


@ -40,3 +40,7 @@ func (p *RISCV64) GetBuildPackages() []string {
func (p *RISCV64) GetUEFIVendor() string {
return p.UEFIVendor
}
func (p *RISCV64) GetBootloader() Bootloader {
return BOOTLOADER_GRUB2
}


@ -40,3 +40,7 @@ func (p *S390X) GetBuildPackages() []string {
return packages
}
func (p *S390X) GetBootloader() Bootloader {
return BOOTLOADER_ZIPL
}


@ -8,6 +8,7 @@ type X86 struct {
BasePlatform BasePlatform
BIOS bool
UEFIVendor string
Bootloader Bootloader
}
func (p *X86) GetArch() arch.Arch {
@ -28,6 +29,8 @@ func (p *X86) GetUEFIVendor() string {
func (p *X86) GetPackages() []string {
packages := p.BasePlatform.FirmwarePackages
switch p.GetBootloader() {
case BOOTLOADER_GRUB2:
if p.BIOS {
packages = append(packages,
"dracut-config-generic",
@ -41,6 +44,14 @@ func (p *X86) GetPackages() []string {
"grub2-efi-x64", "grub2-efi-x64",
"shim-x64") "shim-x64")
} }
case BOOTLOADER_UKI:
packages = append(packages,
"efibootmgr",
"kernel-uki-virt-addons", // provides useful cmdline utilities for the UKI
"shim-x64",
"uki-direct",
)
}
return packages
}
@ -52,3 +63,10 @@ func (p *X86) GetBuildPackages() []string {
}
return packages
}
func (p *X86) GetBootloader() Bootloader {
if p.Bootloader == BOOTLOADER_NONE {
return BOOTLOADER_GRUB2
}
return p.Bootloader
}
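Illustrative sketch only, not part of the vendored diff: opting an x86_64 platform into the UKI boot flow (the vendor string is hypothetical; leaving Bootloader unset still yields GRUB2).

	plat := &platform.X86{
		UEFIVendor: "fedora",                // hypothetical vendor
		Bootloader: platform.BOOTLOADER_UKI, // pulls in shim-x64, efibootmgr, uki-direct, ...
	}
	_ = plat.GetBootloader() // BOOTLOADER_UKI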


@ -20,6 +20,8 @@ type PlatformConf struct {
Packages map[string][]string `yaml:"packages"`
BuildPackages map[string][]string `yaml:"build_packages"`
BootFiles [][2]string `yaml:"boot_files"`
Bootloader Bootloader `yaml:"bootloader"`
}
// ensure PlatformConf implements the Platform interface
@ -60,3 +62,7 @@ func (pc *PlatformConf) GetBuildPackages() []string {
func (pc *PlatformConf) GetBootFiles() [][2]string {
return pc.BootFiles
}
func (pc *PlatformConf) GetBootloader() Bootloader {
return pc.Bootloader
}
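Illustrative sketch only, not part of the vendored diff: a YAML-backed platform definition now carries its bootloader as well (the value is hypothetical).

	pc := &platform.PlatformConf{Bootloader: platform.BOOTLOADER_ZIPL}
	_ = pc.GetBootloader() // BOOTLOADER_ZIPL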

vendor/github.com/osbuild/images/pkg/runner/yaml.go generated vendored

@ -0,0 +1,17 @@
package runner
// RunnerConf implements the runner interface
var _ = Runner(&RunnerConf{})
type RunnerConf struct {
Name string `yaml:"name"`
BuildPackages []string `yaml:"build_packages"`
}
func (r *RunnerConf) String() string {
return r.Name
}
func (r *RunnerConf) GetBuildPackages() []string {
return r.BuildPackages
}
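Illustrative sketch only, not part of the vendored diff: a runner defined from YAML-style data (the name and package list are hypothetical).

	r := &runner.RunnerConf{
		Name:          "org.osbuild.fedora41",       // hypothetical runner name
		BuildPackages: []string{"glibc", "systemd"}, // hypothetical build packages
	}
	_, _ = r.String(), r.GetBuildPackages()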

vendor/modules.txt vendored

@ -416,10 +416,10 @@ github.com/containerd/stargz-snapshotter/estargz/errorutil
# github.com/containerd/typeurl/v2 v2.2.3
## explicit; go 1.21
github.com/containerd/typeurl/v2
# github.com/containers/common v0.62.0
# github.com/containers/common v0.62.3
## explicit; go 1.22.8
github.com/containers/common/pkg/retry
# github.com/containers/image/v5 v5.34.0
# github.com/containers/image/v5 v5.34.3
## explicit; go 1.22.8
github.com/containers/image/v5/copy
github.com/containers/image/v5/directory
@ -504,7 +504,7 @@ github.com/containers/ocicrypt/keywrap/pkcs7
github.com/containers/ocicrypt/spec
github.com/containers/ocicrypt/utils
github.com/containers/ocicrypt/utils/keyprovider
# github.com/containers/storage v1.57.1
# github.com/containers/storage v1.57.2
## explicit; go 1.22.0
github.com/containers/storage
github.com/containers/storage/drivers
@ -1049,7 +1049,7 @@ github.com/oracle/oci-go-sdk/v54/workrequests
## explicit; go 1.22.8
github.com/osbuild/blueprint/internal/common
github.com/osbuild/blueprint/pkg/blueprint
# github.com/osbuild/images v0.148.0
# github.com/osbuild/images v0.151.0
## explicit; go 1.22.8
github.com/osbuild/images/data/dependencies
github.com/osbuild/images/data/repositories
@ -1076,7 +1076,7 @@ github.com/osbuild/images/pkg/datasizes
github.com/osbuild/images/pkg/disk
github.com/osbuild/images/pkg/distro
github.com/osbuild/images/pkg/distro/defs
github.com/osbuild/images/pkg/distro/fedora
github.com/osbuild/images/pkg/distro/generic
github.com/osbuild/images/pkg/distro/rhel
github.com/osbuild/images/pkg/distro/rhel/rhel10
github.com/osbuild/images/pkg/distro/rhel/rhel7