go.mod: update osbuild/images to v0.168.0
tag v0.165.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.165.0
----------------
  * distro: move rhel9 into a generic distro (osbuild/images#1645)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * Revert "distro: drop `ImageType.BasePartitionTable()`" (osbuild/images#1691)
    * Author: Michael Vogt, Reviewers: Simon de Vlieger, Tomáš Hozza
  * Update dependencies 2025-07-20 (osbuild/images#1675)
    * Author: SchutzBot, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * defs: add missing `bootstrap_containers` (osbuild/images#1679)
    * Author: Michael Vogt, Reviewers: Simon de Vlieger, Tomáš Hozza
  * disk: handle adding `PReP` partition on PPC64/s390x (HMS-8884) (osbuild/images#1681)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * distro: bring per-distro checkOptions back (osbuild/images#1678)
    * Author: Michael Vogt, Reviewers: Simon de Vlieger, Tomáš Hozza
  * distro: cleanups in the pkg/distro/generic area (osbuild/images#1686)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * distro: move rhel8 into a generic distro (osbuild/images#1643)
    * Author: Michael Vogt, Reviewers: Nobody
  * distro: small followups for PR#1682 (osbuild/images#1689)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Simon de Vlieger, Tomáš Hozza
  * distro: unify transform/match into a single concept (osbuild/images#1682)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Tomáš Hozza
  * distros: de-duplicate runner build packages for centos10 (osbuild/images#1680)
    * Author: Michael Vogt, Reviewers: Simon de Vlieger, Tomáš Hozza
  * github: disable Go dep updates through dependabot (osbuild/images#1683)
    * Author: Achilleas Koutsou, Reviewers: Simon de Vlieger, Tomáš Hozza
  * repos: include almalinux 9.6 (osbuild/images#1677)
    * Author: Simon de Vlieger, Reviewers: Lukáš Zapletal, Tomáš Hozza
  * rhel9: wsl distribution config (osbuild/images#1694)
    * Author: Simon de Vlieger, Reviewers: Michael Vogt, Sanne Raymaekers
  * test/manifests/all-customizations: don't embed local file via URI (osbuild/images#1684)
    * Author: Tomáš Hozza, Reviewers: Achilleas Koutsou, Brian C. Lane

— Somewhere on the Internet, 2025-07-28

---

tag v0.166.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.166.0
----------------
  * customizations/subscription: conditionally enable semanage call (HMS-8866) (osbuild/images#1673)
    * Author: Sanne Raymaekers, Reviewers: Achilleas Koutsou, Michael Vogt
  * distro/rhel-10: versionlock shim-x64 in the azure-cvm image (osbuild/images#1697)
    * Author: Achilleas Koutsou, Reviewers: Michael Vogt, Simon de Vlieger
  * manifestmock: move container/pkg/commit mocks into helper (osbuild/images#1700)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * rhel9: `vagrant-libvirt`, `vagrant-virtualbox` (osbuild/images#1693)
    * Author: Simon de Vlieger, Reviewers: Michael Vogt, Sanne Raymaekers
  * rhel{9,10}: centos WSL refinement (HMS-8922) (osbuild/images#1690)
    * Author: Simon de Vlieger, Reviewers: Ondřej Budai, Sanne Raymaekers, Tomáš Hozza

— Somewhere on the Internet, 2025-07-29

---

tag v0.167.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.167.0
----------------
  * RHEL/Azure: drop obsolete WAAgentConfig keys [RHEL-93894] and remove loglevel kernel option [RHEL-102372] (osbuild/images#1611)
    * Author: Achilleas Koutsou, Reviewers: Michael Vogt, Ondřej Budai, Sanne Raymaekers
  * Update dependencies 2025-07-27 (osbuild/images#1699)
    * Author: SchutzBot, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * distro/rhel9: set default_kernel to kernel-uki-virt (osbuild/images#1704)
    * Author: Achilleas Koutsou, Reviewers: Ondřej Budai, Simon de Vlieger
  * distro: drop legacy loaders and update tests (osbuild/images#1687)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Tomáš Hozza
  * distro: fix issues with yaml distro definitions and enable yaml checks (osbuild/images#1702)
    * Author: Achilleas Koutsou, Reviewers: Michael Vogt, Ondřej Budai, Simon de Vlieger

— Somewhere on the Internet, 2025-07-30

---

tag v0.168.0
Tagger: imagebuilder-bot <imagebuilder-bots+imagebuilder-bot@redhat.com>

Changes with 0.168.0
----------------
  * distro: fix bug in variable substitution for static distros (osbuild/images#1710)
    * Author: Michael Vogt, Reviewers: Achilleas Koutsou, Simon de Vlieger
  * rhel{9,10}: azure for non-RHEL (HMS-8949) (osbuild/images#1707)
    * Author: Simon de Vlieger, Reviewers: Achilleas Koutsou, Michael Vogt

— Somewhere on the Internet, 2025-07-30

---
parent fad3b35d49
commit 6497b7520d
856 changed files with 72834 additions and 136836 deletions
vendor/github.com/containers/storage/.cirrus.yml (generated, vendored, 5 changes)

@@ -17,13 +17,13 @@ env:
 ####
 #### Cache-image names to test with (double-quotes around names are critical)
 ###
-FEDORA_NAME: "fedora-41"
+FEDORA_NAME: "fedora-42"
 DEBIAN_NAME: "debian-13"
 
 # GCE project where images live
 IMAGE_PROJECT: "libpod-218412"
 # VM Image built in containers/automation_images
-IMAGE_SUFFIX: "c20250324t111922z-f41f40d13"
+IMAGE_SUFFIX: "c20250422t130822z-f42f41d13"
 FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
 DEBIAN_CACHE_IMAGE_NAME: "debian-${IMAGE_SUFFIX}"

@@ -128,6 +128,7 @@ lint_task:
 apt-get update
 apt-get install -y libbtrfs-dev libsubid-dev
 test_script: |
 [ -n "${CIRRUS_BASE_SHA}" ] && git fetch origin ${CIRRUS_BASE_SHA} # Make ${CIRRUS_BASE_SHA} resolvable for git-validation
 make TAGS=regex_precompile local-validate
+make lint
 make clean
vendor/github.com/containers/storage/Makefile (generated, vendored, 2 changes)

@@ -35,7 +35,7 @@ TESTFLAGS := $(shell $(GO) test -race $(BUILDFLAGS) ./pkg/stringutils 2>&1 > /de
 # N/B: This value is managed by Renovate, manual changes are
 # possible, as long as they don't disturb the formatting
 # (i.e. DO NOT ADD A 'v' prefix!)
-GOLANGCI_LINT_VERSION := 2.0.2
+GOLANGCI_LINT_VERSION := 2.2.1
 
 default all: local-binary docs local-validate local-cross ## validate all checks, build and cross-build\nbinaries and docs
 
vendor/github.com/containers/storage/VERSION (generated, vendored, 2 changes)

@@ -1 +1 @@
-1.58.0
+1.59.0
vendor/github.com/containers/storage/deprecated.go (generated, vendored, 1 change)

@@ -207,7 +207,6 @@ type LayerStore interface {
 Mounted(id string) (int, error)
 ParentOwners(id string) (uids, gids []int, err error)
 ApplyDiff(to string, diff io.Reader) (int64, error)
-ApplyDiffWithDiffer(to string, options *drivers.ApplyDiffOpts, differ drivers.Differ) (*drivers.DriverWithDifferOutput, error)
 DifferTarget(id string) (string, error)
 LoadLocked() error
 PutAdditionalLayer(id string, parentLayer *Layer, names []string, aLayer drivers.AdditionalLayer) (layer *Layer, err error)
vendor/github.com/containers/storage/drivers/aufs/aufs.go (generated, vendored, 16 changes)

@@ -36,6 +36,7 @@ import (
 "time"
 
 graphdriver "github.com/containers/storage/drivers"
+"github.com/containers/storage/internal/tempdir"
 "github.com/containers/storage/pkg/archive"
 "github.com/containers/storage/pkg/chrootarchive"
 "github.com/containers/storage/pkg/directory"

@@ -772,8 +773,8 @@ func (a *Driver) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMapp
 return fmt.Errorf("aufs doesn't support changing ID mappings")
 }
 
-// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs in an userNS
-func (a *Driver) SupportsShifting() bool {
+// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs to the provided mapping in an userNS
+func (a *Driver) SupportsShifting(uidmap, gidmap []idtools.IDMap) bool {
 return false
 }
 

@@ -781,3 +782,14 @@ func (a *Driver) SupportsShifting() bool {
 func (a *Driver) Dedup(req graphdriver.DedupArgs) (graphdriver.DedupResult, error) {
 return graphdriver.DedupResult{}, nil
 }
+
+// DeferredRemove is not implemented.
+// It calls Remove directly.
+func (a *Driver) DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error) {
+return nil, a.Remove(id)
+}
+
+// GetTempDirRootDirs is not implemented.
+func (a *Driver) GetTempDirRootDirs() []string {
+return []string{}
+}
vendor/github.com/containers/storage/drivers/btrfs/btrfs.go (generated, vendored, 12 changes)

@@ -30,6 +30,7 @@ import (
 "unsafe"
 
 graphdriver "github.com/containers/storage/drivers"
+"github.com/containers/storage/internal/tempdir"
 "github.com/containers/storage/pkg/directory"
 "github.com/containers/storage/pkg/fileutils"
 "github.com/containers/storage/pkg/idtools"

@@ -678,3 +679,14 @@ func (d *Driver) AdditionalImageStores() []string {
 func (d *Driver) Dedup(req graphdriver.DedupArgs) (graphdriver.DedupResult, error) {
 return graphdriver.DedupResult{}, nil
 }
+
+// DeferredRemove is not implemented.
+// It calls Remove directly.
+func (d *Driver) DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error) {
+return nil, d.Remove(id)
+}
+
+// GetTempDirRootDirs is not implemented.
+func (d *Driver) GetTempDirRootDirs() []string {
+return []string{}
+}
vendor/github.com/containers/storage/drivers/chown.go (generated, vendored, 4 changes)

@@ -131,7 +131,7 @@ func (n *naiveLayerIDMapUpdater) UpdateLayerIDMap(id string, toContainer, toHost
 return ChownPathByMaps(layerFs, toContainer, toHost)
 }
 
-// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs in an userNS
-func (n *naiveLayerIDMapUpdater) SupportsShifting() bool {
+// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs to the provided mapping in an userNS
+func (n *naiveLayerIDMapUpdater) SupportsShifting(uidmap, gidmap []idtools.IDMap) bool {
 return false
 }
vendor/github.com/containers/storage/drivers/driver.go (generated, vendored, 23 changes)

@@ -9,6 +9,7 @@ import (
 "strings"
 
 "github.com/containers/storage/internal/dedup"
+"github.com/containers/storage/internal/tempdir"
 "github.com/containers/storage/pkg/archive"
 "github.com/containers/storage/pkg/directory"
 "github.com/containers/storage/pkg/fileutils"

@@ -123,7 +124,17 @@ type ProtoDriver interface {
 // and parent, with contents identical to the specified template layer.
 CreateFromTemplate(id, template string, templateIDMappings *idtools.IDMappings, parent string, parentIDMappings *idtools.IDMappings, opts *CreateOpts, readWrite bool) error
 // Remove attempts to remove the filesystem layer with this id.
+// This is soft-deprecated and should not get any new callers; use DeferredRemove.
 Remove(id string) error
+// DeferredRemove is used to remove the filesystem layer with this id.
+// This removal happen immediately (the layer is no longer usable),
+// but physically deleting the files may be deferred.
+// Caller MUST call returned Cleanup function EVEN IF the function returns an error.
+DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error)
+// GetTempDirRootDirs returns the root directories for temporary directories.
+// Multiple directories may be returned when drivers support different filesystems
+// for layers (e.g., overlay with imageStore vs home directory).
+GetTempDirRootDirs() []string
 // Get returns the mountpoint for the layered filesystem referred
 // to by this id. You can optionally specify a mountLabel or "".
 // Optionally it gets the mappings used to create the layer.

@@ -193,8 +204,9 @@ type LayerIDMapUpdater interface {
 UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMappings, mountLabel string) error
 
 // SupportsShifting tells whether the driver support shifting of the UIDs/GIDs in a
-// image and it is not required to Chown the files when running in an user namespace.
-SupportsShifting() bool
+// image to the provided mapping and it is not required to Chown the files when running in
+// an user namespace.
+SupportsShifting(uidmap, gidmap []idtools.IDMap) bool
 }
 
 // Driver is the interface for layered/snapshot file system drivers.

@@ -216,8 +228,10 @@ type DriverWithDifferOutput struct {
 CompressedDigest digest.Digest
 Metadata string
 BigData map[string][]byte
-TarSplit []byte // nil if not available
-TOCDigest digest.Digest
+// TarSplit is owned by the [DriverWithDifferOutput], and must be closed by calling one of
+// [Store.ApplyStagedLayer]/[Store.CleanupStagedLayer]. It is nil if not available.
+TarSplit *os.File
+TOCDigest digest.Digest
 // RootDirMode is the mode of the root directory of the layer, if specified.
 RootDirMode *os.FileMode
 // Artifacts is a collection of additional artifacts

@@ -267,6 +281,7 @@ type DifferOptions struct {
 // This API is experimental and can be changed without bumping the major version number.
 type Differ interface {
 ApplyDiff(dest string, options *archive.TarOptions, differOpts *DifferOptions) (DriverWithDifferOutput, error)
+Close() error
 }
 
 // DriverWithDiffer is the interface for direct diff access.
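The `ProtoDriver` hunk above documents an unusual contract: the cleanup function returned by `DeferredRemove` must be called even when the method itself returns an error. A minimal sketch of that calling pattern, assuming a hypothetical `fakeDeferredRemove` stand-in for a real driver (only the `CleanupTempDirFunc` shape mirrors the vendored type):

```go
package main

import (
	"errors"
	"fmt"
)

// CleanupTempDirFunc mirrors the shape of tempdir.CleanupTempDirFunc used by
// the DeferredRemove interface method in the diff above.
type CleanupTempDirFunc func() error

// fakeDeferredRemove stands in for a driver's DeferredRemove: the layer becomes
// unusable immediately, while physical deletion is deferred to the cleanup func.
func fakeDeferredRemove(id string) (CleanupTempDirFunc, error) {
	cleanup := func() error {
		fmt.Printf("physically deleting staged files for %q\n", id)
		return nil
	}
	if id == "" {
		// A cleanup func may be returned even on error, and it MUST still be called.
		return cleanup, errors.New("empty layer id")
	}
	return cleanup, nil
}

// removeLayer is a hypothetical caller that follows the documented contract:
// run cleanup EVEN IF DeferredRemove errored, joining any cleanup failure in.
func removeLayer(id string) (err error) {
	cleanup, err := fakeDeferredRemove(id)
	if cleanup != nil {
		defer func() {
			if cerr := cleanup(); cerr != nil {
				err = errors.Join(err, cerr)
			}
		}()
	}
	return err
}

func main() {
	fmt.Println("remove ok:", removeLayer("layer-123") == nil)
}
```

The deferred call plus `errors.Join` is one idiomatic way to satisfy "call Cleanup even on error" without losing either failure.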
vendor/github.com/containers/storage/drivers/overlay/overlay.go (generated, vendored, 129 changes)

@@ -23,6 +23,8 @@ import (
 "github.com/containers/storage/drivers/overlayutils"
 "github.com/containers/storage/drivers/quota"
 "github.com/containers/storage/internal/dedup"
+"github.com/containers/storage/internal/staging_lockfile"
+"github.com/containers/storage/internal/tempdir"
 "github.com/containers/storage/pkg/archive"
 "github.com/containers/storage/pkg/chrootarchive"
 "github.com/containers/storage/pkg/directory"

@@ -30,7 +32,6 @@ import (
 "github.com/containers/storage/pkg/fsutils"
 "github.com/containers/storage/pkg/idmap"
 "github.com/containers/storage/pkg/idtools"
-"github.com/containers/storage/pkg/lockfile"
 "github.com/containers/storage/pkg/mount"
 "github.com/containers/storage/pkg/parsers"
 "github.com/containers/storage/pkg/system"

@@ -80,10 +81,11 @@ const (
 // that mounts do not fail due to length.
 
 const (
-linkDir = "l"
-stagingDir = "staging"
-lowerFile = "lower"
-maxDepth = 500
+linkDir = "l"
+stagingDir = "staging"
+tempDirName = "tempdirs"
+lowerFile = "lower"
+maxDepth = 500
 
 stagingLockFile = "staging.lock"
 

@@ -133,7 +135,7 @@ type Driver struct {
 stagingDirsLocksMutex sync.Mutex
 // stagingDirsLocks access is not thread safe, it is required that callers take
 // stagingDirsLocksMutex on each access to guard against concurrent map writes.
-stagingDirsLocks map[string]*lockfile.LockFile
+stagingDirsLocks map[string]*staging_lockfile.StagingLockFile
 
 supportsIDMappedMounts *bool
 }

@@ -222,7 +224,7 @@ func checkAndRecordIDMappedSupport(home, runhome string) (bool, error) {
 return supportsIDMappedMounts, err
 }
 
-func checkAndRecordOverlaySupport(fsMagic graphdriver.FsMagic, home, runhome string) (bool, error) {
+func checkAndRecordOverlaySupport(home, runhome string) (bool, error) {
 var supportsDType bool
 
 if os.Geteuid() != 0 {

@@ -242,7 +244,7 @@ func checkAndRecordOverlaySupport(fsMagic graphdriver.FsMagic, home, runhome str
 return false, errors.New(overlayCacheText)
 }
 } else {
-supportsDType, err = supportsOverlay(home, fsMagic, 0, 0)
+supportsDType, err = supportsOverlay(home, 0, 0)
 if err != nil {
 os.Remove(filepath.Join(home, linkDir))
 os.Remove(home)

@@ -388,7 +390,7 @@ func Init(home string, options graphdriver.Options) (graphdriver.Driver, error)
 t := true
 supportsVolatile = &t
 } else {
-supportsDType, err = checkAndRecordOverlaySupport(fsMagic, home, runhome)
+supportsDType, err = checkAndRecordOverlaySupport(home, runhome)
 if err != nil {
 return nil, err
 }

@@ -442,7 +444,7 @@ func Init(home string, options graphdriver.Options) (graphdriver.Driver, error)
 usingComposefs: opts.useComposefs,
 options: *opts,
 stagingDirsLocksMutex: sync.Mutex{},
-stagingDirsLocks: make(map[string]*lockfile.LockFile),
+stagingDirsLocks: make(map[string]*staging_lockfile.StagingLockFile),
 }
 
 d.naiveDiff = graphdriver.NewNaiveDiffDriver(d, graphdriver.NewNaiveLayerIDMapUpdater(d))

@@ -666,16 +668,11 @@ func SupportsNativeOverlay(home, runhome string) (bool, error) {
 }
 }
 
-fsMagic, err := graphdriver.GetFSMagic(home)
-if err != nil {
-return false, err
-}
-
-supportsDType, _ := checkAndRecordOverlaySupport(fsMagic, home, runhome)
+supportsDType, _ := checkAndRecordOverlaySupport(home, runhome)
 return supportsDType, nil
 }
 
-func supportsOverlay(home string, homeMagic graphdriver.FsMagic, rootUID, rootGID int) (supportsDType bool, err error) {
+func supportsOverlay(home string, rootUID, rootGID int) (supportsDType bool, err error) {
 selinuxLabelTest := selinux.PrivContainerMountLabel()
 
 logLevel := logrus.ErrorLevel

@@ -828,7 +825,7 @@ func (d *Driver) Status() [][2]string {
 {"Supports d_type", strconv.FormatBool(d.supportsDType)},
 {"Native Overlay Diff", strconv.FormatBool(!d.useNaiveDiff())},
 {"Using metacopy", strconv.FormatBool(d.usingMetacopy)},
-{"Supports shifting", strconv.FormatBool(d.SupportsShifting())},
+{"Supports shifting", strconv.FormatBool(d.SupportsShifting(nil, nil))},
 {"Supports volatile", strconv.FormatBool(supportsVolatile)},
 }
 }

@@ -874,7 +871,9 @@ func (d *Driver) Cleanup() error {
 func (d *Driver) pruneStagingDirectories() bool {
 d.stagingDirsLocksMutex.Lock()
 for _, lock := range d.stagingDirsLocks {
-lock.Unlock()
+if err := lock.UnlockAndDelete(); err != nil {
+logrus.Warnf("Failed to unlock and delete staging lock file: %v", err)
+}
 }
 clear(d.stagingDirsLocks)
 d.stagingDirsLocksMutex.Unlock()

@@ -886,17 +885,15 @@ func (d *Driver) pruneStagingDirectories() bool {
 if err == nil {
 for _, dir := range dirs {
 stagingDirToRemove := filepath.Join(stagingDirBase, dir.Name())
-lock, err := lockfile.GetLockFile(filepath.Join(stagingDirToRemove, stagingLockFile))
+lock, err := staging_lockfile.TryLockPath(filepath.Join(stagingDirToRemove, stagingLockFile))
 if err != nil {
 anyPresent = true
 continue
 }
-if err := lock.TryLock(); err != nil {
-anyPresent = true
-continue
-}
 _ = os.RemoveAll(stagingDirToRemove)
-lock.Unlock()
+if err := lock.UnlockAndDelete(); err != nil {
+logrus.Warnf("Failed to unlock and delete staging lock file: %v", err)
+}
 }
 }
 return anyPresent

@@ -1310,17 +1307,22 @@ func (d *Driver) optsAppendMappings(opts string, uidMaps, gidMaps []idtools.IDMa
 
 // Remove cleans the directories that are created for this id.
 func (d *Driver) Remove(id string) error {
+return d.removeCommon(id, system.EnsureRemoveAll)
+}
+
+func (d *Driver) removeCommon(id string, cleanup func(string) error) error {
 dir := d.dir(id)
 lid, err := os.ReadFile(path.Join(dir, "link"))
 if err == nil {
-if err := os.RemoveAll(path.Join(d.home, linkDir, string(lid))); err != nil {
+linkPath := path.Join(d.home, linkDir, string(lid))
+if err := cleanup(linkPath); err != nil {
 logrus.Debugf("Failed to remove link: %v", err)
 }
 }
 
 d.releaseAdditionalLayerByID(id)
 
-if err := system.EnsureRemoveAll(dir); err != nil && !os.IsNotExist(err) {
+if err := cleanup(dir); err != nil && !os.IsNotExist(err) {
 return err
 }
 if d.quotaCtl != nil {

@@ -1332,6 +1334,41 @@ func (d *Driver) Remove(id string) error {
 return nil
 }
 
+func (d *Driver) GetTempDirRootDirs() []string {
+tempDirs := []string{filepath.Join(d.home, tempDirName)}
+// Include imageStore temp directory if it's configured
+// Writable layers can only be in d.home or d.imageStore, not in additional image stores
+if d.imageStore != "" {
+tempDirs = append(tempDirs, filepath.Join(d.imageStore, d.name, tempDirName))
+}
+return tempDirs
+}
+
+// Determine the correct temp directory root based on where the layer actually exists.
+func (d *Driver) getTempDirRoot(id string) string {
+layerDir := d.dir(id)
+if d.imageStore != "" {
+expectedLayerDir := path.Join(d.imageStore, d.name, id)
+if layerDir == expectedLayerDir {
+return filepath.Join(d.imageStore, d.name, tempDirName)
+}
+}
+return filepath.Join(d.home, tempDirName)
+}
+
+func (d *Driver) DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error) {
+tempDirRoot := d.getTempDirRoot(id)
+t, err := tempdir.NewTempDir(tempDirRoot)
+if err != nil {
+return nil, err
+}
+
+if err := d.removeCommon(id, t.StageDeletion); err != nil {
+return t.Cleanup, fmt.Errorf("failed to add to stage directory: %w", err)
+}
+return t.Cleanup, nil
+}
+
 // recreateSymlinks goes through the driver's home directory and checks if the diff directory
 // under each layer has a symlink created for it under the linkDir. If the symlink does not
 // exist, it creates them

@@ -1358,8 +1395,8 @@ func (d *Driver) recreateSymlinks() error {
 // Check that for each layer, there's a link in "l" with the name in
 // the layer's "link" file that points to the layer's "diff" directory.
 for _, dir := range dirs {
-// Skip over the linkDir and anything that is not a directory
-if dir.Name() == linkDir || !dir.IsDir() {
+// Skip over the linkDir, stagingDir, tempDirName and anything that is not a directory
+if dir.Name() == linkDir || dir.Name() == stagingDir || dir.Name() == tempDirName || !dir.IsDir() {
 continue
 }
 // Read the "link" file under each layer to get the name of the symlink

@@ -1483,7 +1520,7 @@ func (d *Driver) get(id string, disableShifting bool, options graphdriver.MountO
 
 readWrite := !inAdditionalStore
 
-if !d.SupportsShifting() || options.DisableShifting {
+if !d.SupportsShifting(options.UidMaps, options.GidMaps) || options.DisableShifting {
 disableShifting = true
 }
 

@@ -2027,7 +2064,7 @@ func (d *Driver) ListLayers() ([]string, error) {
 for _, entry := range entries {
 id := entry.Name()
 switch id {
-case linkDir, stagingDir, quota.BackingFsBlockDeviceLink, mountProgramFlagFile:
+case linkDir, stagingDir, tempDirName, quota.BackingFsBlockDeviceLink, mountProgramFlagFile:
 // expected, but not a layer. skip it
 continue
 default:

@@ -2178,7 +2215,10 @@ func (d *Driver) CleanupStagingDirectory(stagingDirectory string) error {
 d.stagingDirsLocksMutex.Lock()
 if lock, ok := d.stagingDirsLocks[parentStagingDir]; ok {
 delete(d.stagingDirsLocks, parentStagingDir)
-lock.Unlock()
+if err := lock.UnlockAndDelete(); err != nil {
+d.stagingDirsLocksMutex.Unlock()
+return err
+}
 }
 d.stagingDirsLocksMutex.Unlock()
 

@@ -2233,7 +2273,7 @@ func (d *Driver) ApplyDiffWithDiffer(options *graphdriver.ApplyDiffWithDifferOpt
 return graphdriver.DriverWithDifferOutput{}, err
 }
 
-lock, err := lockfile.GetLockFile(filepath.Join(layerDir, stagingLockFile))
+lock, err := staging_lockfile.TryLockPath(filepath.Join(layerDir, stagingLockFile))
 if err != nil {
 return graphdriver.DriverWithDifferOutput{}, err
 }

@@ -2242,13 +2282,14 @@ func (d *Driver) ApplyDiffWithDiffer(options *graphdriver.ApplyDiffWithDifferOpt
 d.stagingDirsLocksMutex.Lock()
 delete(d.stagingDirsLocks, layerDir)
 d.stagingDirsLocksMutex.Unlock()
-lock.Unlock()
+if err := lock.UnlockAndDelete(); err != nil {
+errRet = errors.Join(errRet, err)
+}
 }
 }()
 d.stagingDirsLocksMutex.Lock()
 d.stagingDirsLocks[layerDir] = lock
 d.stagingDirsLocksMutex.Unlock()
-lock.Lock()
 
 logrus.Debugf("Applying differ in %s", applyDir)
 

@@ -2274,7 +2315,7 @@ func (d *Driver) ApplyDiffWithDiffer(options *graphdriver.ApplyDiffWithDifferOpt
 }
 
 // ApplyDiffFromStagingDirectory applies the changes using the specified staging directory.
-func (d *Driver) ApplyDiffFromStagingDirectory(id, parent string, diffOutput *graphdriver.DriverWithDifferOutput, options *graphdriver.ApplyDiffWithDifferOpts) error {
+func (d *Driver) ApplyDiffFromStagingDirectory(id, parent string, diffOutput *graphdriver.DriverWithDifferOutput, options *graphdriver.ApplyDiffWithDifferOpts) (errRet error) {
 stagingDirectory := diffOutput.Target
 parentStagingDir := filepath.Dir(stagingDirectory)
 

@@ -2282,7 +2323,9 @@ func (d *Driver) ApplyDiffFromStagingDirectory(id, parent string, diffOutput *gr
 d.stagingDirsLocksMutex.Lock()
 if lock, ok := d.stagingDirsLocks[parentStagingDir]; ok {
 delete(d.stagingDirsLocks, parentStagingDir)
-lock.Unlock()
+if err := lock.UnlockAndDelete(); err != nil {
+errRet = errors.Join(errRet, err)
+}
 }
 d.stagingDirsLocksMutex.Unlock()
 }()

@@ -2553,12 +2596,20 @@ func (d *Driver) supportsIDmappedMounts() bool {
 return false
 }
 
-// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs in an userNS
-func (d *Driver) SupportsShifting() bool {
+// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs to the provided mapping in an userNS
+func (d *Driver) SupportsShifting(uidmap, gidmap []idtools.IDMap) bool {
 if os.Getenv("_CONTAINERS_OVERLAY_DISABLE_IDMAP") == "yes" {
 return false
 }
 if d.options.mountProgram != "" {
+// fuse-overlayfs supports only contiguous mappings, since it performs the mapping on the
+// upper layer too, to avoid https://github.com/containers/podman/issues/10272
+if !idtools.IsContiguous(uidmap) {
+return false
+}
+if !idtools.IsContiguous(gidmap) {
+return false
+}
 return true
 }
 return d.supportsIDmappedMounts()
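The last overlay hunk above gates fuse-overlayfs on contiguous ID mappings via `idtools.IsContiguous`. The sketch below illustrates what contiguity means for such a mapping; the `idMap` struct and `isContiguous` helper are simplified stand-ins for illustration, not the actual `idtools` implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// idMap is a simplified stand-in for idtools.IDMap: a run of Size IDs starting
// at ContainerID inside the container, mapped to HostID on the host.
type idMap struct {
	ContainerID int
	HostID      int
	Size        int
}

// isContiguous reports whether the mapped container-ID ranges form one gapless
// run. A hole in the mapping is what the overlay change above rejects when a
// mount program (fuse-overlayfs) is in use.
func isContiguous(m []idMap) bool {
	if len(m) == 0 {
		return true
	}
	sorted := append([]idMap(nil), m...) // copy so the caller's slice is untouched
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].ContainerID < sorted[j].ContainerID })
	for i := 1; i < len(sorted); i++ {
		if sorted[i].ContainerID != sorted[i-1].ContainerID+sorted[i-1].Size {
			return false // gap between consecutive mapped ranges
		}
	}
	return true
}

func main() {
	// Container IDs 0 then 1..65536: gapless, so shifting could be supported.
	ok := []idMap{{0, 1000, 1}, {1, 100000, 65536}}
	// Container IDs jump from 1 to 5: a hole, so shifting would be refused.
	holey := []idMap{{0, 1000, 1}, {5, 100000, 100}}
	fmt.Println(isContiguous(ok), isContiguous(holey))
}
```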
vendor/github.com/containers/storage/drivers/vfs/driver.go (generated, vendored, 48 changes)

@@ -11,6 +11,7 @@ import (
 
 graphdriver "github.com/containers/storage/drivers"
 "github.com/containers/storage/internal/dedup"
+"github.com/containers/storage/internal/tempdir"
 "github.com/containers/storage/pkg/archive"
 "github.com/containers/storage/pkg/directory"
 "github.com/containers/storage/pkg/fileutils"

@@ -22,7 +23,10 @@ import (
 "github.com/vbatts/tar-split/tar/storage"
 )
 
-const defaultPerms = os.FileMode(0o555)
+const (
+defaultPerms = os.FileMode(0o555)
+tempDirName = "tempdirs"
+)
 
 func init() {
 graphdriver.MustRegister("vfs", Init)

@@ -244,6 +248,42 @@ func (d *Driver) Remove(id string) error {
 return system.EnsureRemoveAll(d.dir(id))
 }
 
+func (d *Driver) GetTempDirRootDirs() []string {
+tempDirs := []string{filepath.Join(d.home, tempDirName)}
+// Include imageStore temp directory if it's configured
+// Writable layers can only be in d.home or d.imageStore, not in additionalHomes (which are read-only)
+if d.imageStore != "" {
+tempDirs = append(tempDirs, filepath.Join(d.imageStore, d.String(), tempDirName))
+}
+return tempDirs
+}
+
+// Determine the correct temp directory root based on where the layer actually exists.
+func (d *Driver) getTempDirRoot(id string) string {
+layerDir := d.dir(id)
+if d.imageStore != "" {
+expectedLayerDir := filepath.Join(d.imageStore, d.String(), "dir", filepath.Base(id))
+if layerDir == expectedLayerDir {
+return filepath.Join(d.imageStore, d.String(), tempDirName)
+}
+}
+return filepath.Join(d.home, tempDirName)
+}
+
+func (d *Driver) DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error) {
+tempDirRoot := d.getTempDirRoot(id)
+t, err := tempdir.NewTempDir(tempDirRoot)
+if err != nil {
+return nil, err
+}
+
+layerDir := d.dir(id)
+if err := t.StageDeletion(layerDir); err != nil {
+return t.Cleanup, err
+}
+return t.Cleanup, nil
+}
+
 // Get returns the directory for the given id.
 func (d *Driver) Get(id string, options graphdriver.MountOpts) (_ string, retErr error) {
 dir := d.dir(id)

@@ -312,9 +352,9 @@ func (d *Driver) AdditionalImageStores() []string {
 return nil
 }
 
-// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs in an userNS
-func (d *Driver) SupportsShifting() bool {
-return d.updater.SupportsShifting()
+// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs to the provided mapping in an userNS
+func (d *Driver) SupportsShifting(uidmap, gidmap []idtools.IDMap) bool {
+return d.updater.SupportsShifting(uidmap, gidmap)
 }
 
 // UpdateLayerIDMap updates ID mappings in a from matching the ones specified
vendor/github.com/containers/storage/drivers/windows/windows.go (generated, vendored, 16 changes)

@@ -24,6 +24,7 @@ import (
 "github.com/Microsoft/go-winio/backuptar"
 "github.com/Microsoft/hcsshim"
 graphdriver "github.com/containers/storage/drivers"
+"github.com/containers/storage/internal/tempdir"
 "github.com/containers/storage/pkg/archive"
 "github.com/containers/storage/pkg/directory"
 "github.com/containers/storage/pkg/fileutils"

@@ -986,8 +987,8 @@ func (d *Driver) UpdateLayerIDMap(id string, toContainer, toHost *idtools.IDMapp
 return fmt.Errorf("windows doesn't support changing ID mappings")
 }
 
-// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs in an userNS
-func (d *Driver) SupportsShifting() bool {
+// SupportsShifting tells whether the driver support shifting of the UIDs/GIDs to the provided mapping in an userNS
+func (d *Driver) SupportsShifting(uidmap, gidmap []idtools.IDMap) bool {
 return false
 }
 

@@ -1014,3 +1015,14 @@ func parseStorageOpt(storageOpt map[string]string) (*storageOptions, error) {
 }
 return &options, nil
 }
+
+// DeferredRemove is not implemented.
+// It calls Remove directly.
+func (d *Driver) DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error) {
+return nil, d.Remove(id)
+}
+
+// GetTempDirRootDirs is not implemented.
+func (d *Driver) GetTempDirRootDirs() []string {
+return []string{}
+}
vendor/github.com/containers/storage/drivers/zfs/zfs.go (12 changes; generated, vendored)
@@ -13,6 +13,7 @@ import (
 	"time"
 
 	graphdriver "github.com/containers/storage/drivers"
+	"github.com/containers/storage/internal/tempdir"
 	"github.com/containers/storage/pkg/directory"
 	"github.com/containers/storage/pkg/idtools"
 	"github.com/containers/storage/pkg/mount"
@@ -406,6 +407,12 @@ func (d *Driver) Remove(id string) error {
 	return nil
 }
 
+// DeferredRemove is not implemented.
+// It calls Remove directly.
+func (d *Driver) DeferredRemove(id string) (tempdir.CleanupTempDirFunc, error) {
+	return nil, d.Remove(id)
+}
+
 // Get returns the mountpoint for the given id after creating the target directories if necessary.
 func (d *Driver) Get(id string, options graphdriver.MountOpts) (_ string, retErr error) {
 	mountpoint := d.mountPath(id)
@@ -516,3 +523,8 @@ func (d *Driver) AdditionalImageStores() []string {
 func (d *Driver) Dedup(req graphdriver.DedupArgs) (graphdriver.DedupResult, error) {
 	return graphdriver.DedupResult{}, nil
 }
+
+// GetTempDirRootDirs is not implemented.
+func (d *Driver) GetTempDirRootDirs() []string {
+	return []string{}
+}
vendor/github.com/containers/storage/internal/rawfilelock/rawfilelock.go (64 changes; generated, vendored, new file)
@@ -0,0 +1,64 @@
package rawfilelock

import (
	"os"
)

type LockType byte

const (
	ReadLock LockType = iota
	WriteLock
)

type FileHandle = fileHandle

// OpenLock opens a file for locking
// WARNING: This is the underlying file locking primitive of the OS;
// because closing FileHandle releases the lock, it is not suitable for use
// if there is any chance of two concurrent goroutines attempting to use the same lock.
// Most users should use the higher-level operations from internal/staging_lockfile or pkg/lockfile.
func OpenLock(path string, readOnly bool) (FileHandle, error) {
	flags := os.O_CREATE
	if readOnly {
		flags |= os.O_RDONLY
	} else {
		flags |= os.O_RDWR
	}

	fd, err := openHandle(path, flags)
	if err == nil {
		return fd, nil
	}

	return fd, &os.PathError{Op: "open", Path: path, Err: err}
}

// TryLockFile attempts to lock a file handle
func TryLockFile(fd FileHandle, lockType LockType) error {
	return lockHandle(fd, lockType, true)
}

// LockFile locks a file handle
func LockFile(fd FileHandle, lockType LockType) error {
	return lockHandle(fd, lockType, false)
}

// UnlockAndClose unlocks and closes a file handle
func UnlockAndCloseHandle(fd FileHandle) {
	unlockAndCloseHandle(fd)
}

// CloseHandle closes a file handle without unlocking
//
// WARNING: This is a last-resort function for error handling only!
// On Unix systems, closing a file descriptor automatically releases any locks,
// so "closing without unlocking" is impossible. This function will release
// the lock as a side effect of closing the file.
//
// This function should only be used in error paths where the lock state
// is already corrupted or when giving up on lock management entirely.
// Normal code should use UnlockAndCloseHandle instead.
func CloseHandle(fd FileHandle) {
	closeHandle(fd)
}
vendor/github.com/containers/storage/internal/rawfilelock/rawfilelock_unix.go (49 changes; generated, vendored, new file)
@@ -0,0 +1,49 @@
//go:build !windows

package rawfilelock

import (
	"time"

	"golang.org/x/sys/unix"
)

type fileHandle uintptr

func openHandle(path string, mode int) (fileHandle, error) {
	mode |= unix.O_CLOEXEC
	fd, err := unix.Open(path, mode, 0o644)
	return fileHandle(fd), err
}

func lockHandle(fd fileHandle, lType LockType, nonblocking bool) error {
	fType := unix.F_RDLCK
	if lType != ReadLock {
		fType = unix.F_WRLCK
	}
	lk := unix.Flock_t{
		Type:   int16(fType),
		Whence: int16(unix.SEEK_SET),
		Start:  0,
		Len:    0,
	}
	cmd := unix.F_SETLKW
	if nonblocking {
		cmd = unix.F_SETLK
	}
	for {
		err := unix.FcntlFlock(uintptr(fd), cmd, &lk)
		if err == nil || nonblocking {
			return err
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func unlockAndCloseHandle(fd fileHandle) {
	unix.Close(int(fd))
}

func closeHandle(fd fileHandle) {
	unix.Close(int(fd))
}
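The Unix implementation above is built on POSIX record locks (`fcntl` with `F_SETLK`/`F_SETLKW`). The same non-blocking write lock can be sketched with the standard library alone (using `syscall` instead of `golang.org/x/sys/unix`; `tryLock` is a hypothetical helper, not part of the vendored package):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"syscall"
)

// tryLock takes a non-blocking POSIX record lock over the whole file,
// mirroring lockHandle above: F_RDLCK or F_WRLCK via F_SETLK
// (F_SETLKW would block instead of failing immediately).
func tryLock(fd uintptr, exclusive bool) error {
	typ := int16(syscall.F_RDLCK)
	if exclusive {
		typ = int16(syscall.F_WRLCK)
	}
	lk := syscall.Flock_t{
		Type:   typ,
		Whence: int16(io.SeekStart),
		Start:  0,
		Len:    0, // 0 means "lock to end of file"
	}
	return syscall.FcntlFlock(fd, syscall.F_SETLK, &lk)
}

func main() {
	f, err := os.CreateTemp("", "lock-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	// Note: fcntl record locks do not conflict within a single process,
	// so a contention failure can only be observed from another process;
	// here we only show acquisition succeeding.
	fmt.Println(tryLock(f.Fd(), true) == nil)
}
```

This per-process semantics is exactly why `rawfilelock` warns that the higher-level `staging_lockfile` wrapper must guard against two goroutines sharing one lock.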
vendor/github.com/containers/storage/internal/rawfilelock/rawfilelock_windows.go (48 changes; generated, vendored, new file)
@@ -0,0 +1,48 @@
//go:build windows

package rawfilelock

import (
	"golang.org/x/sys/windows"
)

const (
	reserved = 0
	allBytes = ^uint32(0)
)

type fileHandle windows.Handle

func openHandle(path string, mode int) (fileHandle, error) {
	mode |= windows.O_CLOEXEC
	fd, err := windows.Open(path, mode, windows.S_IWRITE)
	return fileHandle(fd), err
}

func lockHandle(fd fileHandle, lType LockType, nonblocking bool) error {
	flags := 0
	if lType != ReadLock {
		flags = windows.LOCKFILE_EXCLUSIVE_LOCK
	}
	if nonblocking {
		flags |= windows.LOCKFILE_FAIL_IMMEDIATELY
	}
	ol := new(windows.Overlapped)
	if err := windows.LockFileEx(windows.Handle(fd), uint32(flags), reserved, allBytes, allBytes, ol); err != nil {
		if nonblocking {
			return err
		}
		panic(err)
	}
	return nil
}

func unlockAndCloseHandle(fd fileHandle) {
	ol := new(windows.Overlapped)
	windows.UnlockFileEx(windows.Handle(fd), reserved, allBytes, allBytes, ol)
	closeHandle(fd)
}

func closeHandle(fd fileHandle) {
	windows.Close(windows.Handle(fd))
}
vendor/github.com/containers/storage/internal/staging_lockfile/staging_lockfile.go (147 changes; generated, vendored, new file)
@@ -0,0 +1,147 @@
package staging_lockfile

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"

	"github.com/containers/storage/internal/rawfilelock"
)

// StagingLockFile represents a file lock used to coordinate access to staging areas.
// Typical usage is via CreateAndLock or TryLockPath, both of which return a StagingLockFile
// that must eventually be released with UnlockAndDelete. This ensures that access
// to the staging file is properly synchronized both within and across processes.
//
// WARNING: This struct MUST NOT be created manually. Use the provided helper functions instead.
type StagingLockFile struct {
	// Locking invariant: If stagingLockFileLock is not locked, a StagingLockFile for a particular
	// path exists if the current process currently owns the lock for that file, and it is recorded in stagingLockFiles.
	//
	// The following fields can only be accessed by the goroutine owning the lock.
	//
	// An empty string in the file field means that the lock has been released and the StagingLockFile is no longer valid.
	file string // Also the key in stagingLockFiles
	fd   rawfilelock.FileHandle
}

const maxRetries = 1000

var (
	stagingLockFiles    map[string]*StagingLockFile
	stagingLockFileLock sync.Mutex
)

// tryAcquireLockForFile attempts to acquire a lock for the specified file path.
func tryAcquireLockForFile(path string) (*StagingLockFile, error) {
	cleanPath, err := filepath.Abs(path)
	if err != nil {
		return nil, fmt.Errorf("ensuring that path %q is an absolute path: %w", path, err)
	}

	stagingLockFileLock.Lock()
	defer stagingLockFileLock.Unlock()

	if stagingLockFiles == nil {
		stagingLockFiles = make(map[string]*StagingLockFile)
	}

	if _, ok := stagingLockFiles[cleanPath]; ok {
		return nil, fmt.Errorf("lock %q is used already with other thread", cleanPath)
	}

	fd, err := rawfilelock.OpenLock(cleanPath, false)
	if err != nil {
		return nil, err
	}

	if err = rawfilelock.TryLockFile(fd, rawfilelock.WriteLock); err != nil {
		// Lock acquisition failed, but holding stagingLockFileLock ensures
		// no other goroutine in this process could have obtained a lock for this file,
		// so closing it is still safe.
		rawfilelock.CloseHandle(fd)
		return nil, fmt.Errorf("failed to acquire lock on %q: %w", cleanPath, err)
	}

	lockFile := &StagingLockFile{
		file: cleanPath,
		fd:   fd,
	}

	stagingLockFiles[cleanPath] = lockFile
	return lockFile, nil
}

// UnlockAndDelete releases the lock, removes the associated file from the filesystem.
//
// WARNING: After this operation, the StagingLockFile becomes invalid for further use.
func (l *StagingLockFile) UnlockAndDelete() error {
	stagingLockFileLock.Lock()
	defer stagingLockFileLock.Unlock()

	if l.file == "" {
		// Panic when unlocking an unlocked lock. That's a violation
		// of the lock semantics and will reveal such.
		panic("calling Unlock on unlocked lock")
	}

	defer func() {
		// It’s important that this happens while we are still holding stagingLockFileLock, to ensure
		// that no other goroutine has l.file open = that this close is not unlocking the lock under any
		// other goroutine. (defer ordering is LIFO, so this will happen before we release the stagingLockFileLock)
		rawfilelock.UnlockAndCloseHandle(l.fd)
		delete(stagingLockFiles, l.file)
		l.file = ""
	}()
	if err := os.Remove(l.file); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

// CreateAndLock creates a new temporary file in the specified directory with the given pattern,
// then creates and locks a StagingLockFile for it. The file is created using os.CreateTemp.
// Typically, the caller would use the returned lock file path to derive a path to the lock-controlled resource
// (e.g. by replacing the "pattern" part of the returned file name with a different prefix)
// Caller MUST call UnlockAndDelete() on the returned StagingLockFile to release the lock and delete the file.
//
// Returns:
//   - The locked StagingLockFile
//   - The name of created lock file
//   - Any error that occurred during the process
//
// If the file cannot be locked, this function will retry up to maxRetries times before failing.
func CreateAndLock(dir string, pattern string) (*StagingLockFile, string, error) {
	for try := 0; ; try++ {
		file, err := os.CreateTemp(dir, pattern)
		if err != nil {
			return nil, "", err
		}
		file.Close()

		path := file.Name()
		l, err := tryAcquireLockForFile(path)
		if err != nil {
			if try < maxRetries {
				continue // Retry if the lock cannot be acquired
			}
			return nil, "", fmt.Errorf(
				"failed to allocate lock in %q after %d attempts; last failure on %q: %w",
				dir, try, filepath.Base(path), err,
			)
		}

		return l, filepath.Base(path), nil
	}
}

// TryLockPath attempts to acquire a lock on an specific path. If the file does not exist,
// it will be created.
//
// Warning: If acquiring a lock is successful, it returns a new StagingLockFile
// instance for the file. Caller MUST call UnlockAndDelete() on the returned StagingLockFile
// to release the lock and delete the file.
func TryLockPath(path string) (*StagingLockFile, error) {
	return tryAcquireLockForFile(path)
}
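Because closing a file descriptor drops its fcntl lock, `staging_lockfile` layers a process-local registry over the OS lock: a mutex-guarded map records which paths this process already holds, so a second goroutine cannot silently share (and later release) the same lock. A minimal, OS-lock-free sketch of just that registry (all names here are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

// held plays the role of stagingLockFiles: paths currently locked by
// this process, keyed by absolute path.
var (
	mu   sync.Mutex
	held = map[string]struct{}{}
)

// tryLockPath registers the path and hands back a release function,
// mirroring TryLockPath/UnlockAndDelete (minus the actual fcntl lock,
// which the real package adds via rawfilelock).
func tryLockPath(path string) (release func(), err error) {
	abs, err := filepath.Abs(path)
	if err != nil {
		return nil, err
	}
	mu.Lock()
	defer mu.Unlock()
	if _, ok := held[abs]; ok {
		return nil, fmt.Errorf("lock %q is used already with other thread", abs)
	}
	held[abs] = struct{}{}
	return func() {
		mu.Lock()
		defer mu.Unlock()
		delete(held, abs)
		os.Remove(abs) // UnlockAndDelete also removes the lock file
	}, nil
}

func main() {
	dir, _ := os.MkdirTemp("", "staging-")
	defer os.RemoveAll(dir)
	path := filepath.Join(dir, "lock-1")

	release, err := tryLockPath(path)
	fmt.Println(err == nil) // first acquisition succeeds
	_, err = tryLockPath(path)
	fmt.Println(err != nil) // second acquisition in the same process fails
	release()
	_, err = tryLockPath(path)
	fmt.Println(err == nil) // reacquire after release succeeds
}
```

The real package combines this in-process guard with a `rawfilelock` write lock, which is what makes the lock meaningful across processes as well.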
vendor/github.com/containers/storage/internal/tempdir/tempdir.go (243 changes; generated, vendored, new file)
@@ -0,0 +1,243 @@
package tempdir

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/containers/storage/internal/staging_lockfile"
	"github.com/sirupsen/logrus"
)

/*
Locking rules and invariants for TempDir and its recovery mechanism:

1. TempDir Instance Locks:
  - Path: 'RootDir/lock-XYZ' (in the root directory)
  - Each TempDir instance creates and holds an exclusive lock on this file immediately
    during NewTempDir() initialization.
  - This lock signifies that the temporary directory is in active use by the
    process/goroutine that holds the TempDir object.

2. Stale Directory Recovery (separate operation):
  - RecoverStaleDirs() can be called independently to identify and clean up stale
    temporary directories.
  - For each potential stale directory (found by listPotentialStaleDirs), it
    attempts to TryLockPath() its instance lock file.
  - If TryLockPath() succeeds: The directory is considered stale, and both the
    directory and lock file are removed.
  - If TryLockPath() fails: The directory is considered in active use by another
    process/goroutine, and it's skipped.

3. TempDir Usage:
  - NewTempDir() immediately creates both the instance lock and the temporary directory.
  - TempDir.StageDeletion() moves files into the existing temporary directory with counter-based naming.
  - Files moved into the temporary directory are renamed with a counter-based prefix
    to ensure uniqueness (e.g., "0-filename", "1-filename").
  - Once cleaned up, the TempDir instance cannot be reused - StageDeletion() will return an error.

4. Cleanup Process:
  - TempDir.Cleanup() removes both the temporary directory and its lock file.
  - The instance lock is unlocked and deleted after cleanup operations are complete.
  - The TempDir instance becomes inactive after cleanup (internal fields are reset).
  - The TempDir instance cannot be reused after Cleanup() - StageDeletion() will fail.

5. TempDir Lifetime:
  - NewTempDir() creates both the TempDir manager and the actual temporary directory immediately.
  - The temporary directory is created eagerly during NewTempDir().
  - During its lifetime, the temporary directory is protected by its instance lock.
  - The temporary directory exists until Cleanup() is called, which removes both
    the directory and its lock file.
  - Multiple TempDir instances can coexist in the same RootDir, each with its own
    unique subdirectory and lock.
  - After cleanup, the TempDir instance cannot be reused.

6. Example Directory Structure:

	RootDir/
	  lock-ABC (instance lock for temp-dir-ABC)
	  temp-dir-ABC/
	    0-file1
	    1-file3
	  lock-XYZ (instance lock for temp-dir-XYZ)
	  temp-dir-XYZ/
	    0-file2
*/
const (
	// tempDirPrefix is the prefix used for creating temporary directories.
	tempDirPrefix = "temp-dir-"
	// tempdirLockPrefix is the prefix used for creating lock files for temporary directories.
	tempdirLockPrefix = "lock-"
)

// TempDir represents a temporary directory that is created in a specified root directory.
// It manages the lifecycle of the temporary directory, including creation, locking, and cleanup.
// Each TempDir instance is associated with a unique subdirectory in the root directory.
// Warning: The TempDir instance should be used in a single goroutine.
type TempDir struct {
	RootDir string

	tempDirPath string
	// tempDirLock is a lock file (e.g., RootDir/lock-XYZ) specific to this
	// TempDir instance, indicating it's in active use.
	tempDirLock     *staging_lockfile.StagingLockFile
	tempDirLockPath string

	// counter is used to generate unique filenames for added files.
	counter uint64
}

// CleanupTempDirFunc is a function type that can be returned by operations
// which need to perform cleanup actions later.
type CleanupTempDirFunc func() error

// listPotentialStaleDirs scans the RootDir for directories that might be stale temporary directories.
// It identifies directories with the tempDirPrefix and their corresponding lock files with the tempdirLockPrefix.
// The function returns a map of IDs that correspond to both directories and lock files found.
// These IDs are extracted from the filenames by removing their respective prefixes.
func listPotentialStaleDirs(rootDir string) (map[string]struct{}, error) {
	ids := make(map[string]struct{})

	dirContent, err := os.ReadDir(rootDir)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("error reading temp dir %s: %w", rootDir, err)
	}

	for _, entry := range dirContent {
		if id, ok := strings.CutPrefix(entry.Name(), tempDirPrefix); ok {
			ids[id] = struct{}{}
			continue
		}

		if id, ok := strings.CutPrefix(entry.Name(), tempdirLockPrefix); ok {
			ids[id] = struct{}{}
		}
	}
	return ids, nil
}

// RecoverStaleDirs identifies and removes stale temporary directories in the root directory.
// A directory is considered stale if its lock file can be acquired (indicating no active use).
// The function attempts to remove both the directory and its lock file.
// If a directory's lock cannot be acquired, it is considered in use and is skipped.
func RecoverStaleDirs(rootDir string) error {
	potentialStaleDirs, err := listPotentialStaleDirs(rootDir)
	if err != nil {
		return fmt.Errorf("error listing potential stale temp dirs in %s: %w", rootDir, err)
	}

	if len(potentialStaleDirs) == 0 {
		return nil
	}

	var recoveryErrors []error

	for id := range potentialStaleDirs {
		lockPath := filepath.Join(rootDir, tempdirLockPrefix+id)
		tempDirPath := filepath.Join(rootDir, tempDirPrefix+id)

		// Try to lock the lock file. If it can be locked, the directory is stale.
		instanceLock, err := staging_lockfile.TryLockPath(lockPath)
		if err != nil {
			continue
		}

		if rmErr := os.RemoveAll(tempDirPath); rmErr != nil && !os.IsNotExist(rmErr) {
			recoveryErrors = append(recoveryErrors, fmt.Errorf("error removing stale temp dir %s: %w", tempDirPath, rmErr))
		}
		if unlockErr := instanceLock.UnlockAndDelete(); unlockErr != nil {
			recoveryErrors = append(recoveryErrors, fmt.Errorf("error unlocking and deleting stale lock file %s: %w", lockPath, unlockErr))
		}
	}

	return errors.Join(recoveryErrors...)
}

// NewTempDir creates a TempDir and immediately creates both the temporary directory
// and its corresponding lock file in the specified RootDir.
// The RootDir itself will be created if it doesn't exist.
// Note: The caller MUST ensure that returned TempDir instance is cleaned up with .Cleanup().
func NewTempDir(rootDir string) (*TempDir, error) {
	if err := os.MkdirAll(rootDir, 0o700); err != nil {
		return nil, fmt.Errorf("creating root temp directory %s failed: %w", rootDir, err)
	}

	td := &TempDir{
		RootDir: rootDir,
	}
	tempDirLock, tempDirLockFileName, err := staging_lockfile.CreateAndLock(td.RootDir, tempdirLockPrefix)
	if err != nil {
		return nil, fmt.Errorf("creating and locking temp dir instance lock in %s failed: %w", td.RootDir, err)
	}
	td.tempDirLock = tempDirLock
	td.tempDirLockPath = filepath.Join(td.RootDir, tempDirLockFileName)

	// Create the temporary directory that corresponds to the lock file
	id := strings.TrimPrefix(tempDirLockFileName, tempdirLockPrefix)
	actualTempDirPath := filepath.Join(td.RootDir, tempDirPrefix+id)
	if err := os.MkdirAll(actualTempDirPath, 0o700); err != nil {
		return nil, fmt.Errorf("creating temp directory %s failed: %w", actualTempDirPath, err)
	}
	td.tempDirPath = actualTempDirPath
	td.counter = 0
	return td, nil
}

// StageDeletion moves the specified file into the instance's temporary directory.
// The temporary directory must already exist (created during NewTempDir).
// Files are renamed with a counter-based prefix (e.g., "0-filename", "1-filename") to ensure uniqueness.
// Note: 'path' must be on the same filesystem as the TempDir for os.Rename to work.
// The caller MUST ensure .Cleanup() is called.
// If the TempDir has been cleaned up, this method will return an error.
func (td *TempDir) StageDeletion(path string) error {
	if td.tempDirLock == nil {
		return fmt.Errorf("temp dir instance not initialized or already cleaned up")
	}
	fileName := fmt.Sprintf("%d-", td.counter) + filepath.Base(path)
	destPath := filepath.Join(td.tempDirPath, fileName)
	td.counter++
	return os.Rename(path, destPath)
}

// Cleanup removes the temporary directory and releases its instance lock.
// After cleanup, the TempDir instance becomes inactive and cannot be reused.
// Subsequent calls to StageDeletion() will fail.
// Multiple calls to Cleanup() are safe and will not return an error.
// Callers should typically defer Cleanup() to run after any application-level
// global locks are released to avoid holding those locks during potentially
// slow disk I/O.
func (td *TempDir) Cleanup() error {
	if td.tempDirLock == nil {
		logrus.Debug("Temp dir already cleaned up")
		return nil
	}

	if err := os.RemoveAll(td.tempDirPath); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing temp dir %s failed: %w", td.tempDirPath, err)
	}

	lock := td.tempDirLock
	td.tempDirPath = ""
	td.tempDirLock = nil
	td.tempDirLockPath = ""
	return lock.UnlockAndDelete()
}

// CleanupTemporaryDirectories cleans up multiple temporary directories by calling their cleanup functions.
func CleanupTemporaryDirectories(cleanFuncs ...CleanupTempDirFunc) error {
	var cleanupErrors []error
	for _, cleanupFunc := range cleanFuncs {
		if cleanupFunc == nil {
			continue
		}
		if err := cleanupFunc(); err != nil {
			cleanupErrors = append(cleanupErrors, err)
		}
	}
	return errors.Join(cleanupErrors...)
}
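The counter-based naming used by `StageDeletion` exists because several staged files can share a base name (e.g. two layers' `data` files staged into the same directory). A stdlib-only sketch of just that renaming scheme (`stagingDir` is a hypothetical stand-in for the vendored `TempDir`):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// stagingDir mimics TempDir's staging behavior: files are renamed into
// a per-instance directory with a counter prefix ("0-name", "1-name",
// ...) so that two staged files with the same base name cannot collide.
type stagingDir struct {
	path    string
	counter uint64
}

func (s *stagingDir) stageDeletion(path string) error {
	dest := filepath.Join(s.path, fmt.Sprintf("%d-%s", s.counter, filepath.Base(path)))
	s.counter++
	return os.Rename(path, dest)
}

func main() {
	root, _ := os.MkdirTemp("", "root-")
	defer os.RemoveAll(root)
	s := &stagingDir{path: root}

	a := filepath.Join(root, "a", "data")
	b := filepath.Join(root, "b", "data")
	os.MkdirAll(filepath.Dir(a), 0o700)
	os.MkdirAll(filepath.Dir(b), 0o700)
	os.WriteFile(a, []byte("A"), 0o600)
	os.WriteFile(b, []byte("B"), 0o600)

	// Both files have base name "data"; the counter keeps them distinct.
	fmt.Println(s.stageDeletion(a) == nil)
	fmt.Println(s.stageDeletion(b) == nil)
	entries, _ := os.ReadDir(root)
	for _, e := range entries {
		if !e.IsDir() {
			fmt.Println(e.Name())
		}
	}
}
```

As the vendored comment notes, this only works when the staged path is on the same filesystem as the staging directory, since `os.Rename` cannot cross mount points.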
vendor/github.com/containers/storage/layers.go (106 changes; generated, vendored)
@ -18,6 +18,7 @@ import (
|
|||
"time"
|
||||
|
||||
drivers "github.com/containers/storage/drivers"
|
||||
"github.com/containers/storage/internal/tempdir"
|
||||
"github.com/containers/storage/pkg/archive"
|
||||
"github.com/containers/storage/pkg/idtools"
|
||||
"github.com/containers/storage/pkg/ioutils"
|
||||
|
|
@ -38,6 +39,8 @@ import (
|
|||
|
||||
const (
|
||||
tarSplitSuffix = ".tar-split.gz"
|
||||
// tempDirPath is the subdirectory name used for storing temporary directories during layer deletion
|
||||
tempDirPath = "tmp"
|
||||
incompleteFlag = "incomplete"
|
||||
// maxLayerStoreCleanupIterations is the number of times we try to clean up inconsistent layer store state
|
||||
// in readers (which, for implementation reasons, gives other writers the opportunity to create more inconsistent state)
|
||||
|
|
@ -290,8 +293,14 @@ type rwLayerStore interface {
|
|||
// updateNames modifies names associated with a layer based on (op, names).
|
||||
updateNames(id string, names []string, op updateNameOperation) error
|
||||
|
||||
// Delete deletes a layer with the specified name or ID.
|
||||
Delete(id string) error
|
||||
// deleteWhileHoldingLock deletes a layer with the specified name or ID.
|
||||
deleteWhileHoldingLock(id string) error
|
||||
|
||||
// deferredDelete deletes a layer with the specified name or ID.
|
||||
// This removal happen immediately (the layer is no longer usable),
|
||||
// but physically deleting the files may be deferred.
|
||||
// Caller MUST call all returned cleanup functions outside of the locks.
|
||||
deferredDelete(id string) ([]tempdir.CleanupTempDirFunc, error)
|
||||
|
||||
// Wipe deletes all layers.
|
||||
Wipe() error
|
||||
|
|
@ -794,6 +803,17 @@ func (r *layerStore) load(lockedForWriting bool) (bool, error) {
|
|||
layers := []*Layer{}
|
||||
ids := make(map[string]*Layer)
|
||||
|
||||
if r.lockfile.IsReadWrite() {
|
||||
if err := tempdir.RecoverStaleDirs(filepath.Join(r.layerdir, tempDirPath)); err != nil {
|
||||
return false, err
|
||||
}
|
||||
for _, driverTempDirPath := range r.driver.GetTempDirRootDirs() {
|
||||
if err := tempdir.RecoverStaleDirs(driverTempDirPath); err != nil {
|
||||
return false, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for locationIndex := range numLayerLocationIndex {
|
||||
location := layerLocationFromIndex(locationIndex)
|
||||
rpath := r.jsonPath[locationIndex]
|
||||
|
|
@ -935,7 +955,12 @@ func (r *layerStore) load(lockedForWriting bool) (bool, error) {
|
|||
// Now actually delete the layers
|
||||
for _, layer := range layersToDelete {
|
||||
logrus.Warnf("Found incomplete layer %q, deleting it", layer.ID)
|
||||
err := r.deleteInternal(layer.ID)
|
||||
cleanFunctions, err := r.internalDelete(layer.ID)
|
||||
defer func() {
|
||||
if err := tempdir.CleanupTemporaryDirectories(cleanFunctions...); err != nil {
|
||||
logrus.Errorf("Error cleaning up temporary directories: %v", err)
|
||||
}
|
||||
}()
|
||||
if err != nil {
|
||||
// Don't return the error immediately, because deleteInternal does not saveLayers();
|
||||
// Even if deleting one incomplete layer fails, call saveLayers() so that other possible successfully
|
||||
|
|
@ -1334,7 +1359,7 @@ func (r *layerStore) PutAdditionalLayer(id string, parentLayer *Layer, names []s
|
|||
r.bytocsum[layer.TOCDigest] = append(r.bytocsum[layer.TOCDigest], layer.ID)
|
||||
}
|
||||
if err := r.saveFor(layer); err != nil {
|
||||
if e := r.Delete(layer.ID); e != nil {
|
||||
if e := r.deleteWhileHoldingLock(layer.ID); e != nil {
|
||||
logrus.Errorf("While recovering from a failure to save layers, error deleting layer %#v: %v", id, e)
|
||||
}
|
||||
return nil, err
|
||||
|
|
@ -1469,7 +1494,7 @@ func (r *layerStore) create(id string, parentLayer *Layer, names []string, mount
|
|||
if cleanupFailureContext == "" {
|
||||
cleanupFailureContext = "unknown: cleanupFailureContext not set at the failure site"
|
||||
}
|
||||
if e := r.Delete(id); e != nil {
|
||||
if e := r.deleteWhileHoldingLock(id); e != nil {
|
||||
logrus.Errorf("While recovering from a failure (%s), error deleting layer %#v: %v", cleanupFailureContext, id, e)
|
||||
}
|
||||
}
|
||||
|
|
@ -1634,7 +1659,7 @@ func (r *layerStore) Mount(id string, options drivers.MountOpts) (string, error)
|
|||
options.MountLabel = layer.MountLabel
|
||||
}
|
||||
|
||||
if (options.UidMaps != nil || options.GidMaps != nil) && !r.driver.SupportsShifting() {
|
||||
if (options.UidMaps != nil || options.GidMaps != nil) && !r.driver.SupportsShifting(options.UidMaps, options.GidMaps) {
|
||||
if !reflect.DeepEqual(options.UidMaps, layer.UIDMap) || !reflect.DeepEqual(options.GidMaps, layer.GIDMap) {
|
||||
return "", fmt.Errorf("cannot mount layer %v: shifting not enabled", layer.ID)
|
||||
}
|
||||
|
|
@ -1920,13 +1945,15 @@ func layerHasIncompleteFlag(layer *Layer) bool {
|
|||
}
|
||||
|
||||
// Requires startWriting.
|
||||
func (r *layerStore) deleteInternal(id string) error {
|
||||
// Caller MUST run all returned cleanup functions after this, EVEN IF the function returns an error.
|
||||
// Ideally outside of the startWriting.
|
||||
func (r *layerStore) internalDelete(id string) ([]tempdir.CleanupTempDirFunc, error) {
|
||||
if !r.lockfile.IsReadWrite() {
|
||||
return fmt.Errorf("not allowed to delete layers at %q: %w", r.layerdir, ErrStoreIsReadOnly)
|
||||
return nil, fmt.Errorf("not allowed to delete layers at %q: %w", r.layerdir, ErrStoreIsReadOnly)
|
||||
}
|
||||
layer, ok := r.lookup(id)
|
||||
if !ok {
|
||||
return ErrLayerUnknown
|
||||
return nil, ErrLayerUnknown
|
||||
}
|
||||
// Ensure that if we are interrupted, the layer will be cleaned up.
|
||||
if !layerHasIncompleteFlag(layer) {
|
||||
|
|
@@ -1935,16 +1962,30 @@ func (r *layerStore) deleteInternal(id string) error {
 		}
 		layer.Flags[incompleteFlag] = true
 		if err := r.saveFor(layer); err != nil {
-			return err
+			return nil, err
 		}
 	}
 	// We never unset incompleteFlag; below, we remove the entire object from r.layers.
-	id = layer.ID
-	if err := r.driver.Remove(id); err != nil && !errors.Is(err, os.ErrNotExist) {
-		return err
+	tempDirectory, err := tempdir.NewTempDir(filepath.Join(r.layerdir, tempDirPath))
+	cleanFunctions := []tempdir.CleanupTempDirFunc{}
+	cleanFunctions = append(cleanFunctions, tempDirectory.Cleanup)
+	if err != nil {
+		return nil, err
 	}
+	id = layer.ID
+	cleanFunc, err := r.driver.DeferredRemove(id)
+	cleanFunctions = append(cleanFunctions, cleanFunc)
+	if err != nil && !errors.Is(err, os.ErrNotExist) {
+		return cleanFunctions, err
+	}
+
+	cleanFunctions = append(cleanFunctions, tempDirectory.Cleanup)
+	if err := tempDirectory.StageDeletion(r.tspath(id)); err != nil && !errors.Is(err, os.ErrNotExist) {
+		return cleanFunctions, err
+	}
+	if err := tempDirectory.StageDeletion(r.datadir(id)); err != nil && !errors.Is(err, os.ErrNotExist) {
+		return cleanFunctions, err
+	}
-	os.Remove(r.tspath(id))
-	os.RemoveAll(r.datadir(id))
 	delete(r.byid, id)
 	for _, name := range layer.Names {
 		delete(r.byname, name)
@@ -1968,7 +2009,7 @@ func (r *layerStore) deleteInternal(id string) error {
 	}) {
 		selinux.ReleaseLabel(mountLabel)
 	}
-	return nil
+	return cleanFunctions, nil
 }
 
 // Requires startWriting.
@@ -1988,10 +2029,20 @@ func (r *layerStore) deleteInDigestMap(id string) {
 }
 
 // Requires startWriting.
-func (r *layerStore) Delete(id string) error {
+// This is soft-deprecated and should not have any new callers; use deferredDelete instead.
+func (r *layerStore) deleteWhileHoldingLock(id string) error {
+	cleanupFunctions, deferErr := r.deferredDelete(id)
+	cleanupErr := tempdir.CleanupTemporaryDirectories(cleanupFunctions...)
+	return errors.Join(deferErr, cleanupErr)
+}
+
+// Requires startWriting.
+// Caller MUST run all returned cleanup functions after this, EVEN IF the function returns an error.
+// Ideally outside of the startWriting.
+func (r *layerStore) deferredDelete(id string) ([]tempdir.CleanupTempDirFunc, error) {
 	layer, ok := r.lookup(id)
 	if !ok {
-		return ErrLayerUnknown
+		return nil, ErrLayerUnknown
 	}
 	id = layer.ID
 	// The layer may already have been explicitly unmounted, but if not, we
@@ -2003,13 +2054,14 @@ func (r *layerStore) Delete(id string) error {
 			break
 		}
 		if err != nil {
-			return err
+			return nil, err
 		}
 	}
-	if err := r.deleteInternal(id); err != nil {
-		return err
+	cleanFunctions, err := r.internalDelete(id)
+	if err != nil {
+		return cleanFunctions, err
 	}
-	return r.saveFor(layer)
+	return cleanFunctions, r.saveFor(layer)
 }
 
 // Requires startReading or startWriting.
@@ -2039,7 +2091,7 @@ func (r *layerStore) Wipe() error {
 		return r.byid[ids[i]].Created.After(r.byid[ids[j]].Created)
 	})
 	for _, id := range ids {
-		if err := r.Delete(id); err != nil {
+		if err := r.deleteWhileHoldingLock(id); err != nil {
 			return err
 		}
 	}
@@ -2550,10 +2602,14 @@ func (r *layerStore) applyDiffFromStagingDirectory(id string, diffOutput *driver
 	if err != nil {
 		compressor = pgzip.NewWriter(&tsdata)
 	}
+	if _, err := diffOutput.TarSplit.Seek(0, io.SeekStart); err != nil {
+		return err
+	}
+
 	if err := compressor.SetConcurrency(1024*1024, 1); err != nil { // 1024*1024 is the hard-coded default; we're not changing that
 		logrus.Infof("setting compression concurrency threads to 1: %v; ignoring", err)
 	}
-	if _, err := compressor.Write(diffOutput.TarSplit); err != nil {
+	if _, err := diffOutput.TarSplit.WriteTo(compressor); err != nil {
 		compressor.Close()
 		return err
 	}
@@ -2567,7 +2623,7 @@ func (r *layerStore) applyDiffFromStagingDirectory(id string, diffOutput *driver
 	}
 	for k, v := range diffOutput.BigData {
 		if err := r.SetBigData(id, k, bytes.NewReader(v)); err != nil {
-			if err2 := r.Delete(id); err2 != nil {
+			if err2 := r.deleteWhileHoldingLock(id); err2 != nil {
 				logrus.Errorf("While recovering from a failure to set big data, error deleting layer %#v: %v", id, err2)
 			}
 			return err
414	vendor/github.com/containers/storage/pkg/archive/archive.go (generated, vendored)
@@ -528,11 +528,29 @@ func canonicalTarName(name string, isDir bool) (string, error) {
 	return name, nil
 }
 
-// addFile adds a file from `path` as `name` to the tar archive.
-func (ta *tarWriter) addFile(path, name string) error {
+type addFileData struct {
+	// The path from which to read contents.
+	path string
+
+	// os.Stat for the above.
+	fi os.FileInfo
+
+	// The file header of the above.
+	hdr *tar.Header
+
+	// if present, an extra whiteout entry to write after the header.
+	extraWhiteout *tar.Header
+}
+
+// prepareAddFile generates the tar file header(s) for adding a file
+// from path as name to the tar archive, without writing to the
+// tar stream. Thus, any error may be ignored without corrupting the
+// tar file. A (nil, nil) return means that the file should be
+// ignored for non-error reasons.
+func (ta *tarWriter) prepareAddFile(path, name string) (*addFileData, error) {
 	fi, err := os.Lstat(path)
 	if err != nil {
-		return err
+		return nil, err
 	}
 
 	var link string
@@ -540,26 +558,26 @@ func (ta *tarWriter) addFile(path, name string) error {
 		var err error
 		link, err = os.Readlink(path)
 		if err != nil {
-			return err
+			return nil, err
 		}
 	}
 	if fi.Mode()&os.ModeSocket != 0 {
 		logrus.Infof("archive: skipping %q since it is a socket", path)
-		return nil
+		return nil, nil
 	}
 
 	hdr, err := FileInfoHeader(name, fi, link)
 	if err != nil {
-		return err
+		return nil, err
 	}
 	if err := readSecurityXattrToTarHeader(path, hdr); err != nil {
-		return err
+		return nil, err
 	}
 	if err := readUserXattrToTarHeader(path, hdr); err != nil {
-		return err
+		return nil, err
 	}
 	if err := ReadFileFlagsToTarHeader(path, hdr); err != nil {
-		return err
+		return nil, err
 	}
 	if ta.CopyPass {
 		copyPassHeader(hdr)
@@ -568,18 +586,13 @@ func (ta *tarWriter) addFile(path, name string) error {
 	// if it's not a directory and has more than 1 link,
 	// it's hard linked, so set the type flag accordingly
 	if !fi.IsDir() && hasHardlinks(fi) {
-		inode, err := getInodeFromStat(fi.Sys())
-		if err != nil {
-			return err
-		}
+		inode := getInodeFromStat(fi.Sys())
 		// a link should have a name that it links too
 		// and that linked name should be first in the tar archive
 		if oldpath, ok := ta.SeenFiles[inode]; ok {
 			hdr.Typeflag = tar.TypeLink
 			hdr.Linkname = oldpath
-			hdr.Size = 0 // This Must be here for the writer math to add up!
-		} else {
-			ta.SeenFiles[inode] = name
+			hdr.Size = 0 // This must be here for the writer math to add up!
 		}
 	}
 
@@ -589,11 +602,11 @@ func (ta *tarWriter) addFile(path, name string) error {
 	if !strings.HasPrefix(filepath.Base(hdr.Name), WhiteoutPrefix) && !ta.IDMappings.Empty() {
 		fileIDPair, err := getFileUIDGID(fi.Sys())
 		if err != nil {
-			return err
+			return nil, err
 		}
 		hdr.Uid, hdr.Gid, err = ta.IDMappings.ToContainer(fileIDPair)
 		if err != nil {
-			return err
+			return nil, err
 		}
 	}
 
@@ -616,26 +629,48 @@ func (ta *tarWriter) addFile(path, name string) error {
 
 	maybeTruncateHeaderModTime(hdr)
 
+	result := &addFileData{
+		path: path,
+		hdr:  hdr,
+		fi:   fi,
+	}
 	if ta.WhiteoutConverter != nil {
-		wo, err := ta.WhiteoutConverter.ConvertWrite(hdr, path, fi)
+		// The WhiteoutConverter suggests a generic mechanism,
+		// but this code is only used to convert between
+		// overlayfs (on-disk) and AUFS (in the tar file)
+		// whiteouts, and is initiated because the overlayfs
+		// storage driver returns OverlayWhiteoutFormat from
+		// Driver.getWhiteoutFormat().
+		//
+		// For AUFS, a directory with all its contents deleted
+		// should be represented as a directory containing a
+		// magic whiteout empty regular file, hence the
+		// extraWhiteout header returned here.
+		result.extraWhiteout, err = ta.WhiteoutConverter.ConvertWrite(hdr, path, fi)
 		if err != nil {
-			return err
+			return nil, err
 		}
+	}
+
+	return result, nil
+}
+
+// addFile performs the write. An error here corrupts the tar file.
+func (ta *tarWriter) addFile(headers *addFileData) error {
+	hdr := headers.hdr
+	if headers.extraWhiteout != nil {
+		if hdr.Typeflag == tar.TypeReg && hdr.Size > 0 {
+			// If we write hdr with hdr.Size > 0, we have
+			// to write the body before we can write the
+			// extraWhiteout header. This can only happen
+			// if the contract for WhiteoutConverter is
+			// not honored, so bail out.
+			return fmt.Errorf("tar: cannot use extra whiteout with non-empty file %s", hdr.Name)
+		}
+		if err := ta.TarWriter.WriteHeader(hdr); err != nil {
+			return err
+		}
 
-		// If a new whiteout file exists, write original hdr, then
-		// replace hdr with wo to be written after. Whiteouts should
-		// always be written after the original. Note the original
-		// hdr may have been updated to be a whiteout with returning
-		// a whiteout header
-		if wo != nil {
-			if err := ta.TarWriter.WriteHeader(hdr); err != nil {
-				return err
-			}
-			if hdr.Typeflag == tar.TypeReg && hdr.Size > 0 {
-				return fmt.Errorf("tar: cannot use whiteout for non-empty file")
-			}
-			hdr = wo
-		}
+		hdr = headers.extraWhiteout
 	}
 
 	if err := ta.TarWriter.WriteHeader(hdr); err != nil {
@@ -643,7 +678,7 @@ func (ta *tarWriter) addFile(path, name string) error {
 	}
 
 	if hdr.Typeflag == tar.TypeReg && hdr.Size > 0 {
-		file, err := os.Open(path)
+		file, err := os.Open(headers.path)
 		if err != nil {
 			return err
 		}
@@ -661,6 +696,10 @@ func (ta *tarWriter) addFile(path, name string) error {
 		}
 	}
 
+	if !headers.fi.IsDir() && hasHardlinks(headers.fi) {
+		ta.SeenFiles[getInodeFromStat(headers.fi.Sys())] = headers.hdr.Name
+	}
+
 	return nil
 }
 
@@ -853,184 +892,189 @@ func extractTarFileEntry(path, extractDir string, hdr *tar.Header, reader io.Rea
 }
 
 // Tar creates an archive from the directory at `path`, and returns it as a
-// stream of bytes.
+// stream of bytes. This is a convenience wrapper for [TarWithOptions].
 func Tar(path string, compression Compression) (io.ReadCloser, error) {
 	return TarWithOptions(path, &TarOptions{Compression: compression})
 }
 
-// TarWithOptions creates an archive from the directory at `path`, only including files whose relative
-// paths are included in `options.IncludeFiles` (if non-nil) or not in `options.ExcludePatterns`.
-func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error) {
-	tarWithOptionsTo := func(dest io.WriteCloser, srcPath string, options *TarOptions) (result error) {
-		// Fix the source path to work with long path names. This is a no-op
-		// on platforms other than Windows.
-		srcPath = fixVolumePathPrefix(srcPath)
-		defer func() {
-			if err := dest.Close(); err != nil && result == nil {
-				result = err
-			}
-		}()
-
-		pm, err := fileutils.NewPatternMatcher(options.ExcludePatterns)
-		if err != nil {
-			return err
-		}
-
-		compressWriter, err := CompressStream(dest, options.Compression)
-		if err != nil {
-			return err
-		}
-
-		ta := newTarWriter(
-			idtools.NewIDMappingsFromMaps(options.UIDMaps, options.GIDMaps),
-			compressWriter,
-			options.ChownOpts,
-			options.Timestamp,
-		)
-		ta.WhiteoutConverter = GetWhiteoutConverter(options.WhiteoutFormat, options.WhiteoutData)
-		ta.CopyPass = options.CopyPass
-
-		includeFiles := options.IncludeFiles
-		defer func() {
-			if err := compressWriter.Close(); err != nil && result == nil {
-				result = err
-			}
-		}()
-
-		// this buffer is needed for the duration of this piped stream
-		defer pools.BufioWriter32KPool.Put(ta.Buffer)
-
-		// In general we log errors here but ignore them because
-		// during e.g. a diff operation the container can continue
-		// mutating the filesystem and we can see transient errors
-		// from this
-
-		stat, err := os.Lstat(srcPath)
-		if err != nil {
-			return err
-		}
-
-		if !stat.IsDir() {
-			// We can't later join a non-dir with any includes because the
-			// 'walk' will error if "file/." is stat-ed and "file" is not a
-			// directory. So, we must split the source path and use the
-			// basename as the include.
-			if len(includeFiles) > 0 {
-				logrus.Warn("Tar: Can't archive a file with includes")
-			}
-
-			dir, base := SplitPathDirEntry(srcPath)
-			srcPath = dir
-			includeFiles = []string{base}
-		}
-
-		if len(includeFiles) == 0 {
-			includeFiles = []string{"."}
-		}
-
-		seen := make(map[string]bool)
-
-		for _, include := range includeFiles {
-			rebaseName := options.RebaseNames[include]
-
-			walkRoot := getWalkRoot(srcPath, include)
-			if err := filepath.WalkDir(walkRoot, func(filePath string, d fs.DirEntry, err error) error {
-				if err != nil {
-					logrus.Errorf("Tar: Can't stat file %s to tar: %s", srcPath, err)
-					return nil
-				}
-
-				relFilePath, err := filepath.Rel(srcPath, filePath)
-				if err != nil || (!options.IncludeSourceDir && relFilePath == "." && d.IsDir()) {
-					// Error getting relative path OR we are looking
-					// at the source directory path. Skip in both situations.
-					return nil //nolint: nilerr
-				}
-
-				if options.IncludeSourceDir && include == "." && relFilePath != "." {
-					relFilePath = strings.Join([]string{".", relFilePath}, string(filepath.Separator))
-				}
-
-				skip := false
-
-				// If "include" is an exact match for the current file
-				// then even if there's an "excludePatterns" pattern that
-				// matches it, don't skip it. IOW, assume an explicit 'include'
-				// is asking for that file no matter what - which is true
-				// for some files, like .dockerignore and Dockerfile (sometimes)
-				if include != relFilePath {
-					matches, err := pm.IsMatch(relFilePath)
-					if err != nil {
-						return fmt.Errorf("matching %s: %w", relFilePath, err)
-					}
-					skip = matches
-				}
-
-				if skip {
-					// If we want to skip this file and its a directory
-					// then we should first check to see if there's an
-					// excludes pattern (e.g. !dir/file) that starts with this
-					// dir. If so then we can't skip this dir.
-
-					// Its not a dir then so we can just return/skip.
-					if !d.IsDir() {
-						return nil
-					}
-
-					// No exceptions (!...) in patterns so just skip dir
-					if !pm.Exclusions() {
-						return filepath.SkipDir
-					}
-
-					dirSlash := relFilePath + string(filepath.Separator)
-
-					for _, pat := range pm.Patterns() {
-						if !pat.Exclusion() {
-							continue
-						}
-						if strings.HasPrefix(pat.String()+string(filepath.Separator), dirSlash) {
-							// found a match - so can't skip this dir
-							return nil
-						}
-					}
-
-					// No matching exclusion dir so just skip dir
-					return filepath.SkipDir
-				}
-
-				if seen[relFilePath] {
-					return nil
-				}
-				seen[relFilePath] = true
-
-				// Rename the base resource.
-				if rebaseName != "" {
-					var replacement string
-					if rebaseName != string(filepath.Separator) {
-						// Special case the root directory to replace with an
-						// empty string instead so that we don't end up with
-						// double slashes in the paths.
-						replacement = rebaseName
-					}
-
-					relFilePath = strings.Replace(relFilePath, include, replacement, 1)
-				}
-
-				if err := ta.addFile(filePath, relFilePath); err != nil {
-					logrus.Errorf("Can't add file %s to tar: %s", filePath, err)
-					// if pipe is broken, stop writing tar stream to it
-					if err == io.ErrClosedPipe {
-						return err
-					}
-				}
-				return nil
-			}); err != nil {
-				return err
-			}
-		}
-		return ta.TarWriter.Close()
-	}
-
+func tarWithOptionsTo(dest io.WriteCloser, srcPath string, options *TarOptions) (result error) {
+	// Fix the source path to work with long path names. This is a no-op
+	// on platforms other than Windows.
+	srcPath = fixVolumePathPrefix(srcPath)
+	defer func() {
+		if err := dest.Close(); err != nil && result == nil {
+			result = err
+		}
+	}()
+
+	pm, err := fileutils.NewPatternMatcher(options.ExcludePatterns)
+	if err != nil {
+		return err
+	}
+
+	compressWriter, err := CompressStream(dest, options.Compression)
+	if err != nil {
+		return err
+	}
+
+	ta := newTarWriter(
+		idtools.NewIDMappingsFromMaps(options.UIDMaps, options.GIDMaps),
+		compressWriter,
+		options.ChownOpts,
+		options.Timestamp,
+	)
+	ta.WhiteoutConverter = GetWhiteoutConverter(options.WhiteoutFormat, options.WhiteoutData)
+	ta.CopyPass = options.CopyPass
+
+	includeFiles := options.IncludeFiles
+	defer func() {
+		if err := compressWriter.Close(); err != nil && result == nil {
+			result = err
+		}
+	}()
+
+	// this buffer is needed for the duration of this piped stream
+	defer pools.BufioWriter32KPool.Put(ta.Buffer)
+
+	// In general we log errors here but ignore them because
+	// during e.g. a diff operation the container can continue
+	// mutating the filesystem and we can see transient errors
+	// from this
+
+	stat, err := os.Lstat(srcPath)
+	if err != nil {
+		return err
+	}
+
+	if !stat.IsDir() {
+		// We can't later join a non-dir with any includes because the
+		// 'walk' will error if "file/." is stat-ed and "file" is not a
+		// directory. So, we must split the source path and use the
+		// basename as the include.
+		if len(includeFiles) > 0 {
+			logrus.Warn("Tar: Can't archive a file with includes")
+		}
+
+		dir, base := SplitPathDirEntry(srcPath)
+		srcPath = dir
+		includeFiles = []string{base}
+	}
+
+	if len(includeFiles) == 0 {
+		includeFiles = []string{"."}
+	}
+
+	seen := make(map[string]bool)
+
+	for _, include := range includeFiles {
+		rebaseName := options.RebaseNames[include]
+
+		walkRoot := getWalkRoot(srcPath, include)
+		if err := filepath.WalkDir(walkRoot, func(filePath string, d fs.DirEntry, err error) error {
+			if err != nil {
+				logrus.Errorf("Tar: Can't stat file %s to tar: %s", srcPath, err)
+				return nil
+			}
+
+			relFilePath, err := filepath.Rel(srcPath, filePath)
+			if err != nil || (!options.IncludeSourceDir && relFilePath == "." && d.IsDir()) {
+				// Error getting relative path OR we are looking
+				// at the source directory path. Skip in both situations.
+				return nil //nolint: nilerr
+			}
+
+			if options.IncludeSourceDir && include == "." && relFilePath != "." {
+				relFilePath = strings.Join([]string{".", relFilePath}, string(filepath.Separator))
+			}
+
+			skip := false
+
+			// If "include" is an exact match for the current file
+			// then even if there's an "excludePatterns" pattern that
+			// matches it, don't skip it. IOW, assume an explicit 'include'
+			// is asking for that file no matter what - which is true
+			// for some files, like .dockerignore and Dockerfile (sometimes)
+			if include != relFilePath {
+				matches, err := pm.IsMatch(relFilePath)
+				if err != nil {
+					return fmt.Errorf("matching %s: %w", relFilePath, err)
+				}
+				skip = matches
+			}
+
+			if skip {
+				// If we want to skip this file and its a directory
+				// then we should first check to see if there's an
+				// excludes pattern (e.g. !dir/file) that starts with this
+				// dir. If so then we can't skip this dir.
+
+				// Its not a dir then so we can just return/skip.
+				if !d.IsDir() {
+					return nil
+				}
+
+				// No exceptions (!...) in patterns so just skip dir
+				if !pm.Exclusions() {
+					return filepath.SkipDir
+				}
+
+				dirSlash := relFilePath + string(filepath.Separator)
+
+				for _, pat := range pm.Patterns() {
+					if !pat.Exclusion() {
+						continue
+					}
+					if strings.HasPrefix(pat.String()+string(filepath.Separator), dirSlash) {
+						// found a match - so can't skip this dir
+						return nil
+					}
+				}
+
+				// No matching exclusion dir so just skip dir
+				return filepath.SkipDir
+			}
+
+			if seen[relFilePath] {
+				return nil
+			}
+			seen[relFilePath] = true
+
+			// Rename the base resource.
+			if rebaseName != "" {
+				var replacement string
+				if rebaseName != string(filepath.Separator) {
+					// Special case the root directory to replace with an
+					// empty string instead so that we don't end up with
+					// double slashes in the paths.
+					replacement = rebaseName
+				}
+
+				relFilePath = strings.Replace(relFilePath, include, replacement, 1)
+			}
+
+			headers, err := ta.prepareAddFile(filePath, relFilePath)
+			if err != nil {
+				logrus.Errorf("Can't add file %s to tar: %s; skipping", filePath, err)
+			} else if headers != nil {
+				if err := ta.addFile(headers); err != nil {
+					return err
+				}
+			}
+			return nil
+		}); err != nil {
+			return err
+		}
+	}
+	return ta.TarWriter.Close()
+}
+
+// TarWithOptions creates an archive from the directory at `path`, only including files whose relative
+// paths are included in `options.IncludeFiles` (if non-nil) or not in `options.ExcludePatterns`.
+//
+// If used on a file system being modified concurrently,
+// TarWithOptions will create a valid tar archive, but may leave out
+// some files.
+func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error) {
	pipeReader, pipeWriter := io.Pipe()
 	go func() {
 		err := tarWithOptionsTo(pipeWriter, srcPath, options)
@@ -1446,7 +1490,7 @@ func NewTempArchive(src io.Reader, dir string) (*TempArchive, error) {
 	if _, err := io.Copy(f, src); err != nil {
 		return nil, err
 	}
-	if _, err := f.Seek(0, 0); err != nil {
+	if _, err := f.Seek(0, io.SeekStart); err != nil {
 		return nil, err
 	}
 	st, err := f.Stat()
2	vendor/github.com/containers/storage/pkg/archive/archive_unix.go (generated, vendored)
@@ -82,7 +82,7 @@ func setHeaderForSpecialDevice(hdr *tar.Header, name string, stat any) (err erro
 	return
 }
 
-func getInodeFromStat(stat any) (inode uint64, err error) {
+func getInodeFromStat(stat any) (inode uint64) {
 	s, ok := stat.(*syscall.Stat_t)
 
 	if ok {
2	vendor/github.com/containers/storage/pkg/archive/archive_windows.go (generated, vendored)
@@ -57,7 +57,7 @@ func setHeaderForSpecialDevice(hdr *tar.Header, name string, stat interface{}) (
 	return
 }
 
-func getInodeFromStat(stat interface{}) (inode uint64, err error) {
+func getInodeFromStat(stat interface{}) (inode uint64) {
 	// do nothing. no notion of Inode in stat on Windows
 	return
 }
8	vendor/github.com/containers/storage/pkg/archive/changes.go (generated, vendored)
@@ -481,8 +481,14 @@ func ExportChanges(dir string, changes []Change, uidMaps, gidMaps []idtools.IDMa
 			}
 		} else {
 			path := filepath.Join(dir, change.Path)
-			if err := ta.addFile(path, change.Path[1:]); err != nil {
+			headers, err := ta.prepareAddFile(path, change.Path[1:])
+			if err != nil {
 				logrus.Debugf("Can't add file %s to tar: %s", path, err)
+			} else if headers != nil {
+				if err := ta.addFile(headers); err != nil {
+					writer.CloseWithError(err)
+					return
+				}
 			}
 		}
 	}
101	vendor/github.com/containers/storage/pkg/chunked/compression_linux.go (generated, vendored)
@@ -7,6 +7,7 @@ import (
 	"fmt"
 	"io"
 	"maps"
+	"os"
 	"slices"
 	"strconv"
 	"time"
@@ -18,6 +19,7 @@ import (
 	"github.com/vbatts/tar-split/archive/tar"
 	"github.com/vbatts/tar-split/tar/asm"
 	"github.com/vbatts/tar-split/tar/storage"
+	"golang.org/x/sys/unix"
 )
 
 const (
@@ -157,10 +159,36 @@ func readEstargzChunkedManifest(blobStream ImageSourceSeekable, blobSize int64,
 	return manifestUncompressed, tocOffset, nil
 }
 
+func openTmpFile(tmpDir string) (*os.File, error) {
+	file, err := os.OpenFile(tmpDir, unix.O_TMPFILE|unix.O_RDWR|unix.O_CLOEXEC|unix.O_EXCL, 0o600)
+	if err == nil {
+		return file, nil
+	}
+	return openTmpFileNoTmpFile(tmpDir)
+}
+
+// openTmpFileNoTmpFile is a fallback used by openTmpFile when the underlying file system does not
+// support O_TMPFILE.
+func openTmpFileNoTmpFile(tmpDir string) (*os.File, error) {
+	file, err := os.CreateTemp(tmpDir, ".tmpfile")
+	if err != nil {
+		return nil, err
+	}
+	// Unlink the file immediately so that only the open fd refers to it.
+	_ = os.Remove(file.Name())
+	return file, nil
+}
+
 // readZstdChunkedManifest reads the zstd:chunked manifest from the seekable stream blobStream.
-// Returns (manifest blob, parsed manifest, tar-split blob or nil, manifest offset).
+// tmpDir is a directory where the tar-split temporary file is written to. The file is opened with
+// O_TMPFILE so that it is automatically removed when it is closed.
+// Returns (manifest blob, parsed manifest, tar-split file or nil, manifest offset).
+// The opened tar-split file’s position is unspecified.
 // It may return an error matching ErrFallbackToOrdinaryLayerDownload / errFallbackCanConvert.
-func readZstdChunkedManifest(blobStream ImageSourceSeekable, tocDigest digest.Digest, annotations map[string]string) (_ []byte, _ *minimal.TOC, _ []byte, _ int64, retErr error) {
+// The compressed parameter indicates whether the manifest and tar-split data are zstd-compressed
+// (true) or stored uncompressed (false). Uncompressed data is used only for an optimization to convert
+// a regular OCI layer to zstd:chunked when convert_images is set, and it is not used for distributed images.
+func readZstdChunkedManifest(tmpDir string, blobStream ImageSourceSeekable, tocDigest digest.Digest, annotations map[string]string, compressed bool) (_ []byte, _ *minimal.TOC, _ *os.File, _ int64, retErr error) {
 	offsetMetadata := annotations[minimal.ManifestInfoKey]
 	if offsetMetadata == "" {
 		return nil, nil, nil, 0, fmt.Errorf("%q annotation missing", minimal.ManifestInfoKey)
@@ -236,7 +264,7 @@ func readZstdChunkedManifest(blobStream ImageSourceSeekable, tocDigest digest.Di
 		return nil, nil, nil, 0, err
 	}
 
-	decodedBlob, err := decodeAndValidateBlob(manifest, manifestLengthUncompressed, tocDigest.String())
+	decodedBlob, err := decodeAndValidateBlob(manifest, manifestLengthUncompressed, tocDigest.String(), compressed)
 	if err != nil {
 		return nil, nil, nil, 0, fmt.Errorf("validating and decompressing TOC: %w", err)
 	}
@@ -245,7 +273,7 @@ func readZstdChunkedManifest(blobStream ImageSourceSeekable, tocDigest digest.Di
 		return nil, nil, nil, 0, fmt.Errorf("unmarshaling TOC: %w", err)
 	}
 
-	var decodedTarSplit []byte = nil
+	var decodedTarSplit *os.File
 	if toc.TarSplitDigest != "" {
 		if tarSplitChunk.Offset <= 0 {
 			return nil, nil, nil, 0, fmt.Errorf("TOC requires a tar-split, but the %s annotation does not describe a position", minimal.TarSplitInfoKey)
@@ -254,8 +282,16 @@ func readZstdChunkedManifest(blobStream ImageSourceSeekable, tocDigest digest.Di
 		if err != nil {
 			return nil, nil, nil, 0, err
 		}
-		decodedTarSplit, err = decodeAndValidateBlob(tarSplit, tarSplitLengthUncompressed, toc.TarSplitDigest.String())
+		decodedTarSplit, err = openTmpFile(tmpDir)
 		if err != nil {
 			return nil, nil, nil, 0, err
 		}
+		defer func() {
+			if retErr != nil {
+				decodedTarSplit.Close()
+			}
+		}()
+		if err := decodeAndValidateBlobToStream(tarSplit, decodedTarSplit, toc.TarSplitDigest.String(), compressed); err != nil {
+			return nil, nil, nil, 0, fmt.Errorf("validating and decompressing tar-split: %w", err)
+		}
 		// We use the TOC for creating on-disk files, but the tar-split for creating metadata
@@ -274,11 +310,11 @@ func readZstdChunkedManifest(blobStream ImageSourceSeekable, tocDigest digest.Di
 			return nil, nil, nil, 0, err
 		}
 	}
-	return decodedBlob, toc, decodedTarSplit, int64(manifestChunk.Offset), err
+	return decodedBlob, toc, decodedTarSplit, int64(manifestChunk.Offset), nil
 }
 
 // ensureTOCMatchesTarSplit validates that toc and tarSplit contain _exactly_ the same entries.
-func ensureTOCMatchesTarSplit(toc *minimal.TOC, tarSplit []byte) error {
+func ensureTOCMatchesTarSplit(toc *minimal.TOC, tarSplit *os.File) error {
 	pendingFiles := map[string]*minimal.FileMetadata{} // Name -> an entry in toc.Entries
 	for i := range toc.Entries {
 		e := &toc.Entries[i]
@@ -290,7 +326,11 @@ func ensureTOCMatchesTarSplit(toc *minimal.TOC, tarSplit []byte) error {
 		}
 	}
 
-	unpacker := storage.NewJSONUnpacker(bytes.NewReader(tarSplit))
+	if _, err := tarSplit.Seek(0, io.SeekStart); err != nil {
+		return err
+	}
+
+	unpacker := storage.NewJSONUnpacker(tarSplit)
 	if err := asm.IterateHeaders(unpacker, func(hdr *tar.Header) error {
 		e, ok := pendingFiles[hdr.Name]
 		if !ok {
@@ -320,10 +360,10 @@ func ensureTOCMatchesTarSplit(toc *minimal.TOC, tarSplit []byte) error {
 }
 
 // tarSizeFromTarSplit computes the total tarball size, using only the tarSplit metadata
-func tarSizeFromTarSplit(tarSplit []byte) (int64, error) {
+func tarSizeFromTarSplit(tarSplit io.Reader) (int64, error) {
 	var res int64 = 0
 
-	unpacker := storage.NewJSONUnpacker(bytes.NewReader(tarSplit))
+	unpacker := storage.NewJSONUnpacker(tarSplit)
 	for {
 		entry, err := unpacker.Next()
 		if err != nil {
@ -433,22 +473,33 @@ func ensureFileMetadataAttributesMatch(a, b *minimal.FileMetadata) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func decodeAndValidateBlob(blob []byte, lengthUncompressed uint64, expectedCompressedChecksum string) ([]byte, error) {
|
||||
func validateBlob(blob []byte, expectedCompressedChecksum string) error {
|
||||
d, err := digest.Parse(expectedCompressedChecksum)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("invalid digest %q: %w", expectedCompressedChecksum, err)
|
||||
return fmt.Errorf("invalid digest %q: %w", expectedCompressedChecksum, err)
|
||||
}
|
||||
|
||||
blobDigester := d.Algorithm().Digester()
|
||||
blobChecksum := blobDigester.Hash()
|
||||
if _, err := blobChecksum.Write(blob); err != nil {
|
||||
return nil, err
|
||||
return err
|
||||
}
|
||||
if blobDigester.Digest() != d {
|
||||
return nil, fmt.Errorf("invalid blob checksum, expected checksum %s, got %s", d, blobDigester.Digest())
|
||||
return fmt.Errorf("invalid blob checksum, expected checksum %s, got %s", d, blobDigester.Digest())
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func decodeAndValidateBlob(blob []byte, lengthUncompressed uint64, expectedCompressedChecksum string, compressed bool) ([]byte, error) {
|
||||
if err := validateBlob(blob, expectedCompressedChecksum); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
decoder, err := zstd.NewReader(nil) //nolint:contextcheck
|
||||
if !compressed {
|
||||
return blob, nil
|
||||
}
|
||||
|
||||
decoder, err := zstd.NewReader(nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
@ -457,3 +508,23 @@ func decodeAndValidateBlob(blob []byte, lengthUncompressed uint64, expectedCompr
|
|||
b := make([]byte, 0, lengthUncompressed)
|
||||
return decoder.DecodeAll(blob, b)
|
||||
}
|
||||
|
||||
func decodeAndValidateBlobToStream(blob []byte, w *os.File, expectedCompressedChecksum string, compressed bool) error {
|
||||
if err := validateBlob(blob, expectedCompressedChecksum); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !compressed {
|
||||
_, err := w.Write(blob)
|
||||
return err
|
||||
}
|
||||
|
||||
decoder, err := zstd.NewReader(bytes.NewReader(blob))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer decoder.Close()
|
||||
|
||||
_, err = decoder.WriteTo(w)
|
||||
return err
|
||||
}
|
||||
|
|
|
|||
vendor/github.com/containers/storage/pkg/chunked/compressor/compressor.go (65 changes, generated, vendored)

@@ -11,7 +11,6 @@ import (
 	"github.com/containers/storage/pkg/chunked/internal/minimal"
 	"github.com/containers/storage/pkg/ioutils"
-	"github.com/klauspost/compress/zstd"
 	"github.com/opencontainers/go-digest"
 	"github.com/vbatts/tar-split/archive/tar"
 	"github.com/vbatts/tar-split/tar/asm"
@@ -202,15 +201,15 @@ type tarSplitData struct {
 	compressed          *bytes.Buffer
 	digester            digest.Digester
 	uncompressedCounter *ioutils.WriteCounter
-	zstd                *zstd.Encoder
+	zstd                minimal.ZstdWriter
 	packer              storage.Packer
 }
 
-func newTarSplitData(level int) (*tarSplitData, error) {
+func newTarSplitData(createZstdWriter minimal.CreateZstdWriterFunc) (*tarSplitData, error) {
 	compressed := bytes.NewBuffer(nil)
 	digester := digest.Canonical.Digester()
 
-	zstdWriter, err := minimal.ZstdWriterWithLevel(io.MultiWriter(compressed, digester.Hash()), level)
+	zstdWriter, err := createZstdWriter(io.MultiWriter(compressed, digester.Hash()))
 	if err != nil {
 		return nil, err
 	}
@@ -227,11 +226,11 @@ func newTarSplitData(level int) (*tarSplitData, error) {
 	}, nil
 }
 
-func writeZstdChunkedStream(destFile io.Writer, outMetadata map[string]string, reader io.Reader, level int) error {
+func writeZstdChunkedStream(destFile io.Writer, outMetadata map[string]string, reader io.Reader, createZstdWriter minimal.CreateZstdWriterFunc) error {
 	// total written so far.  Used to retrieve partial offsets in the file
 	dest := ioutils.NewWriteCounter(destFile)
 
-	tarSplitData, err := newTarSplitData(level)
+	tarSplitData, err := newTarSplitData(createZstdWriter)
 	if err != nil {
 		return err
 	}
@@ -251,7 +250,7 @@ func writeZstdChunkedStream(destFile io.Writer, outMetadata map[string]string, r
 	buf := make([]byte, 4096)
 
-	zstdWriter, err := minimal.ZstdWriterWithLevel(dest, level)
+	zstdWriter, err := createZstdWriter(dest)
 	if err != nil {
 		return err
 	}
@@ -404,18 +403,11 @@ func writeZstdChunkedStream(destFile io.Writer, outMetadata map[string]string, r
 		return err
 	}
 
-	if err := zstdWriter.Flush(); err != nil {
-		zstdWriter.Close()
-		return err
-	}
 	if err := zstdWriter.Close(); err != nil {
 		return err
 	}
 	zstdWriter = nil
 
-	if err := tarSplitData.zstd.Flush(); err != nil {
-		return err
-	}
 	if err := tarSplitData.zstd.Close(); err != nil {
 		return err
 	}
@@ -427,7 +419,7 @@ func writeZstdChunkedStream(destFile io.Writer, outMetadata map[string]string, r
 		UncompressedSize: tarSplitData.uncompressedCounter.Count,
 	}
 
-	return minimal.WriteZstdChunkedManifest(dest, outMetadata, uint64(dest.Count), &ts, metadata, level)
+	return minimal.WriteZstdChunkedManifest(dest, outMetadata, uint64(dest.Count), &ts, metadata, createZstdWriter)
 }
 
 type zstdChunkedWriter struct {
@@ -454,7 +446,7 @@ func (w zstdChunkedWriter) Write(p []byte) (int, error) {
 	}
 }
 
-// zstdChunkedWriterWithLevel writes a zstd compressed tarball where each file is
+// makeZstdChunkedWriter writes a zstd compressed tarball where each file is
 // compressed separately so it can be addressed separately.  Idea based on CRFS:
 // https://github.com/google/crfs
 // The difference with CRFS is that the zstd compression is used instead of gzip.
@@ -469,12 +461,12 @@ func (w zstdChunkedWriter) Write(p []byte) (int, error) {
 // [SKIPPABLE FRAME 1]: [ZSTD SKIPPABLE FRAME, SIZE=MANIFEST LENGTH][MANIFEST]
 // [SKIPPABLE FRAME 2]: [ZSTD SKIPPABLE FRAME, SIZE=16][MANIFEST_OFFSET][MANIFEST_LENGTH][MANIFEST_LENGTH_UNCOMPRESSED][MANIFEST_TYPE][CHUNKED_ZSTD_MAGIC_NUMBER]
 // MANIFEST_OFFSET, MANIFEST_LENGTH, MANIFEST_LENGTH_UNCOMPRESSED and CHUNKED_ZSTD_MAGIC_NUMBER are 64 bits unsigned in little endian format.
-func zstdChunkedWriterWithLevel(out io.Writer, metadata map[string]string, level int) (io.WriteCloser, error) {
+func makeZstdChunkedWriter(out io.Writer, metadata map[string]string, createZstdWriter minimal.CreateZstdWriterFunc) (io.WriteCloser, error) {
 	ch := make(chan error, 1)
 	r, w := io.Pipe()
 
 	go func() {
-		ch <- writeZstdChunkedStream(out, metadata, r, level)
+		ch <- writeZstdChunkedStream(out, metadata, r, createZstdWriter)
 		_, _ = io.Copy(io.Discard, r) // Ordinarily writeZstdChunkedStream consumes all of r. If it fails, ensure the write end never blocks and eventually terminates.
 		r.Close()
 		close(ch)
@@ -493,5 +485,40 @@ func ZstdCompressor(r io.Writer, metadata map[string]string, level *int) (io.Wri
 		level = &l
 	}
 
-	return zstdChunkedWriterWithLevel(r, metadata, *level)
+	createZstdWriter := func(dest io.Writer) (minimal.ZstdWriter, error) {
+		return minimal.ZstdWriterWithLevel(dest, *level)
+	}
+
+	return makeZstdChunkedWriter(r, metadata, createZstdWriter)
 }
+
+type noCompression struct {
+	dest io.Writer
+}
+
+func (n *noCompression) Write(p []byte) (int, error) {
+	return n.dest.Write(p)
+}
+
+func (n *noCompression) Close() error {
+	return nil
+}
+
+func (n *noCompression) Flush() error {
+	return nil
+}
+
+func (n *noCompression) Reset(dest io.Writer) {
+	n.dest = dest
+}
+
+// NoCompression writes directly to the output file without any compression
+//
+// Such an output does not follow the zstd:chunked spec and cannot be generally consumed; this function
+// only exists for internal purposes and should not be called from outside c/storage.
+func NoCompression(r io.Writer, metadata map[string]string) (io.WriteCloser, error) {
+	createZstdWriter := func(dest io.Writer) (minimal.ZstdWriter, error) {
+		return &noCompression{dest: dest}, nil
+	}
+	return makeZstdChunkedWriter(r, metadata, createZstdWriter)
+}
vendor/github.com/containers/storage/pkg/chunked/internal/minimal/compression.go (15 changes, generated, vendored)

@@ -20,6 +20,15 @@ import (
 	"github.com/vbatts/tar-split/archive/tar"
 )
 
+// ZstdWriter is an interface that wraps standard io.WriteCloser and Reset() to reuse the compressor with a new writer.
+type ZstdWriter interface {
+	io.WriteCloser
+	Reset(dest io.Writer)
+}
+
+// CreateZstdWriterFunc is a function that creates a ZstdWriter for the provided destination writer.
+type CreateZstdWriterFunc func(dest io.Writer) (ZstdWriter, error)
+
 // TOC is short for Table of Contents and is used by the zstd:chunked
 // file format to effectively add an overall index into the contents
 // of a tarball; it also includes file metadata.
@@ -179,7 +188,7 @@ type TarSplitData struct {
 	UncompressedSize int64
 }
 
-func WriteZstdChunkedManifest(dest io.Writer, outMetadata map[string]string, offset uint64, tarSplitData *TarSplitData, metadata []FileMetadata, level int) error {
+func WriteZstdChunkedManifest(dest io.Writer, outMetadata map[string]string, offset uint64, tarSplitData *TarSplitData, metadata []FileMetadata, createZstdWriter CreateZstdWriterFunc) error {
 	// 8 is the size of the zstd skippable frame header + the frame size
 	const zstdSkippableFrameHeader = 8
 	manifestOffset := offset + zstdSkippableFrameHeader
@@ -198,7 +207,7 @@ func WriteZstdChunkedManifest(dest io.Writer, outMetadata map[string]string, off
 	}
 
 	var compressedBuffer bytes.Buffer
-	zstdWriter, err := ZstdWriterWithLevel(&compressedBuffer, level)
+	zstdWriter, err := createZstdWriter(&compressedBuffer)
 	if err != nil {
 		return err
 	}
@@ -244,7 +253,7 @@ func WriteZstdChunkedManifest(dest io.Writer, outMetadata map[string]string, off
 	return appendZstdSkippableFrame(dest, manifestDataLE)
 }
 
-func ZstdWriterWithLevel(dest io.Writer, level int) (*zstd.Encoder, error) {
+func ZstdWriterWithLevel(dest io.Writer, level int) (ZstdWriter, error) {
 	el := zstd.EncoderLevelFromZstd(level)
 	return zstd.NewWriter(dest, zstd.WithEncoderLevel(el))
 }
vendor/github.com/containers/storage/pkg/chunked/storage_linux.go (95 changes, generated, vendored)

@@ -2,7 +2,6 @@ package chunked
 
 import (
 	archivetar "archive/tar"
-	"bytes"
 	"context"
 	"encoding/base64"
 	"errors"
@@ -81,7 +80,7 @@ type chunkedDiffer struct {
 	convertToZstdChunked bool
 
 	// Chunked metadata
-	// This is usually set in GetDiffer, but if convertToZstdChunked, it is only computed in chunkedDiffer.ApplyDiff
+	// This is usually set in NewDiffer, but if convertToZstdChunked, it is only computed in chunkedDiffer.ApplyDiff
 	// ==========
 	// tocDigest is the digest of the TOC document when the layer
 	// is partially pulled, or "" if not relevant to consumers.
@@ -89,14 +88,14 @@ type chunkedDiffer struct {
 	tocOffset           int64
 	manifest            []byte
 	toc                 *minimal.TOC // The parsed contents of manifest, or nil if not yet available
-	tarSplit            []byte
+	tarSplit            *os.File
 	uncompressedTarSize int64 // -1 if unknown
 	// skipValidation is set to true if the individual files in
 	// the layer are trusted and should not be validated.
 	skipValidation bool
 
 	// Long-term caches
-	// This is set in GetDiffer, when the caller must not hold any storage locks, and later consumed in .ApplyDiff()
+	// This is set in NewDiffer, when the caller must not hold any storage locks, and later consumed in .ApplyDiff()
 	// ==========
 	layersCache *layersCache
 	copyBuffer  []byte
@@ -109,6 +108,7 @@ type chunkedDiffer struct {
 	zstdReader  *zstd.Decoder
 	rawReader   io.Reader
 	useFsVerity graphdriver.DifferFsVerity
+	used        bool // the differ object was already used and cannot be used again for .ApplyDiff
 }
 
 var xattrsToIgnore = map[string]any{
@@ -164,16 +164,13 @@ func (c *chunkedDiffer) convertTarToZstdChunked(destDirectory string, payload *o
 	defer diff.Close()
 
-	fd, err := unix.Open(destDirectory, unix.O_TMPFILE|unix.O_RDWR|unix.O_CLOEXEC, 0o600)
+	f, err := openTmpFile(destDirectory)
 	if err != nil {
-		return 0, nil, "", nil, &fs.PathError{Op: "open", Path: destDirectory, Err: err}
+		return 0, nil, "", nil, err
 	}
 
-	f := os.NewFile(uintptr(fd), destDirectory)
-
 	newAnnotations := make(map[string]string)
-	level := 1
-	chunked, err := compressor.ZstdCompressor(f, newAnnotations, &level)
+	chunked, err := compressor.NoCompression(f, newAnnotations)
 	if err != nil {
 		f.Close()
 		return 0, nil, "", nil, err
@@ -193,10 +190,20 @@ func (c *chunkedDiffer) convertTarToZstdChunked(destDirectory string, payload *o
 	return copied, newSeekableFile(f), convertedOutputDigester.Digest(), newAnnotations, nil
 }
 
-// GetDiffer returns a differ than can be used with ApplyDiffWithDiffer.
+func (c *chunkedDiffer) Close() error {
+	if c.tarSplit != nil {
+		err := c.tarSplit.Close()
+		c.tarSplit = nil
+		return err
+	}
+	return nil
+}
+
+// NewDiffer returns a differ than can be used with [Store.PrepareStagedLayer].
 // If it returns an error that matches ErrFallbackToOrdinaryLayerDownload, the caller can
 // retry the operation with a different method.
-func GetDiffer(ctx context.Context, store storage.Store, blobDigest digest.Digest, blobSize int64, annotations map[string]string, iss ImageSourceSeekable) (graphdriver.Differ, error) {
+// The caller must call Close() on the returned Differ.
+func NewDiffer(ctx context.Context, store storage.Store, blobDigest digest.Digest, blobSize int64, annotations map[string]string, iss ImageSourceSeekable) (graphdriver.Differ, error) {
 	pullOptions := parsePullOptions(store)
 
 	if !pullOptions.enablePartialImages {
@@ -259,7 +266,7 @@ func (e errFallbackCanConvert) Unwrap() error {
 	return e.err
}
 
-// getProperDiffer is an implementation detail of GetDiffer.
+// getProperDiffer is an implementation detail of NewDiffer.
 // It returns a “proper” differ (not a convert_images one) if possible.
 // May return an error matching ErrFallbackToOrdinaryLayerDownload if a fallback to an alternative
 // (either makeConvertFromRawDiffer, or a non-partial pull) is permissible.
@@ -332,14 +339,22 @@ func makeConvertFromRawDiffer(store storage.Store, blobDigest digest.Digest, blo
 
 // makeZstdChunkedDiffer sets up a chunkedDiffer for a zstd:chunked layer.
 // It may return an error matching ErrFallbackToOrdinaryLayerDownload / errFallbackCanConvert.
-func makeZstdChunkedDiffer(store storage.Store, blobSize int64, tocDigest digest.Digest, annotations map[string]string, iss ImageSourceSeekable, pullOptions pullOptions) (*chunkedDiffer, error) {
-	manifest, toc, tarSplit, tocOffset, err := readZstdChunkedManifest(iss, tocDigest, annotations)
+func makeZstdChunkedDiffer(store storage.Store, blobSize int64, tocDigest digest.Digest, annotations map[string]string, iss ImageSourceSeekable, pullOptions pullOptions) (_ *chunkedDiffer, retErr error) {
+	manifest, toc, tarSplit, tocOffset, err := readZstdChunkedManifest(store.RunRoot(), iss, tocDigest, annotations, true)
 	if err != nil { // May be ErrFallbackToOrdinaryLayerDownload / errFallbackCanConvert
 		return nil, fmt.Errorf("read zstd:chunked manifest: %w", err)
 	}
+	defer func() {
+		if tarSplit != nil && retErr != nil {
+			tarSplit.Close()
+		}
+	}()
 
 	var uncompressedTarSize int64 = -1
 	if tarSplit != nil {
+		if _, err := tarSplit.Seek(0, io.SeekStart); err != nil {
+			return nil, err
+		}
 		uncompressedTarSize, err = tarSizeFromTarSplit(tarSplit)
 		if err != nil {
 			return nil, fmt.Errorf("computing size from tar-split: %w", err)
@@ -643,27 +658,24 @@ func (o *originFile) OpenFile() (io.ReadCloser, error) {
 		return nil, err
 	}
 
-	if _, err := srcFile.Seek(o.Offset, 0); err != nil {
+	if _, err := srcFile.Seek(o.Offset, io.SeekStart); err != nil {
 		srcFile.Close()
 		return nil, err
 	}
 	return srcFile, nil
 }
 
-func (c *chunkedDiffer) prepareCompressedStreamToFile(partCompression compressedFileType, from io.Reader, mf *missingFileChunk) (compressedFileType, error) {
+func (c *chunkedDiffer) prepareCompressedStreamToFile(partCompression compressedFileType, mf *missingFileChunk) (compressedFileType, error) {
 	switch {
 	case partCompression == fileTypeHole:
 		// The entire part is a hole.  Do not need to read from a file.
+		c.rawReader = nil
 		return fileTypeHole, nil
 	case mf.Hole:
 		// Only the missing chunk in the requested part refers to a hole.
 		// The received data must be discarded.
-		limitReader := io.LimitReader(from, mf.CompressedSize)
-		_, err := io.CopyBuffer(io.Discard, limitReader, c.copyBuffer)
+		_, err := io.CopyBuffer(io.Discard, c.rawReader, c.copyBuffer)
 		return fileTypeHole, err
 	case partCompression == fileTypeZstdChunked:
-		c.rawReader = io.LimitReader(from, mf.CompressedSize)
 		if c.zstdReader == nil {
 			r, err := zstd.NewReader(c.rawReader)
 			if err != nil {
@@ -676,7 +688,6 @@ func (c *chunkedDiffer) prepareCompressedStreamToFile(partCompression compressed
 			}
 		}
 	case partCompression == fileTypeEstargz:
-		c.rawReader = io.LimitReader(from, mf.CompressedSize)
 		if c.gzipReader == nil {
 			r, err := pgzip.NewReader(c.rawReader)
 			if err != nil {
@@ -689,7 +700,7 @@ func (c *chunkedDiffer) prepareCompressedStreamToFile(partCompression compressed
 			}
 		}
 	case partCompression == fileTypeNoCompression:
-		c.rawReader = io.LimitReader(from, mf.UncompressedSize)
 		return fileTypeNoCompression, nil
 	default:
 		return partCompression, fmt.Errorf("unknown file type %q", c.fileType)
 	}
@@ -889,6 +900,7 @@ func (c *chunkedDiffer) storeMissingFiles(streams chan io.ReadCloser, errs chan
 	for _, missingPart := range missingParts {
 		var part io.ReadCloser
 		partCompression := c.fileType
+		readingFromLocalFile := false
 		switch {
 		case missingPart.Hole:
 			partCompression = fileTypeHole
@@ -899,6 +911,7 @@ func (c *chunkedDiffer) storeMissingFiles(streams chan io.ReadCloser, errs chan
 				return err
 			}
 			partCompression = fileTypeNoCompression
+			readingFromLocalFile = true
 		case missingPart.SourceChunk != nil:
 			select {
 			case p := <-streams:
@@ -932,7 +945,18 @@ func (c *chunkedDiffer) storeMissingFiles(streams chan io.ReadCloser, errs chan
 				goto exit
 			}
 
-			compression, err := c.prepareCompressedStreamToFile(partCompression, part, &mf)
+			c.rawReader = nil
+			if part != nil {
+				limit := mf.CompressedSize
+				// If we are reading from a source file, use the uncompressed size to limit the reader, because
+				// the compressed size refers to the original layer stream.
+				if readingFromLocalFile {
+					limit = mf.UncompressedSize
+				}
+				c.rawReader = io.LimitReader(part, limit)
+			}
+
+			compression, err := c.prepareCompressedStreamToFile(partCompression, &mf)
 			if err != nil {
 				Err = err
 				goto exit
@@ -1374,6 +1398,11 @@ func typeToOsMode(typ string) (os.FileMode, error) {
 }
 
 func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, differOpts *graphdriver.DifferOptions) (graphdriver.DriverWithDifferOutput, error) {
+	if c.used {
+		return graphdriver.DriverWithDifferOutput{}, fmt.Errorf("internal error: chunked differ already used")
+	}
+	c.used = true
+
 	defer c.layersCache.release()
 	defer func() {
 		if c.zstdReader != nil {
@@ -1419,7 +1448,9 @@ func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, diff
 		if err != nil {
 			return graphdriver.DriverWithDifferOutput{}, err
 		}
 
+		c.uncompressedTarSize = tarSize
+
 		// fileSource is a O_TMPFILE file descriptor, so we
 		// need to keep it open until the entire file is processed.
 		defer fileSource.Close()
@@ -1435,7 +1466,7 @@ func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, diff
 		if tocDigest == nil {
 			return graphdriver.DriverWithDifferOutput{}, fmt.Errorf("internal error: just-created zstd:chunked missing TOC digest")
 		}
-		manifest, toc, tarSplit, tocOffset, err := readZstdChunkedManifest(fileSource, *tocDigest, annotations)
+		manifest, toc, tarSplit, tocOffset, err := readZstdChunkedManifest(dest, fileSource, *tocDigest, annotations, false)
 		if err != nil {
 			return graphdriver.DriverWithDifferOutput{}, fmt.Errorf("read zstd:chunked manifest: %w", err)
 		}
@@ -1444,7 +1475,7 @@ func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, diff
 		stream = fileSource
 
 		// fill the chunkedDiffer with the data we just read.
-		c.fileType = fileTypeZstdChunked
+		c.fileType = fileTypeNoCompression
 		c.manifest = manifest
 		c.toc = toc
 		c.tarSplit = tarSplit
@@ -1842,7 +1873,10 @@ func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, diff
 		case c.pullOptions.insecureAllowUnpredictableImageContents:
 			// Oh well.  Skip the costly digest computation.
 		case output.TarSplit != nil:
-			metadata := tsStorage.NewJSONUnpacker(bytes.NewReader(output.TarSplit))
+			if _, err := output.TarSplit.Seek(0, io.SeekStart); err != nil {
+				return output, err
+			}
+			metadata := tsStorage.NewJSONUnpacker(output.TarSplit)
 			fg := newStagedFileGetter(dirFile, flatPathNameMap)
 			digester := digest.Canonical.Digester()
 			if err := asm.WriteOutputTarStream(fg, metadata, digester.Hash()); err != nil {
@@ -1850,7 +1884,7 @@ func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, diff
 			}
 			output.UncompressedDigest = digester.Digest()
 		default:
-			// We are checking for this earlier in GetDiffer, so this should not be reachable.
+			// We are checking for this earlier in NewDiffer, so this should not be reachable.
 			return output, fmt.Errorf(`internal error: layer's UncompressedDigest is unknown and "insecure_allow_unpredictable_image_contents" is not set`)
 		}
 	}
@@ -1861,6 +1895,9 @@ func (c *chunkedDiffer) ApplyDiff(dest string, options *archive.TarOptions, diff
 
 	output.Artifacts[fsVerityDigestsKey] = c.fsVerityDigests
 
+	// on success steal the reference to the tarSplit file
+	c.tarSplit = nil
+
 	return output, nil
 }
 
@@ -1962,7 +1999,7 @@ func validateChunkChecksum(chunk *minimal.FileMetadata, root, path string, offse
 	}
 	defer fd.Close()
 
-	if _, err := unix.Seek(int(fd.Fd()), offset, 0); err != nil {
+	if _, err := fd.Seek(offset, io.SeekStart); err != nil {
 		return false
 	}
vendor/github.com/containers/storage/pkg/chunked/storage_unsupported.go (5 changes, generated, vendored)

@@ -11,7 +11,8 @@ import (
 	digest "github.com/opencontainers/go-digest"
 )
 
-// GetDiffer returns a differ than can be used with ApplyDiffWithDiffer.
-func GetDiffer(ctx context.Context, store storage.Store, blobDigest digest.Digest, blobSize int64, annotations map[string]string, iss ImageSourceSeekable) (graphdriver.Differ, error) {
+// NewDiffer returns a differ than can be used with [Store.PrepareStagedLayer].
+// The caller must call Close() on the returned Differ.
+func NewDiffer(ctx context.Context, store storage.Store, blobDigest digest.Digest, blobSize int64, annotations map[string]string, iss ImageSourceSeekable) (graphdriver.Differ, error) {
 	return nil, newErrFallbackToOrdinaryLayerDownload(errors.New("format not supported on this system"))
 }
97
vendor/github.com/containers/storage/pkg/lockfile/lockfile.go
generated
vendored
97
vendor/github.com/containers/storage/pkg/lockfile/lockfile.go
generated
vendored
|
|
@ -6,6 +6,8 @@ import (
|
|||
"path/filepath"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/containers/storage/internal/rawfilelock"
|
||||
)
|
||||
|
||||
// A Locker represents a file lock where the file is used to cache an
|
||||
|
|
@ -55,13 +57,6 @@ type Locker interface {
|
|||
AssertLockedForWriting()
|
||||
}
|
||||
|
||||
type lockType byte
|
||||
|
||||
const (
|
||||
readLock lockType = iota
|
||||
writeLock
|
||||
)
|
||||
|
||||
// LockFile represents a file lock where the file is used to cache an
|
||||
// identifier of the last party that made changes to whatever's being protected
|
||||
// by the lock.
|
||||
|
|
@ -79,12 +74,12 @@ type LockFile struct {
|
|||
stateMutex *sync.Mutex
|
||||
counter int64
|
||||
lw LastWrite // A global value valid as of the last .Touch() or .Modified()
|
||||
lockType lockType
|
||||
lockType rawfilelock.LockType
|
||||
locked bool
|
||||
// The following fields are only modified on transitions between counter == 0 / counter != 0.
|
||||
// Thus, they can be safely accessed by users _that currently hold the LockFile_ without locking.
|
||||
// In other cases, they need to be protected using stateMutex.
|
||||
fd fileHandle
|
||||
fd rawfilelock.FileHandle
|
||||
}
|
||||
|
||||
var (
|
||||
|
|
@ -129,12 +124,12 @@ func (l *LockFile) Lock() {
|
|||
if l.ro {
|
||||
panic("can't take write lock on read-only lock file")
|
||||
}
|
||||
l.lock(writeLock)
|
||||
l.lock(rawfilelock.WriteLock)
|
||||
}
|
||||
|
||||
// RLock locks the lockfile as a reader.
|
||||
func (l *LockFile) RLock() {
|
||||
l.lock(readLock)
|
||||
l.lock(rawfilelock.ReadLock)
|
||||
}
|
||||
|
||||
// TryLock attempts to lock the lockfile as a writer. Panic if the lock is a read-only one.
|
||||
|
|
@ -142,12 +137,12 @@ func (l *LockFile) TryLock() error {
|
|||
if l.ro {
|
||||
panic("can't take write lock on read-only lock file")
|
||||
}
|
||||
return l.tryLock(writeLock)
|
||||
return l.tryLock(rawfilelock.WriteLock)
|
||||
}
|
||||
|
||||
// TryRLock attempts to lock the lockfile as a reader.
|
||||
func (l *LockFile) TryRLock() error {
|
||||
return l.tryLock(readLock)
|
||||
return l.tryLock(rawfilelock.ReadLock)
|
||||
}
|
||||
|
||||
// Unlock unlocks the lockfile.
|
||||
|
|
@ -172,9 +167,9 @@ func (l *LockFile) Unlock() {
|
|||
l.locked = false
|
||||
// Close the file descriptor on the last unlock, releasing the
|
||||
// file lock.
|
||||
unlockAndCloseHandle(l.fd)
|
||||
rawfilelock.UnlockAndCloseHandle(l.fd)
|
||||
}
|
||||
if l.lockType == readLock {
|
||||
if l.lockType == rawfilelock.ReadLock {
|
||||
l.rwMutex.RUnlock()
|
||||
} else {
|
||||
l.rwMutex.Unlock()
|
||||
|
|
@ -206,7 +201,7 @@ func (l *LockFile) AssertLockedForWriting() {
|
|||
|
||||
l.AssertLocked()
|
||||
// Like AssertLocked, don’t even bother with l.stateMutex.
|
||||
if l.lockType == readLock {
|
||||
if l.lockType == rawfilelock.ReadLock {
|
||||
panic("internal error: lock is not held for writing")
|
||||
}
|
||||
}
|
||||
|
|
@ -273,7 +268,7 @@ func (l *LockFile) Touch() error {
|
|||
return err
|
||||
}
|
||||
l.stateMutex.Lock()
|
||||
if !l.locked || (l.lockType == readLock) {
|
||||
if !l.locked || (l.lockType == rawfilelock.ReadLock) {
|
||||
panic("attempted to update last-writer in lockfile without the write lock")
|
||||
}
|
||||
defer l.stateMutex.Unlock()
|
||||
|
|
@ -324,6 +319,24 @@ func getLockfile(path string, ro bool) (*LockFile, error) {
|
|||
return lockFile, nil
|
||||
}
|
||||
|
||||
// openLock opens a lock file at the specified path, creating the parent directory if it does not exist.
|
||||
func openLock(path string, readOnly bool) (rawfilelock.FileHandle, error) {
|
||||
fd, err := rawfilelock.OpenLock(path, readOnly)
|
||||
if err == nil {
|
||||
return fd, nil
|
||||
}
|
||||
|
||||
+	// the directory of the lockfile seems to be removed, try to create it
+	if os.IsNotExist(err) {
+		if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
+			return fd, fmt.Errorf("creating lock file directory: %w", err)
+		}
+
+		return openLock(path, readOnly)
+	}
+
+	return fd, &os.PathError{Op: "open", Path: path, Err: err}
+}
+
 // createLockFileForPath returns new *LockFile object, possibly (depending on the platform)
 // working inter-process and associated with the specified path.
 //

@@ -343,11 +356,11 @@ func createLockFileForPath(path string, ro bool) (*LockFile, error) {
 	if err != nil {
 		return nil, err
 	}
-	unlockAndCloseHandle(fd)
+	rawfilelock.UnlockAndCloseHandle(fd)

-	lType := writeLock
+	lType := rawfilelock.WriteLock
 	if ro {
-		lType = readLock
+		lType = rawfilelock.ReadLock
 	}

 	return &LockFile{

@@ -362,40 +375,10 @@ func createLockFileForPath(path string, ro bool) (*LockFile, error) {
 	}, nil
 }

-// openLock opens the file at path and returns the corresponding file
-// descriptor. The path is opened either read-only or read-write,
-// depending on the value of ro argument.
-//
-// openLock will create the file and its parent directories,
-// if necessary.
-func openLock(path string, ro bool) (fd fileHandle, err error) {
-	flags := os.O_CREATE
-	if ro {
-		flags |= os.O_RDONLY
-	} else {
-		flags |= os.O_RDWR
-	}
-	fd, err = openHandle(path, flags)
-	if err == nil {
-		return fd, nil
-	}
-
-	// the directory of the lockfile seems to be removed, try to create it
-	if os.IsNotExist(err) {
-		if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
-			return fd, fmt.Errorf("creating lock file directory: %w", err)
-		}
-
-		return openLock(path, ro)
-	}
-
-	return fd, &os.PathError{Op: "open", Path: path, Err: err}
-}
-
 // lock locks the lockfile via syscall based on the specified type and
 // command.
-func (l *LockFile) lock(lType lockType) {
-	if lType == readLock {
+func (l *LockFile) lock(lType rawfilelock.LockType) {
+	if lType == rawfilelock.ReadLock {
 		l.rwMutex.RLock()
 	} else {
 		l.rwMutex.Lock()

@@ -413,7 +396,7 @@ func (l *LockFile) lock(lType lockType) {
 		// Optimization: only use the (expensive) syscall when
 		// the counter is 0. In this case, we're either the first
 		// reader lock or a writer lock.
-		if err := lockHandle(l.fd, lType, false); err != nil {
+		if err := rawfilelock.LockFile(l.fd, lType); err != nil {
 			panic(err)
 		}
 	}

@@ -424,10 +407,10 @@ func (l *LockFile) lock(lType lockType) {

 // lock locks the lockfile via syscall based on the specified type and
 // command.
-func (l *LockFile) tryLock(lType lockType) error {
+func (l *LockFile) tryLock(lType rawfilelock.LockType) error {
 	var success bool
 	var rwMutexUnlocker func()
-	if lType == readLock {
+	if lType == rawfilelock.ReadLock {
 		success = l.rwMutex.TryRLock()
 		rwMutexUnlocker = l.rwMutex.RUnlock
 	} else {

@@ -451,8 +434,8 @@ func (l *LockFile) tryLock(lType lockType) error {
 		// Optimization: only use the (expensive) syscall when
 		// the counter is 0. In this case, we're either the first
 		// reader lock or a writer lock.
-		if err = lockHandle(l.fd, lType, true); err != nil {
-			closeHandle(fd)
+		if err = rawfilelock.TryLockFile(l.fd, lType); err != nil {
+			rawfilelock.CloseHandle(fd)
 			rwMutexUnlocker()
 			return err
 		}
vendor/github.com/containers/storage/pkg/lockfile/lockfile_unix.go (generated, vendored): 40 changed lines
@@ -9,8 +9,6 @@ import (
 	"golang.org/x/sys/unix"
 )

-type fileHandle uintptr
-
 // GetLastWrite returns a LastWrite value corresponding to current state of the lock.
 // This is typically called before (_not after_) loading the state when initializing a consumer
 // of the data protected by the lock.

@@ -66,41 +64,3 @@ func (l *LockFile) TouchedSince(when time.Time) bool {
 	touched := time.Unix(mtim.Unix())
 	return when.Before(touched)
 }
-
-func openHandle(path string, mode int) (fileHandle, error) {
-	mode |= unix.O_CLOEXEC
-	fd, err := unix.Open(path, mode, 0o644)
-	return fileHandle(fd), err
-}
-
-func lockHandle(fd fileHandle, lType lockType, nonblocking bool) error {
-	fType := unix.F_RDLCK
-	if lType != readLock {
-		fType = unix.F_WRLCK
-	}
-	lk := unix.Flock_t{
-		Type:   int16(fType),
-		Whence: int16(unix.SEEK_SET),
-		Start:  0,
-		Len:    0,
-	}
-	cmd := unix.F_SETLKW
-	if nonblocking {
-		cmd = unix.F_SETLK
-	}
-	for {
-		err := unix.FcntlFlock(uintptr(fd), cmd, &lk)
-		if err == nil || nonblocking {
-			return err
-		}
-		time.Sleep(10 * time.Millisecond)
-	}
-}
-
-func unlockAndCloseHandle(fd fileHandle) {
-	unix.Close(int(fd))
-}
-
-func closeHandle(fd fileHandle) {
-	unix.Close(int(fd))
-}
vendor/github.com/containers/storage/pkg/lockfile/lockfile_windows.go (generated, vendored): 36 changed lines
@@ -14,8 +14,6 @@ const (
 	allBytes = ^uint32(0)
 )

-type fileHandle windows.Handle
-
 // GetLastWrite returns a LastWrite value corresponding to current state of the lock.
 // This is typically called before (_not after_) loading the state when initializing a consumer
 // of the data protected by the lock.

@@ -73,37 +71,3 @@ func (l *LockFile) TouchedSince(when time.Time) bool {
 	}
 	return when.Before(stat.ModTime())
 }
-
-func openHandle(path string, mode int) (fileHandle, error) {
-	mode |= windows.O_CLOEXEC
-	fd, err := windows.Open(path, mode, windows.S_IWRITE)
-	return fileHandle(fd), err
-}
-
-func lockHandle(fd fileHandle, lType lockType, nonblocking bool) error {
-	flags := 0
-	if lType != readLock {
-		flags = windows.LOCKFILE_EXCLUSIVE_LOCK
-	}
-	if nonblocking {
-		flags |= windows.LOCKFILE_FAIL_IMMEDIATELY
-	}
-	ol := new(windows.Overlapped)
-	if err := windows.LockFileEx(windows.Handle(fd), uint32(flags), reserved, allBytes, allBytes, ol); err != nil {
-		if nonblocking {
-			return err
-		}
-		panic(err)
-	}
-	return nil
-}
-
-func unlockAndCloseHandle(fd fileHandle) {
-	ol := new(windows.Overlapped)
-	windows.UnlockFileEx(windows.Handle(fd), reserved, allBytes, allBytes, ol)
-	closeHandle(fd)
-}
-
-func closeHandle(fd fileHandle) {
-	windows.Close(windows.Handle(fd))
-}
vendor/github.com/containers/storage/store.go (generated, vendored): 93 changed lines
@@ -22,6 +22,7 @@ import (

 	drivers "github.com/containers/storage/drivers"
 	"github.com/containers/storage/internal/dedup"
+	"github.com/containers/storage/internal/tempdir"
 	"github.com/containers/storage/pkg/archive"
 	"github.com/containers/storage/pkg/directory"
 	"github.com/containers/storage/pkg/idtools"

@@ -362,15 +363,11 @@ type Store interface {
 	// }
 	ApplyDiff(to string, diff io.Reader) (int64, error)

-	// ApplyDiffWithDiffer applies a diff to a layer.
-	// It is the caller responsibility to clean the staging directory if it is not
-	// successfully applied with ApplyStagedLayer.
-	// Deprecated: Use PrepareStagedLayer instead. ApplyDiffWithDiffer is going to be removed in a future release
-	ApplyDiffWithDiffer(to string, options *drivers.ApplyDiffWithDifferOpts, differ drivers.Differ) (*drivers.DriverWithDifferOutput, error)
-
 	// PrepareStagedLayer applies a diff to a layer.
 	// It is the caller responsibility to clean the staging directory if it is not
 	// successfully applied with ApplyStagedLayer.
+	// The caller must ensure [Store.ApplyStagedLayer] or [Store.CleanupStagedLayer] is called eventually
+	// with the returned [drivers.DriverWithDifferOutput] object.
 	PrepareStagedLayer(options *drivers.ApplyDiffWithDifferOpts, differ drivers.Differ) (*drivers.DriverWithDifferOutput, error)

 	// ApplyStagedLayer combines the functions of creating a layer and using the staging
@@ -1449,16 +1446,7 @@ func (s *store) writeToAllStores(fn func(rlstore rwLayerStore) error) error {
 // On entry:
 // - rlstore must be locked for writing
 func (s *store) canUseShifting(uidmap, gidmap []idtools.IDMap) bool {
-	if !s.graphDriver.SupportsShifting() {
-		return false
-	}
-	if uidmap != nil && !idtools.IsContiguous(uidmap) {
-		return false
-	}
-	if gidmap != nil && !idtools.IsContiguous(gidmap) {
-		return false
-	}
-	return true
+	return s.graphDriver.SupportsShifting(uidmap, gidmap)
 }

 // On entry:

@@ -1771,7 +1759,7 @@ func (s *store) imageTopLayerForMapping(image *Image, ristore roImageStore, rlst
 	}
 	// By construction, createMappedLayer can only be true if ristore == s.imageStore.
 	if err = s.imageStore.addMappedTopLayer(image.ID, mappedLayer.ID); err != nil {
-		if err2 := rlstore.Delete(mappedLayer.ID); err2 != nil {
+		if err2 := rlstore.deleteWhileHoldingLock(mappedLayer.ID); err2 != nil {
 			err = fmt.Errorf("deleting layer %q: %v: %w", mappedLayer.ID, err2, err)
 		}
 		return nil, fmt.Errorf("registering ID-mapped layer with image %q: %w", image.ID, err)

@@ -1956,7 +1944,7 @@ func (s *store) CreateContainer(id string, names []string, image, layer, metadat
 	}
 	container, err := s.containerStore.create(id, names, imageID, layer, &options)
 	if err != nil || container == nil {
-		if err2 := rlstore.Delete(layer); err2 != nil {
+		if err2 := rlstore.deleteWhileHoldingLock(layer); err2 != nil {
 			if err == nil {
 				err = fmt.Errorf("deleting layer %#v: %w", layer, err2)
 			} else {

@@ -2553,7 +2541,13 @@ func (s *store) Lookup(name string) (string, error) {
 	return "", ErrLayerUnknown
 }

-func (s *store) DeleteLayer(id string) error {
+func (s *store) DeleteLayer(id string) (retErr error) {
+	cleanupFunctions := []tempdir.CleanupTempDirFunc{}
+	defer func() {
+		if cleanupErr := tempdir.CleanupTemporaryDirectories(cleanupFunctions...); cleanupErr != nil {
+			retErr = errors.Join(cleanupErr, retErr)
+		}
+	}()
 	return s.writeToAllStores(func(rlstore rwLayerStore) error {
 		if rlstore.Exists(id) {
 			if l, err := rlstore.Get(id); err != nil {

@@ -2587,7 +2581,9 @@ func (s *store) DeleteLayer(id string) error {
 				return fmt.Errorf("layer %v used by container %v: %w", id, container.ID, ErrLayerUsedByContainer)
 			}
 		}
-		if err := rlstore.Delete(id); err != nil {
+		cf, err := rlstore.deferredDelete(id)
+		cleanupFunctions = append(cleanupFunctions, cf...)
+		if err != nil {
 			return fmt.Errorf("delete layer %v: %w", id, err)
 		}
@@ -2604,8 +2600,14 @@ func (s *store) DeleteLayer(id string) error {
 	})
 }

-func (s *store) DeleteImage(id string, commit bool) (layers []string, err error) {
+func (s *store) DeleteImage(id string, commit bool) (layers []string, retErr error) {
 	layersToRemove := []string{}
+	cleanupFunctions := []tempdir.CleanupTempDirFunc{}
+	defer func() {
+		if cleanupErr := tempdir.CleanupTemporaryDirectories(cleanupFunctions...); cleanupErr != nil {
+			retErr = errors.Join(cleanupErr, retErr)
+		}
+	}()
 	if err := s.writeToAllStores(func(rlstore rwLayerStore) error {
 		// Delete image from all available imagestores configured to be used.
 		imageFound := false

@@ -2711,7 +2713,9 @@ func (s *store) DeleteImage(id string, commit bool) (layers []string, err error)
 	}
 	if commit {
 		for _, layer := range layersToRemove {
-			if err = rlstore.Delete(layer); err != nil {
+			cf, err := rlstore.deferredDelete(layer)
+			cleanupFunctions = append(cleanupFunctions, cf...)
+			if err != nil {
 				return err
 			}
 		}

@@ -2723,7 +2727,13 @@ func (s *store) DeleteImage(id string, commit bool) (layers []string, err error)
 	return layersToRemove, nil
 }

-func (s *store) DeleteContainer(id string) error {
+func (s *store) DeleteContainer(id string) (retErr error) {
+	cleanupFunctions := []tempdir.CleanupTempDirFunc{}
+	defer func() {
+		if cleanupErr := tempdir.CleanupTemporaryDirectories(cleanupFunctions...); cleanupErr != nil {
+			retErr = errors.Join(cleanupErr, retErr)
+		}
+	}()
 	return s.writeToAllStores(func(rlstore rwLayerStore) error {
 		if !s.containerStore.Exists(id) {
 			return ErrNotAContainer

@@ -2739,7 +2749,9 @@ func (s *store) DeleteContainer(id string) error {
 		// the container record that refers to it, effectively losing
 		// track of it
 		if rlstore.Exists(container.LayerID) {
-			if err := rlstore.Delete(container.LayerID); err != nil {
+			cf, err := rlstore.deferredDelete(container.LayerID)
+			cleanupFunctions = append(cleanupFunctions, cf...)
+			if err != nil {
 				return err
 			}
 		}

@@ -2765,12 +2777,20 @@ func (s *store) DeleteContainer(id string) error {
 	})
 }

-func (s *store) Delete(id string) error {
+func (s *store) Delete(id string) (retErr error) {
+	cleanupFunctions := []tempdir.CleanupTempDirFunc{}
+	defer func() {
+		if cleanupErr := tempdir.CleanupTemporaryDirectories(cleanupFunctions...); cleanupErr != nil {
+			retErr = errors.Join(cleanupErr, retErr)
+		}
+	}()
 	return s.writeToAllStores(func(rlstore rwLayerStore) error {
 		if s.containerStore.Exists(id) {
 			if container, err := s.containerStore.Get(id); err == nil {
 				if rlstore.Exists(container.LayerID) {
-					if err = rlstore.Delete(container.LayerID); err != nil {
+					cf, err := rlstore.deferredDelete(container.LayerID)
+					cleanupFunctions = append(cleanupFunctions, cf...)
+					if err != nil {
 						return err
 					}
 					if err = s.containerStore.Delete(id); err != nil {

@@ -2794,7 +2814,9 @@ func (s *store) Delete(id string) error {
 			return s.imageStore.Delete(id)
 		}
 		if rlstore.Exists(id) {
-			return rlstore.Delete(id)
+			cf, err := rlstore.deferredDelete(id)
+			cleanupFunctions = append(cleanupFunctions, cf...)
+			return err
 		}
 		return ErrLayerUnknown
 	})
@@ -3132,6 +3154,12 @@ func (s *store) Diff(from, to string, options *DiffOptions) (io.ReadCloser, erro
 }

 func (s *store) ApplyStagedLayer(args ApplyStagedLayerOptions) (*Layer, error) {
+	defer func() {
+		if args.DiffOutput.TarSplit != nil {
+			args.DiffOutput.TarSplit.Close()
+			args.DiffOutput.TarSplit = nil
+		}
+	}()
 	rlstore, rlstores, err := s.bothLayerStoreKinds()
 	if err != nil {
 		return nil, err

@@ -3163,6 +3191,10 @@ func (s *store) ApplyStagedLayer(args ApplyStagedLayerOptions) (*Layer, error) {
 }

 func (s *store) CleanupStagedLayer(diffOutput *drivers.DriverWithDifferOutput) error {
+	if diffOutput.TarSplit != nil {
+		diffOutput.TarSplit.Close()
+		diffOutput.TarSplit = nil
+	}
 	_, err := writeToLayerStore(s, func(rlstore rwLayerStore) (struct{}, error) {
 		return struct{}{}, rlstore.CleanupStagingDirectory(diffOutput.Target)
 	})

@@ -3177,13 +3209,6 @@ func (s *store) PrepareStagedLayer(options *drivers.ApplyDiffWithDifferOpts, dif
 	return rlstore.applyDiffWithDifferNoLock(options, differ)
 }

-func (s *store) ApplyDiffWithDiffer(to string, options *drivers.ApplyDiffWithDifferOpts, differ drivers.Differ) (*drivers.DriverWithDifferOutput, error) {
-	if to != "" {
-		return nil, fmt.Errorf("ApplyDiffWithDiffer does not support non-empty 'layer' parameter")
-	}
-	return s.PrepareStagedLayer(options, differ)
-}
-
 func (s *store) DifferTarget(id string) (string, error) {
 	return writeToLayerStore(s, func(rlstore rwLayerStore) (string, error) {
 		if rlstore.Exists(id) {
vendor/github.com/containers/storage/types/options.go (generated, vendored): 24 changed lines
@@ -160,19 +160,17 @@ func loadStoreOptionsFromConfFile(storageConf string) (StoreOptions, error) {
 		defaultRootlessGraphRoot = storageOpts.GraphRoot
 		storageOpts = StoreOptions{}
 		reloadConfigurationFileIfNeeded(storageConf, &storageOpts)
-		if usePerUserStorage() {
-			// If the file did not specify a graphroot or runroot,
-			// set sane defaults so we don't try and use root-owned
-			// directories
-			if storageOpts.RunRoot == "" {
-				storageOpts.RunRoot = defaultRootlessRunRoot
-			}
-			if storageOpts.GraphRoot == "" {
-				if storageOpts.RootlessStoragePath != "" {
-					storageOpts.GraphRoot = storageOpts.RootlessStoragePath
-				} else {
-					storageOpts.GraphRoot = defaultRootlessGraphRoot
-				}
-			}
-		}
+		// If the file did not specify a graphroot or runroot,
+		// set sane defaults so we don't try and use root-owned
+		// directories
+		if storageOpts.RunRoot == "" {
+			storageOpts.RunRoot = defaultRootlessRunRoot
+		}
+		if storageOpts.GraphRoot == "" {
+			if storageOpts.RootlessStoragePath != "" {
+				storageOpts.GraphRoot = storageOpts.RootlessStoragePath
+			} else {
+				storageOpts.GraphRoot = defaultRootlessGraphRoot
+			}
+		}
 	}
vendor/github.com/containers/storage/userns.go (generated, vendored): 2 changed lines
@@ -202,7 +202,7 @@ outer:
 		return 0, err
 	}
 	defer func() {
-		if err2 := rlstore.Delete(clayer.ID); err2 != nil {
+		if err2 := rlstore.deleteWhileHoldingLock(clayer.ID); err2 != nil {
 			if retErr == nil {
 				retErr = fmt.Errorf("deleting temporary layer %#v: %w", clayer.ID, err2)
 			} else {