fixed paths, created ci/cd workflow

This commit is contained in:
robojerk 2025-07-14 14:22:06 -07:00
parent 14d7da71e8
commit 29b9675689
26 changed files with 7407 additions and 358 deletions

.forgejo/README.md Normal file

@@ -0,0 +1,142 @@
# Forgejo Actions for apt-layer
This directory contains Forgejo Actions workflows for automated compilation and testing of the apt-layer tool.
## Workflows
### 1. `compile-apt-layer.yml` - Main Compilation Workflow
**Triggers:**
- Push to `main` or `master` branch (when `src/apt-layer/` files change)
- Pull requests to `main` or `master` branch
- Manual trigger via workflow_dispatch
**What it does:**
1. Sets up Ubuntu environment with required dependencies (jq, dos2unix, bash)
2. Compiles apt-layer using `src/apt-layer/compile.sh`
3. Compiles installer with latest paths.json using `src/apt-layer/compile-installer.sh`
4. Validates both compiled scripts (syntax, functionality)
5. Creates a detailed compilation report
6. Uploads artifacts (compiled scripts and report)
7. **Automatically commits both compiled scripts to the repository** (only on main/master)
**Output:**
- Compiled `apt-layer.sh` in the repository root
- Compiled `install-apt-layer.sh` with embedded paths.json in the repository root
- Compilation report as an artifact
- Both compiled scripts as artifacts
### 2. `test-compile.yml` - Test Compilation Workflow
**Triggers:**
- Manual trigger only (workflow_dispatch)
**What it does:**
1. Tests compilation without affecting the main repository
2. Creates `apt-layer-test.sh` for testing purposes
3. Validates the test compilation
4. Uploads test script as an artifact
**Use case:** Testing compilation changes before pushing to the main branch
## How to Use
### Automatic Compilation
1. **Make changes** to files in `src/apt-layer/`
2. **Commit and push** to main/master branch
3. **Forgejo will automatically:**
- Detect the changes
- Run the compilation workflow
- Compile apt-layer
- Commit the compiled script back to the repository
- Provide artifacts for download
### Manual Testing
1. **Go to your Forgejo repository**
2. **Navigate to Actions tab**
3. **Select "Test apt-layer Compilation"**
4. **Click "Run workflow"**
5. **Download the test script** from the artifacts
### Viewing Results
- **Actions tab:** See workflow runs and logs
- **Artifacts:** Download compiled scripts and reports
- **Repository:** The compiled `apt-layer.sh` will be in the root directory
## Dependencies
The workflows automatically install:
- `jq` - JSON processing
- `dos2unix` - Line ending conversion
- `bash` - Script execution
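To confirm a workstation has the same tools before compiling by hand, a small sketch (the `check_deps` helper is hypothetical, not part of the workflows):

```shell
# Hypothetical helper mirroring the CI dependency list: print any tool
# from the argument list that is not on PATH.
check_deps() {
    missing=""
    for dep in "$@"; do
        command -v "$dep" >/dev/null 2>&1 || missing="$missing $dep"
    done
    echo "$missing"
}

# Empty output means everything the workflows install is already present.
check_deps jq dos2unix bash
```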
## Configuration
### Path-based Triggers
The main workflow only triggers when these paths change:
- `src/apt-layer/**` - Any apt-layer source files (including config and templates)
- `.forgejo/workflows/compile-apt-layer.yml` - The workflow itself
### Branch Protection
The auto-commit feature only works on:
- `main` branch
- `master` branch
### Artifact Retention
- **Compilation reports:** 30 days
- **Compiled scripts:** 30 days
- **Test scripts:** 7 days
## Troubleshooting
### Common Issues
1. **Compilation fails:**
- Check the workflow logs for specific errors
- Verify all dependencies are available
- Check JSON syntax in config files
2. **Auto-commit doesn't work:**
- Ensure you're on main/master branch
- Check repository permissions for Actions
- Verify the workflow has write access
3. **Test workflow fails:**
- Use the test workflow to debug compilation issues
- Check artifact downloads for the test script
### Manual Compilation
If you need to compile locally:
```bash
# Compile apt-layer
cd src/apt-layer
chmod +x compile.sh
./compile.sh -o ../../apt-layer.sh
# Compile installer with latest paths.json
chmod +x compile-installer.sh
./compile-installer.sh -o ../../install-apt-layer.sh
```
## Benefits
1. **Automated Quality Assurance:** Every commit is automatically compiled and tested
2. **Consistent Builds:** Same environment and dependencies every time
3. **Artifact Management:** Easy access to compiled scripts and reports
4. **Version Control:** Compiled scripts are tracked in git
5. **Testing:** Separate test workflow for safe experimentation
## Security
- Workflows run in isolated Ubuntu environments
- No sensitive data is exposed in logs
- Compiled scripts are validated before commit
- Artifacts have limited retention periods

.forgejo/workflows/compile-apt-layer.yml Normal file

@@ -0,0 +1,214 @@
name: Compile apt-layer
on:
  push:
    branches: [ main, master ]
    paths:
      - 'src/apt-layer/**'
      - '.forgejo/workflows/compile-apt-layer.yml'
  pull_request:
    branches: [ main, master ]
    paths:
      - 'src/apt-layer/**'
      - '.forgejo/workflows/compile-apt-layer.yml'
  workflow_dispatch: # Allow manual triggering
jobs:
  compile:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up environment
        run: |
          echo "Setting up compilation environment..."
          sudo apt-get update
          sudo apt-get install -y jq dos2unix bash
      - name: Make compile scripts executable
        run: |
          chmod +x src/apt-layer/compile.sh
          chmod +x src/apt-layer/compile-installer.sh
      - name: Compile apt-layer
        run: |
          echo "Compiling apt-layer..."
          cd src/apt-layer
          ./compile.sh -o ../../apt-layer.sh
      - name: Compile installer
        run: |
          echo "Compiling installer with latest paths.json..."
          cd src/apt-layer
          ./compile-installer.sh -o ../../install-apt-layer.sh
      - name: Verify compilation
        run: |
          echo "Verifying compiled scripts..."
          # Check apt-layer.sh
          if [[ ! -f apt-layer.sh ]]; then
            echo "❌ apt-layer compilation failed - apt-layer.sh not found"
            exit 1
          fi
          if [[ ! -x apt-layer.sh ]]; then
            echo "❌ apt-layer script is not executable"
            exit 1
          fi
          # Test apt-layer.sh syntax
          if bash -n apt-layer.sh; then
            echo "✅ apt-layer.sh syntax validation passed"
          else
            echo "❌ apt-layer.sh syntax validation failed"
            exit 1
          fi
          # Check apt-layer.sh file size
          file_size=$(du -h apt-layer.sh | cut -f1)
          echo "📦 apt-layer.sh size: $file_size"
          # Count apt-layer.sh lines
          line_count=$(wc -l < apt-layer.sh)
          echo "📄 apt-layer.sh lines: $line_count"
          # Check install-apt-layer.sh
          if [[ ! -f install-apt-layer.sh ]]; then
            echo "❌ Installer compilation failed - install-apt-layer.sh not found"
            exit 1
          fi
          if [[ ! -x install-apt-layer.sh ]]; then
            echo "❌ Installer script is not executable"
            exit 1
          fi
          # Test install-apt-layer.sh syntax
          if bash -n install-apt-layer.sh; then
            echo "✅ install-apt-layer.sh syntax validation passed"
          else
            echo "❌ install-apt-layer.sh syntax validation failed"
            exit 1
          fi
          # Check install-apt-layer.sh file size
          installer_size=$(du -h install-apt-layer.sh | cut -f1)
          echo "📦 install-apt-layer.sh size: $installer_size"
          # Count install-apt-layer.sh lines
          installer_lines=$(wc -l < install-apt-layer.sh)
          echo "📄 install-apt-layer.sh lines: $installer_lines"
      - name: Test basic functionality
        run: |
          echo "Testing basic functionality..."
          # Test apt-layer.sh help command
          if ./apt-layer.sh --help > /dev/null 2>&1; then
            echo "✅ apt-layer.sh help command works"
          else
            echo "❌ apt-layer.sh help command failed"
            exit 1
          fi
          # Test apt-layer.sh version info
          if ./apt-layer.sh --version > /dev/null 2>&1; then
            echo "✅ apt-layer.sh version command works"
          else
            echo "⚠️ apt-layer.sh version command not available (this is optional)"
          fi
          # Test install-apt-layer.sh help command
          if ./install-apt-layer.sh --help > /dev/null 2>&1; then
            echo "✅ install-apt-layer.sh help command works"
          else
            echo "❌ install-apt-layer.sh help command failed"
            exit 1
          fi
      - name: Create compilation report
        run: |
          echo "Creating compilation report..."
          {
            echo "# apt-layer Compilation Report"
            echo ""
            echo "**Compilation Date:** $(date)"
            echo "**Commit:** ${{ github.sha }}"
            echo "**Branch:** ${{ github.ref_name }}"
            echo ""
            echo "## File Information"
            echo ""
            echo "### apt-layer.sh"
            echo "- **File:** apt-layer.sh"
            echo "- **Size:** $(du -h apt-layer.sh | cut -f1)"
            echo "- **Lines:** $(wc -l < apt-layer.sh)"
            echo "- **Executable:** $(test -x apt-layer.sh && echo "Yes" || echo "No")"
            echo ""
            echo "### install-apt-layer.sh"
            echo "- **File:** install-apt-layer.sh"
            echo "- **Size:** $(du -h install-apt-layer.sh | cut -f1)"
            echo "- **Lines:** $(wc -l < install-apt-layer.sh)"
            echo "- **Executable:** $(test -x install-apt-layer.sh && echo "Yes" || echo "No")"
            echo ""
            echo "## Validation Results"
            echo "- **apt-layer.sh Syntax Check:** $(bash -n apt-layer.sh && echo "✅ Passed" || echo "❌ Failed")"
            echo "- **apt-layer.sh Help Command:** $(./apt-layer.sh --help > /dev/null 2>&1 && echo "✅ Works" || echo "❌ Failed")"
            echo "- **install-apt-layer.sh Syntax Check:** $(bash -n install-apt-layer.sh && echo "✅ Passed" || echo "❌ Failed")"
            echo "- **install-apt-layer.sh Help Command:** $(./install-apt-layer.sh --help > /dev/null 2>&1 && echo "✅ Works" || echo "❌ Failed")"
            echo ""
            echo "## Included Components"
            echo ""
            echo "### apt-layer.sh"
            echo "- apt-layer configuration integration"
            echo "- Transactional operations with automatic rollback"
            echo "- Traditional chroot-based layer creation"
            echo "- Container-based layer creation (Apx-style)"
            echo "- OCI export/import integration"
            echo "- Live overlay system (rpm-ostree style)"
            echo "- Bootloader integration (UEFI/GRUB/systemd-boot)"
            echo "- Atomic deployment system with rollback"
            echo "- rpm-ostree compatibility layer"
            echo "- ComposeFS backend integration"
            echo "- Dependency validation and error handling"
            echo "- Comprehensive JSON configuration system"
            echo "- Direct dpkg installation (Performance optimization)"
            echo ""
            echo "### install-apt-layer.sh"
            echo "- Latest paths.json configuration embedded"
            echo "- All installation and uninstallation functionality"
            echo "- Dependency checking and installation"
            echo "- System initialization"
            echo "- Proper error handling and validation"
            echo ""
            echo "## Ready for Distribution"
            echo "Both compiled scripts are self-contained and ready for use!"
          } > COMPILATION_REPORT.md
      - name: Upload compilation report
        uses: actions/upload-artifact@v4
        with:
          name: compilation-report
          path: COMPILATION_REPORT.md
          retention-days: 30
      - name: Upload compiled scripts
        uses: actions/upload-artifact@v4
        with:
          name: apt-layer-compiled
          path: |
            apt-layer.sh
            install-apt-layer.sh
          retention-days: 30
      - name: Commit compiled scripts (if on main/master)
        if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
        run: |
          echo "Committing compiled scripts to repository..."
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add apt-layer.sh install-apt-layer.sh
          git diff --staged --quiet || git commit -m "Auto-compile apt-layer and installer from commit ${{ github.sha }}"
          git push

.forgejo/workflows/test-compile.yml Normal file

@@ -0,0 +1,63 @@
name: Test apt-layer Compilation
on:
  workflow_dispatch: # Manual trigger only
jobs:
  test-compile:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up environment
        run: |
          echo "Setting up test environment..."
          sudo apt-get update
          sudo apt-get install -y jq dos2unix bash
      - name: Make compile script executable
        run: |
          chmod +x src/apt-layer/compile.sh
      - name: Test compilation
        run: |
          echo "Testing apt-layer compilation..."
          cd src/apt-layer
          ./compile.sh -o ../../apt-layer-test.sh
      - name: Verify test compilation
        run: |
          echo "Verifying test compilation..."
          if [[ ! -f apt-layer-test.sh ]]; then
            echo "❌ Test compilation failed"
            exit 1
          fi
          if bash -n apt-layer-test.sh; then
            echo "✅ Test compilation syntax validation passed"
          else
            echo "❌ Test compilation syntax validation failed"
            exit 1
          fi
          echo "📦 Test script size: $(du -h apt-layer-test.sh | cut -f1)"
          echo "📄 Test script lines: $(wc -l < apt-layer-test.sh)"
      - name: Test basic functionality
        run: |
          echo "Testing basic functionality..."
          if ./apt-layer-test.sh --help > /dev/null 2>&1; then
            echo "✅ Test script help command works"
          else
            echo "❌ Test script help command failed"
            exit 1
          fi
      - name: Upload test script
        uses: actions/upload-artifact@v4
        with:
          name: apt-layer-test
          path: apt-layer-test.sh
          retention-days: 7

@@ -0,0 +1,324 @@
# Particle-OS Path Management Implementation
## Overview
This document describes the implementation of a configuration-based path management system for Particle-OS, designed to eliminate hardcoded paths and provide flexible, maintainable system initialization.
## Problem Statement
The original Particle-OS implementation had several issues:
1. **Hardcoded paths** scattered throughout scriptlets
2. **Inconsistent path references** (ubuntu-ublue vs particle-os)
3. **Manual directory creation** required for system operation
4. **No centralized path configuration** for easy customization
5. **Missing initialization commands** for system setup
## Solution Implementation
### 1. Configuration-Based Path System
#### New Configuration File: `src/apt-layer/config/paths.json`
```json
{
  "system_paths": {
    "root_dir": "/var/lib/apt-layer",
    "config_dir": "/usr/local/etc/apt-layer",
    "log_dir": "/var/log/apt-layer",
    "cache_dir": "/var/cache/apt-layer",
    "temp_dir": "/tmp/apt-layer"
  },
  "live_overlay_paths": {
    "overlay_dir": "/var/lib/apt-layer/live-overlay",
    "upper_dir": "/var/lib/apt-layer/live-overlay/upper",
    "work_dir": "/var/lib/apt-layer/live-overlay/work",
    "mount_point": "/var/lib/apt-layer/live-overlay/mount",
    "state_file": "/var/lib/apt-layer/live-overlay.state",
    "package_log": "/var/log/apt-layer/live-overlay-packages.log"
  },
  "deployment_paths": {
    "deployments_dir": "/var/lib/apt-layer/deployments",
    "backups_dir": "/var/lib/apt-layer/backups",
    "images_dir": "/var/lib/apt-layer/images",
    "layers_dir": "/var/lib/apt-layer/layers"
  },
  "bootloader_paths": {
    "config_dir": "/etc/apt-layer/bootloader",
    "kargs_dir": "/etc/apt-layer/kargs",
    "efi_dir": "/boot/efi",
    "boot_dir": "/boot"
  },
  "oci_paths": {
    "containers_dir": "/var/lib/apt-layer/containers",
    "images_dir": "/var/lib/apt-layer/oci-images",
    "exports_dir": "/var/lib/apt-layer/exports"
  },
  "composefs_paths": {
    "images_dir": "/var/lib/apt-layer/composefs",
    "cache_dir": "/var/cache/apt-layer/composefs"
  },
  "fallback_paths": {
    "legacy_root": "/var/lib/ubuntu-ublue",
    "legacy_config": "/usr/local/etc/ubuntu-ublue",
    "legacy_log": "/var/log/ubuntu-ublue"
  },
  "permissions": {
    "root_dir_perms": "755",
    "config_dir_perms": "755",
    "log_dir_perms": "755",
    "cache_dir_perms": "755",
    "overlay_upper_perms": "700",
    "overlay_work_perms": "700"
  }
}
```
### 2. New System Initialization Scriptlet
#### File: `src/apt-layer/scriptlets/08-system-init.sh`
This scriptlet provides comprehensive system initialization functionality:
- **`load_system_paths()`** - Loads paths from JSON configuration with fallbacks
- **`init_particle_system()`** - Creates all necessary directories and sets permissions
- **`reinit_particle_system()`** - Force recreation of system directories
- **`rm_init_particle_system()`** - Complete system cleanup and removal
- **`validate_system_paths()`** - Validates all required directories exist
- **`show_system_status()`** - Displays comprehensive system status
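A minimal sketch of the fallback idea behind `load_system_paths()` (the real scriptlet is more thorough; the default value here is the `root_dir` documented in `paths.json` above):

```shell
# Read root_dir from paths.json via jq; fall back to the documented
# default when jq or the configuration file is unavailable.
load_system_paths() {
    config="${1:-/usr/local/etc/apt-layer/paths.json}"
    if command -v jq >/dev/null 2>&1 && [ -f "$config" ]; then
        ROOT_DIR=$(jq -r '.system_paths.root_dir' "$config")
    else
        ROOT_DIR="/var/lib/apt-layer"   # fallback default
    fi
}

load_system_paths /nonexistent/paths.json   # forces the fallback branch
echo "$ROOT_DIR"                            # -> /var/lib/apt-layer
```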
### 3. New Command-Line Interface
#### New Commands Added:
```bash
# Initialize Particle-OS system
sudo apt-layer --init
# Reinitialize Particle-OS system (force recreation)
sudo apt-layer --reinit
# Remove Particle-OS system (cleanup)
sudo apt-layer --rm-init
# Show Particle-OS system status
apt-layer --status
```
### 4. Updated Live Overlay System
The live overlay system now uses configuration-based paths:
- **Automatic directory creation** during initialization
- **Path validation** before overlay operations
- **Consistent path references** throughout the system
- **Fallback mechanisms** for backward compatibility
## Implementation Details
### Path Loading Strategy
1. **Primary**: Load from `paths.json` using `jq` if available
2. **Fallback**: Use default paths if `jq` is not available
3. **Legacy**: Support for legacy ubuntu-ublue paths during transition
### Directory Structure Created
```
/var/lib/apt-layer/
├── live-overlay/
│   ├── upper/
│   ├── work/
│   └── mount/
├── deployments/
├── backups/
├── images/
├── layers/
├── containers/
├── oci-images/
├── exports/
└── composefs/

/var/log/apt-layer/
├── apt-layer.log
└── live-overlay-packages.log

/var/cache/apt-layer/
└── composefs/

/usr/local/etc/apt-layer/
└── paths.json

/etc/apt-layer/
├── bootloader/
└── kargs/
```
### Permission Management
- **Main directories**: 755 (readable by all, writable by owner)
- **Overlay directories**: 700 (private to owner)
- **Log files**: 644 (readable by all, writable by owner)
- **Configuration files**: 644 (readable by all, writable by owner)
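One way to apply this scheme when creating directories (a sketch using `install -d`, which creates the directory and sets its mode in one step; the actual scriptlet may do this differently):

```shell
# Demonstrate the 755/700 split from the scheme above in a throwaway
# directory; install -d creates parent directories and applies -m.
demo=$(mktemp -d)
install -d -m 755 "$demo/apt-layer"
install -d -m 700 "$demo/apt-layer/live-overlay/upper"
mode=$(stat -c '%a' "$demo/apt-layer/live-overlay/upper")
echo "$mode"   # -> 700
rm -rf "$demo"
```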
## Usage Examples
### Basic System Setup
```bash
# Initialize the system
sudo apt-layer --init
# Check system status
apt-layer --status
# Start live overlay
sudo apt-layer --live-overlay start
```
### System Maintenance
```bash
# Reinitialize system (force recreation)
sudo apt-layer --reinit
# Complete system cleanup
sudo apt-layer --rm-init
```
### Path Customization
1. **Edit configuration**: Modify `src/apt-layer/config/paths.json`
2. **Recompile**: Run `./compile.sh` in `src/apt-layer/`
3. **Reinitialize**: Run `sudo apt-layer --reinit`
## Testing
### Test Script: `test-path-management.sh`
A comprehensive test script validates:
1. **Command availability** - New commands are properly integrated
2. **System initialization** - All directories are created correctly
3. **Path consistency** - Configuration-based paths work throughout
4. **Live overlay functionality** - Overlay system works with new paths
5. **System cleanup** - Removal commands work correctly
### Running Tests
```bash
# Make test script executable
chmod +x test-path-management.sh
# Run comprehensive tests
./test-path-management.sh
```
## Benefits
### 1. Maintainability
- **Centralized configuration** - All paths in one JSON file
- **Easy customization** - Modify paths without code changes
- **Version control friendly** - Configuration changes are trackable
### 2. Reliability
- **Automatic initialization** - No manual directory creation required
- **Path validation** - System checks for required directories
- **Fallback mechanisms** - Graceful degradation if configuration is missing
### 3. Flexibility
- **Customizable paths** - Easy to change system layout
- **Environment-specific configs** - Different paths for different environments
- **Legacy support** - Backward compatibility during transition
### 4. User Experience
- **Simple commands** - Clear, intuitive initialization commands
- **Status reporting** - Comprehensive system status information
- **Error handling** - Clear error messages and recovery options
## Migration Guide
### For Existing Installations
1. **Backup existing data**:
```bash
sudo cp -r /var/lib/ubuntu-ublue /var/lib/ubuntu-ublue.backup
```
2. **Update to new version**:
```bash
git pull
cd src/apt-layer
./compile.sh
```
3. **Initialize new system**:
```bash
sudo apt-layer --init
```
4. **Migrate data** (if needed):
```bash
sudo cp -r /var/lib/ubuntu-ublue.backup/* /var/lib/apt-layer/
```
5. **Verify functionality**:
```bash
apt-layer --status
```
### For New Installations
1. **Clone repository**:
```bash
git clone <repository-url>
cd particle-os-tools
```
2. **Compile system**:
```bash
cd src/apt-layer
./compile.sh
```
3. **Initialize system**:
```bash
sudo apt-layer --init
```
4. **Verify installation**:
```bash
apt-layer --status
```
## Future Enhancements
### Planned Improvements
1. **Environment-specific configurations** - Different paths for development/production
2. **Dynamic path resolution** - Runtime path determination based on environment
3. **Configuration validation** - JSON schema validation for path configurations
4. **Migration tools** - Automated migration from legacy installations
5. **Backup/restore** - System state backup and restoration capabilities
### Configuration Extensions
```json
{
"environment": "production",
"custom_paths": {
"data_dir": "/opt/particle-os/data",
"backup_dir": "/mnt/backup/particle-os"
},
"validation": {
"check_permissions": true,
"check_disk_space": true,
"min_disk_space_gb": 10
}
}
```
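If such a block were adopted, a consumer could read it along these lines (the keys are hypothetical, taken from the sketch above; the jq call falls back to the sketched default of 10 when jq is unavailable):

```shell
# Parse the proposed validation settings and compare against free space on /.
cat > /tmp/apt-layer-ext.json << 'EOF'
{ "validation": { "check_disk_space": true, "min_disk_space_gb": 10 } }
EOF
min_gb=$(jq -r '.validation.min_disk_space_gb' /tmp/apt-layer-ext.json 2>/dev/null || echo 10)
free_kb=$(df -Pk / | awk 'NR==2 {print $4}')
free_gb=$((free_kb / 1024 / 1024))
echo "need ${min_gb} GB, have ${free_gb} GB free"
```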
## Conclusion
The new path management system provides a robust, maintainable foundation for Particle-OS development. By centralizing path configuration and providing comprehensive initialization tools, the system is now more flexible, reliable, and user-friendly.
The implementation maintains backward compatibility while providing a clear migration path to the new configuration-based approach. The comprehensive testing ensures system reliability and the modular design allows for future enhancements.

File diff suppressed because it is too large

@@ -1,96 +0,0 @@
# ComposeFS Documentation
## Overview
ComposeFS is the immutable filesystem backend for Ubuntu uBlue systems. It provides the foundation for atomic, layered system updates similar to OSTree but using squashfs and overlayfs technologies.
## ComposeFS Alternative Script
The `composefs-alternative.sh` script provides:
- **Image Creation**: Create ComposeFS images from directories
- **Image Mounting**: Mount ComposeFS images for access
- **Image Management**: List, remove, and manage ComposeFS images
- **Status Information**: Get system status and image information
- **Integration API**: Clean interface for other scripts
## Key Features
- **Immutable Images**: Squashfs-based immutable filesystem images
- **Layer Support**: Multiple layers can be combined using overlayfs
- **Atomic Operations**: All operations are atomic and recoverable
- **Efficient Storage**: Deduplication and compression for space efficiency
- **Boot Integration**: Seamless integration with bootloader systems
## Documentation Files
### Core Documentation
- **[composefs-guide.md](composefs-guide.md)** - Comprehensive user guide
- **[composefs-api.md](composefs-api.md)** - API reference and command documentation
- **[composefs-architecture.md](composefs-architecture.md)** - Technical architecture details
### Technical Documentation
- **[composefs-performance.md](composefs-performance.md)** - Performance considerations and optimization
- **[composefs-troubleshooting.md](composefs-troubleshooting.md)** - Common issues and solutions
- **[composefs-migration.md](composefs-migration.md)** - Migration from other filesystem technologies
## Quick Start
```bash
# Create a ComposeFS image
composefs-alternative.sh create my-image /path/to/source
# Mount a ComposeFS image
composefs-alternative.sh mount my-image /mnt/image
# List all images
composefs-alternative.sh list-images
# Get image information
composefs-alternative.sh info my-image
# Remove an image
composefs-alternative.sh remove my-image
```
## Integration
ComposeFS integrates with:
- **apt-layer.sh**: For layer creation and management
- **bootloader-integration.sh**: For boot image registration
- **oci-integration.sh**: For OCI image conversion
- **ublue-config.sh**: For unified configuration
## Architecture
ComposeFS provides:
- **Squashfs Images**: Compressed, immutable filesystem images
- **Overlayfs Layers**: Multiple layers combined for final filesystem
- **Atomic Operations**: All operations are atomic and recoverable
- **Efficient Storage**: Deduplication and compression
- **Boot Compatibility**: Compatible with standard bootloader systems
## Performance Characteristics
- **Read Performance**: Excellent read performance due to compression
- **Write Performance**: Immutable by design, changes create new layers
- **Storage Efficiency**: High compression ratios and deduplication
- **Boot Performance**: Fast boot times with optimized images
## Security Features
- **Immutable Images**: Cannot be modified once created
- **Integrity Verification**: Optional integrity checking
- **Atomic Updates**: All-or-nothing update semantics
- **Rollback Support**: Easy rollback to previous images
## Development Status
ComposeFS is production-ready with:
- ✅ Immutable filesystem support
- ✅ Layer management
- ✅ Atomic operations
- ✅ Boot integration
- ✅ Performance optimization
- ✅ Security features
For more information, see the individual documentation files listed above.

install-apt-layer.sh Normal file

@@ -0,0 +1,475 @@
#!/bin/bash
# apt-layer Installation Script
# This script installs the apt-layer tool, its dependencies, creates necessary directories and files,
# and sets up the system to use apt-layer. If already installed, it will update the tool.
#
# Usage:
# ./install-apt-layer.sh # Install or update apt-layer
# ./install-apt-layer.sh --uninstall # Remove apt-layer and all its files
# ./install-apt-layer.sh --reinstall # Remove and reinstall (reset to default state)
# ./install-apt-layer.sh --help # Show this help message
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Functions to print colored output
print_status() {
    echo -e "${GREEN}[INFO]${NC} $1"
}
print_success() {
    echo -e "${GREEN}[OK]${NC} $1"
}
print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}
print_header() {
    echo -e "${BLUE}================================${NC}"
    echo -e "${BLUE}$1${NC}"
    echo -e "${BLUE}================================${NC}"
}
# Function to show help
show_help() {
    cat << 'EOF'
apt-layer Installation Script

This script installs the apt-layer tool, its dependencies, creates necessary directories and files,
and sets up the system to use apt-layer.

Usage:
  ./install-apt-layer.sh               # Install or update apt-layer
  ./install-apt-layer.sh --uninstall   # Remove apt-layer and all its files
  ./install-apt-layer.sh --reinstall   # Remove and reinstall (reset to default state)
  ./install-apt-layer.sh --help        # Show this help message

What this script does:
  - Downloads the latest apt-layer.sh from the repository
  - Installs required dependencies (jq, dos2unix, etc.)
  - Creates necessary directories (/var/lib/apt-layer, /var/log/apt-layer, etc.)
  - Sets up configuration files
  - Makes apt-layer executable and available system-wide
  - Initializes the apt-layer system

Dependencies:
  - curl or wget (for downloading)
  - jq (for JSON processing)
  - dos2unix (for Windows line ending conversion)
  - sudo (for system installation)
EOF
}
# Function to check if running as root
check_root() {
    if [[ $EUID -eq 0 ]]; then
        print_error "This script should not be run as root. Use sudo for specific commands."
        exit 1
    fi
}
# Function to check dependencies
check_dependencies() {
    print_status "Checking dependencies..."
    local missing_deps=()
    # Check for curl or wget; queue "curl" specifically, since
    # "curl or wget" is not a valid package name
    if ! command -v curl >/dev/null 2>&1 && ! command -v wget >/dev/null 2>&1; then
        missing_deps+=("curl")
    fi
    # Check for jq
    if ! command -v jq >/dev/null 2>&1; then
        missing_deps+=("jq")
    fi
    # Check for dos2unix
    if ! command -v dos2unix >/dev/null 2>&1; then
        missing_deps+=("dos2unix")
    fi
    if [[ ${#missing_deps[@]} -gt 0 ]]; then
        print_warning "Missing required dependencies: ${missing_deps[*]}"
        print_status "Installing missing dependencies..."
        # Try to install dependencies
        if command -v apt-get >/dev/null 2>&1; then
            sudo apt-get update
            sudo apt-get install -y "${missing_deps[@]}"
        elif command -v dnf >/dev/null 2>&1; then
            sudo dnf install -y "${missing_deps[@]}"
        elif command -v yum >/dev/null 2>&1; then
            sudo yum install -y "${missing_deps[@]}"
        else
            print_error "Could not automatically install dependencies. Please install manually:"
            print_error "  ${missing_deps[*]}"
            exit 1
        fi
    fi
    print_success "All dependencies satisfied"
}
# Function to download apt-layer
download_apt_layer() {
    print_status "Downloading apt-layer..."
    local download_url="https://git.raines.xyz/robojerk/particle-os-tools/raw/branch/main/apt-layer.sh"
    local temp_file="/tmp/apt-layer.sh"
    # Download using curl or wget
    if command -v curl >/dev/null 2>&1; then
        if curl -L -o "$temp_file" "$download_url"; then
            print_success "Downloaded apt-layer using curl"
        else
            print_error "Failed to download apt-layer using curl"
            return 1
        fi
    elif command -v wget >/dev/null 2>&1; then
        if wget -O "$temp_file" "$download_url"; then
            print_success "Downloaded apt-layer using wget"
        else
            print_error "Failed to download apt-layer using wget"
            return 1
        fi
    else
        print_error "No download tool available (curl or wget)"
        return 1
    fi
    # Verify the downloaded file
    if [[ ! -f "$temp_file" ]] || [[ ! -s "$temp_file" ]]; then
        print_error "Downloaded file is empty or missing"
        return 1
    fi
    # Convert line endings if needed
    if command -v dos2unix >/dev/null 2>&1; then
        dos2unix "$temp_file" 2>/dev/null || true
    fi
    # Make executable
    chmod +x "$temp_file"
    print_success "apt-layer downloaded and prepared"
}
# Function to install apt-layer
install_apt_layer() {
print_status "Installing apt-layer..."
local temp_file="/tmp/apt-layer.sh"
local install_dir="/usr/local/bin"
local config_dir="/usr/local/etc/apt-layer"
# Create installation directory if it doesn't exist
sudo mkdir -p "$install_dir"
# Install apt-layer
sudo cp "$temp_file" "$install_dir/apt-layer"
sudo chmod +x "$install_dir/apt-layer"
# Create configuration directory
sudo mkdir -p "$config_dir"
# Create paths.json configuration
sudo tee "$config_dir/paths.json" >/dev/null << 'EOF'
{
"apt_layer_paths": {
"description": "apt-layer system path configuration",
"version": "1.0",
"main_directories": {
"workspace": {
"path": "/var/lib/apt-layer",
"description": "Main apt-layer workspace directory",
"permissions": "755",
"owner": "root:root"
},
"logs": {
"path": "/var/log/apt-layer",
"description": "apt-layer log files directory",
"permissions": "755",
"owner": "root:root"
},
"cache": {
"path": "/var/cache/apt-layer",
"description": "apt-layer cache directory",
"permissions": "755",
"owner": "root:root"
}
},
"workspace_subdirectories": {
"build": {
"path": "/var/lib/apt-layer/build",
"description": "Build artifacts and temporary files",
"permissions": "755",
"owner": "root:root"
},
"live_overlay": {
"path": "/var/lib/apt-layer/live-overlay",
"description": "Live overlay system for temporary changes",
"permissions": "755",
"owner": "root:root"
},
"composefs": {
"path": "/var/lib/apt-layer/composefs",
"description": "ComposeFS layers and metadata",
"permissions": "755",
"owner": "root:root"
},
"ostree_commits": {
"path": "/var/lib/apt-layer/ostree-commits",
"description": "OSTree commit history and metadata",
"permissions": "755",
"owner": "root:root"
},
"deployments": {
"path": "/var/lib/apt-layer/deployments",
"description": "Deployment directories and state",
"permissions": "755",
"owner": "root:root"
},
"history": {
"path": "/var/lib/apt-layer/history",
"description": "Deployment history and rollback data",
"permissions": "755",
"owner": "root:root"
},
"bootloader": {
"path": "/var/lib/apt-layer/bootloader",
"description": "Bootloader state and configuration",
"permissions": "755",
"owner": "root:root"
},
"transaction_state": {
"path": "/var/lib/apt-layer/transaction-state",
"description": "Transaction state and temporary data",
"permissions": "755",
"owner": "root:root"
}
},
"files": {
"deployment_db": {
"path": "/var/lib/apt-layer/deployments.json",
"description": "Deployment database file",
"permissions": "644",
"owner": "root:root"
},
"current_deployment": {
"path": "/var/lib/apt-layer/current-deployment",
"description": "Current deployment identifier file",
"permissions": "644",
"owner": "root:root"
},
"pending_deployment": {
"path": "/var/lib/apt-layer/pending-deployment",
"description": "Pending deployment identifier file",
"permissions": "644",
"owner": "root:root"
},
"transaction_log": {
"path": "/var/lib/apt-layer/transaction.log",
"description": "Transaction log file",
"permissions": "644",
"owner": "root:root"
}
},
"fallback_paths": {
"description": "Fallback paths for different environments",
"wsl": {
"workspace": "/mnt/wsl/apt-layer",
"logs": "/mnt/wsl/apt-layer/logs",
"cache": "/mnt/wsl/apt-layer/cache"
},
"container": {
"workspace": "/tmp/apt-layer",
"logs": "/tmp/apt-layer/logs",
"cache": "/tmp/apt-layer/cache"
},
"test": {
"workspace": "/tmp/apt-layer-test",
"logs": "/tmp/apt-layer-test/logs",
"cache": "/tmp/apt-layer-test/cache"
}
},
"environment_variables": {
"description": "Environment variable mappings",
"APT_LAYER_WORKSPACE": "workspace",
"APT_LAYER_LOG_DIR": "logs",
"APT_LAYER_CACHE_DIR": "cache",
"BUILD_DIR": "workspace_subdirectories.build",
"LIVE_OVERLAY_DIR": "workspace_subdirectories.live_overlay",
"COMPOSEFS_DIR": "workspace_subdirectories.composefs",
"OSTREE_COMMITS_DIR": "workspace_subdirectories.ostree_commits",
"DEPLOYMENTS_DIR": "workspace_subdirectories.deployments",
"HISTORY_DIR": "workspace_subdirectories.history",
"BOOTLOADER_STATE_DIR": "workspace_subdirectories.bootloader",
"TRANSACTION_STATE": "workspace_subdirectories.transaction_state",
"DEPLOYMENT_DB": "files.deployment_db",
"CURRENT_DEPLOYMENT_FILE": "files.current_deployment",
"PENDING_DEPLOYMENT_FILE": "files.pending_deployment",
"TRANSACTION_LOG": "files.transaction_log"
}
}
}
EOF
    # Set proper permissions
    sudo chmod 644 "$config_dir/paths.json"
    # Initialize apt-layer system
    print_status "Initializing apt-layer system..."
    if sudo apt-layer --init; then
        print_success "apt-layer system initialized"
    else
        print_warning "apt-layer system initialization failed, but installation completed"
    fi
    # Clean up temporary file
    rm -f "$temp_file"
    print_success "apt-layer installed successfully"
    print_status "You can now use 'apt-layer --help' to see available commands"
}
# Function to uninstall apt-layer
uninstall_apt_layer() {
    print_status "Uninstalling apt-layer..."
    local install_dir="/usr/local/bin"
    local config_dir="/usr/local/etc/apt-layer"
    # Remove apt-layer binary
    if [[ -f "$install_dir/apt-layer" ]]; then
        sudo rm -f "$install_dir/apt-layer"
        print_status "Removed apt-layer binary"
    fi
    # Remove configuration directory
    if [[ -d "$config_dir" ]]; then
        sudo rm -rf "$config_dir"
        print_status "Removed configuration directory"
    fi
    # Remove apt-layer data directories (optional - ask user)
    if [[ -d "/var/lib/apt-layer" ]] || [[ -d "/var/log/apt-layer" ]] || [[ -d "/var/cache/apt-layer" ]]; then
        echo -e "${YELLOW}Do you want to remove all apt-layer data directories? (y/N)${NC}"
        read -r response
        if [[ "$response" =~ ^[Yy]$ ]]; then
            sudo rm -rf "/var/lib/apt-layer" "/var/log/apt-layer" "/var/cache/apt-layer"
            print_status "Removed all apt-layer data directories"
        else
            print_status "Keeping apt-layer data directories"
        fi
    fi
    print_success "apt-layer uninstalled successfully"
}
# Function to reinstall apt-layer
reinstall_apt_layer() {
    print_status "Reinstalling apt-layer..."
    # Uninstall first
    uninstall_apt_layer
    # Download and install again (install_apt_layer expects a freshly downloaded temp file)
    if download_apt_layer && install_apt_layer; then
        print_success "apt-layer reinstalled successfully"
    else
        print_error "apt-layer reinstallation failed"
        return 1
    fi
}
# Function to check if apt-layer is installed
is_apt_layer_installed() {
    command -v apt-layer >/dev/null 2>&1
}
# Function to check if apt-layer is up to date
check_for_updates() {
    print_status "Checking for updates..."
    if ! is_apt_layer_installed; then
        return 0 # Not installed, so no update needed
    fi
    # For now, we'll always download the latest version
    # In the future, this could check version numbers or timestamps
    return 1 # Update needed
}
# Main installation function
main_install() {
    print_header "apt-layer Installation"
    # Check if running as root
    check_root
    # Check dependencies
    check_dependencies
    # Check if already installed
    if is_apt_layer_installed; then
        print_status "apt-layer is already installed"
        if check_for_updates; then
            print_status "apt-layer is up to date"
            return 0
        else
            print_status "Updating apt-layer..."
        fi
    else
        print_status "Installing apt-layer..."
    fi
    # Download and install
    if download_apt_layer && install_apt_layer; then
        print_success "apt-layer installation completed successfully"
        print_status "You can now use 'apt-layer --help' to see available commands"
        return 0
    else
        print_error "apt-layer installation failed"
        return 1
    fi
}
# Main function
main() {
    case "${1:-}" in
        --help|-h)
            show_help
            exit 0
            ;;
        --uninstall)
            print_header "apt-layer Uninstallation"
            uninstall_apt_layer
            exit 0
            ;;
        --reinstall)
            print_header "apt-layer Reinstallation"
            reinstall_apt_layer
            exit 0
            ;;
        "")
            main_install
            exit $?
            ;;
        *)
            print_error "Unknown option: $1"
            show_help
            exit 1
            ;;
    esac
}
# Run main function with all arguments
main "$@"

# AKMODS and DKMS Integration Architecture
## Overview
This document clarifies the role of DKMS in the AKMODS architecture, addressing the relationship between AKMODS (which creates pre-built packages) and DKMS (which builds modules on-demand).
## AKMODS vs DKMS: Complementary Approaches
### AKMODS Approach (Primary)
- **Pre-built Packages**: AKMODS creates fully compiled `.deb` packages with kernel modules
- **OCI Distribution**: Packages are distributed via container registry
- **Atomic Installation**: Direct installation via `dpkg` with atomic operations
- **No Runtime Compilation**: Modules are pre-compiled and ready to use
### DKMS Approach (Fallback)
- **On-demand Building**: DKMS builds modules when needed on the target system
- **Source Distribution**: Distributes source code and build instructions
- **Runtime Compilation**: Compiles modules against the running kernel
- **Traditional Method**: Standard approach for kernel modules
## AKMODS Architecture Clarification
### Build-Time DKMS Usage
```bash
# AKMODS build container uses DKMS for compilation
FROM ubuntu:24.04
# Install build dependencies, including DKMS.
# dkms and dh-dkms are build-time only; they are not runtime
# dependencies of the final package. (Comments cannot follow a
# trailing backslash, so they live here instead.)
RUN apt-get update && apt-get install -y \
    build-essential \
    linux-headers-generic \
    dkms \
    debhelper \
    dh-dkms \
    kernel-package \
    fakeroot
# AKMODS uses DKMS to compile modules in container
akmods-apt-daemon build-nvidia-akmods nvidia-driver-535 6.8.0-25-generic
├── 1. Extract .run file contents
├── 2. Apply kernel compatibility patches
├── 3. Use DKMS to compile modules # ← DKMS used here
├── 4. Extract compiled .ko files
├── 5. Create DEB package
└── 6. Upload to registry
```
### Runtime Package Structure
```bash
# Final AKMODS package (NO DKMS runtime dependency)
kmod-nvidia-535-6.8.0-25-generic_535.154.05-1_amd64.deb
├── control.tar.gz
│ ├── control # Package metadata
│ ├── postinst # Post-installation script
│ └── prerm # Pre-removal script
├── data.tar.gz
│ └── lib/modules/6.8.0-25-generic/extra/
│ ├── nvidia.ko # ← Pre-compiled kernel module
│ ├── nvidia-modeset.ko
│ ├── nvidia-drm.ko
│ ├── nvidia-uvm.ko
│ └── nvidia-peermem.ko
└── debian-binary
```
### Corrected debian/control
```
# Corrected debian/control (DKMS runtime dependency removed)
Package: kmod-nvidia-535
Version: 535.154.05-1
Architecture: amd64
# Runtime dependencies only, no dkms:
#   linux-headers-generic : module loading verification
#   nvidia-settings       : user-space utilities
#   nvidia-prime          : GPU switching support
Depends: linux-headers-generic, nvidia-settings, nvidia-prime
Description: NVIDIA proprietary drivers (535.154.05)
 Pre-compiled kernel modules for NVIDIA graphics drivers version 535.154.05.
 Supports RTX 40/30 series and compatible hardware.
```
## AKMODS Build Process with DKMS
### Step 1: DKMS Source Package Creation
```bash
# AKMODS creates DKMS-compatible source structure
akmods-apt-daemon create-dkms-source nvidia-driver-535 535.154.05
├── 1. Extract .run file contents
├── 2. Create DKMS source structure
│ ├── /usr/src/nvidia-driver-535-535.154.05/
│ │ ├── dkms.conf
│ │ ├── Makefile
│ │ ├── nvidia.c
│ │ ├── nvidia-modeset.c
│ │ └── ...
│ └── Apply kernel compatibility patches
└── 3. Prepare for DKMS compilation
```
### Step 2: DKMS Compilation in Container
```bash
# AKMODS uses DKMS to compile modules
akmods-apt-daemon compile-with-dkms nvidia-driver-535 6.8.0-25-generic
├── 1. Set up DKMS environment
│ ├── Install kernel headers
│ ├── Configure DKMS
│ └── Set up build environment
├── 2. DKMS compilation
│ ├── dkms add nvidia-driver-535/535.154.05
│ ├── dkms build nvidia-driver-535/535.154.05 -k 6.8.0-25-generic
│ └── dkms install nvidia-driver-535/535.154.05 -k 6.8.0-25-generic
├── 3. Extract compiled modules
│ ├── Copy .ko files from DKMS build directory
│ ├── Verify module compilation
│ └── Generate module metadata
└── 4. Clean up DKMS environment
├── dkms remove nvidia-driver-535/535.154.05 -k 6.8.0-25-generic
└── Clean build artifacts
```
### Step 3: Package Creation (No DKMS)
```bash
# AKMODS creates DEB package without DKMS dependency
akmods-apt-daemon create-deb-package nvidia-driver-535
├── 1. Create DEB package structure
│ ├── Package compiled .ko files
│ ├── Add post-installation scripts
│ ├── Generate package metadata
│ └── Create package manifest
├── 2. Add installation scripts
│ ├── postinst: Load modules, update initramfs
│ ├── prerm: Unload modules, cleanup
│ └── triggers: Handle kernel updates
└── 3. Final package
├── kmod-nvidia-535-6.8.0-25-generic_535.154.05-1_amd64.deb
└── Ready for distribution
```
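The naming convention used above can be generated mechanically. A minimal sketch, assuming the `kmod-<driver>-<kernel>_<version>-<rev>_<arch>.deb` pattern from the examples; the `build_package_name` helper is illustrative, not part of the daemon's actual API:

```python
def build_package_name(driver: str, driver_version: str,
                       kernel: str, arch: str = "amd64",
                       revision: int = 1) -> str:
    """Construct the .deb filename for a pre-built kernel-module package.

    Mirrors the kmod-<driver>-<kernel>_<version>-<rev>_<arch>.deb pattern
    shown in the package structure above.
    """
    # "nvidia-driver-535" -> short name "nvidia-535"
    short = driver.replace("driver-", "")
    return f"kmod-{short}-{kernel}_{driver_version}-{revision}_{arch}.deb"

print(build_package_name("nvidia-driver-535", "535.154.05", "6.8.0-25-generic"))
# -> kmod-nvidia-535-6.8.0-25-generic_535.154.05-1_amd64.deb
```

A deterministic name like this lets the daemon check the registry for an existing build before compiling anything.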
## AKMODS vs DKMS: When to Use Each
### AKMODS (Recommended)
```bash
# Use AKMODS for:
# - Pre-built, tested packages
# - Fast installation
# - Atomic operations
# - Immutable system compatibility
apt-layer --akmods-install nvidia-driver-535
├── Download pre-built package from registry
├── Install via dpkg
├── Load kernel modules
└── Configure system
```
### DKMS (Fallback)
```bash
# Use DKMS for:
# - Custom kernel modules
# - Experimental drivers
# - Hardware not supported by AKMODS
# - Development and testing
apt-layer --dkms-install custom-module
├── Install DKMS source package
├── Build module on target system
├── Install compiled module
└── Configure module loading
```
## Hybrid Approach: AKMODS with DKMS Fallback
### Primary: AKMODS Pre-built Packages
```bash
# AKMODS primary workflow
akmods-apt-daemon install-nvidia nvidia-driver-535
├── 1. Check for pre-built package
│ ├── Query OCI registry
│ ├── Check kernel compatibility
│ └── Verify package availability
├── 2. Install pre-built package
│ ├── Download kmod-nvidia-535 package
│ ├── Install via dpkg
│ ├── Load kernel modules
│ └── Configure system
└── 3. Verify installation
├── Test module loading
├── Verify GPU functionality
└── Log success
```
### Fallback: DKMS On-demand Building
```bash
# DKMS fallback when no pre-built package available
akmods-apt-daemon install-nvidia-fallback nvidia-driver-535
├── 1. Check for pre-built package (failed)
├── 2. Fall back to DKMS
│ ├── Install nvidia-driver-535-dkms package
│ ├── Build module with DKMS
│ ├── Install compiled module
│ └── Configure system
├── 3. Create AKMODS package for future use
│ ├── Extract compiled modules
│ ├── Create DEB package
│ ├── Upload to registry
│ └── Update package index
└── 4. Verify installation
├── Test module loading
├── Verify GPU functionality
└── Log success
```
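The primary/fallback decision above reduces to a small piece of control flow. A hedged sketch with injected stand-ins for the registry client and DKMS wrapper; neither callable is a real daemon API:

```python
def install_nvidia(driver: str, kernel: str,
                   registry_lookup, dkms_build) -> str:
    """Try the pre-built AKMODS package first; fall back to DKMS.

    registry_lookup and dkms_build are hypothetical injected callables
    standing in for the daemon's registry client and DKMS wrapper.
    Returns a string describing which path was taken.
    """
    package = registry_lookup(driver, kernel)
    if package is not None:
        return f"prebuilt:{package}"        # fast path: install via dpkg
    # No pre-built package available: build on the target with DKMS.
    # (A real daemon would then package and publish the result.)
    module = dkms_build(driver, kernel)
    return f"dkms:{module}"

# Simulated run: the registry has no package for this kernel yet
taken = install_nvidia("nvidia-driver-535", "6.8.0-25-generic",
                       registry_lookup=lambda d, k: None,
                       dkms_build=lambda d, k: f"{d}/{k}")
print(taken)
```

The injection makes the decision logic testable without a registry or a compiler on hand.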
## AKMODS Configuration for DKMS Integration
### AKMODS Configuration
```yaml
# akmods.yaml with DKMS integration
module: nvidia-driver-535
version: "535.154.05"
description: "NVIDIA proprietary drivers (RTX 40/30 series)"
build:
  method: "dkms"                 # use DKMS for compilation
  dependencies:
    - "build-essential"
    - "linux-headers-generic"
    - "dkms"                     # build-time dependency only
    - "fakeroot"
    - "debhelper"
  dkms_config:
    enabled: true
    source_structure: "nvidia-run"
    patch_method: "kernel-compatibility"
    build_timeout: 3600
runtime_dependencies:            # runtime dependencies (no DKMS)
  - "linux-headers-generic"
  - "nvidia-settings"
  - "nvidia-prime"
distribution:
  method: "pre-built-deb"        # distribute as pre-built package
  registry: "registry.particle-os.org/akmods"
  fallback: "dkms"               # fall back to DKMS if needed
```
### DKMS Configuration File
```bash
# dkms.conf for AKMODS build process
PACKAGE_NAME="nvidia-driver-535"
PACKAGE_VERSION="535.154.05"
BUILT_MODULE_NAME[0]="nvidia"
BUILT_MODULE_NAME[1]="nvidia-modeset"
BUILT_MODULE_NAME[2]="nvidia-drm"
BUILT_MODULE_NAME[3]="nvidia-uvm"
BUILT_MODULE_NAME[4]="nvidia-peermem"
DEST_MODULE_LOCATION[0]="/kernel/drivers/video"
DEST_MODULE_LOCATION[1]="/kernel/drivers/video"
DEST_MODULE_LOCATION[2]="/kernel/drivers/gpu/drm"
DEST_MODULE_LOCATION[3]="/kernel/drivers/gpu"
DEST_MODULE_LOCATION[4]="/kernel/drivers/gpu"
AUTOINSTALL="yes"
MAKE[0]="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
CLEAN="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build clean"
```
## Benefits of This Architecture
### 1. **Best of Both Worlds**
- **AKMODS**: Fast, reliable, pre-built packages
- **DKMS**: Flexible, on-demand building when needed
### 2. **Reduced Complexity**
- **Build-time**: Use DKMS for reliable compilation
- **Runtime**: Simple package installation without DKMS overhead
### 3. **Fallback Safety**
- **Primary**: AKMODS pre-built packages
- **Fallback**: DKMS when pre-built packages unavailable
### 4. **Community Compatibility**
- **Leverage**: Existing DKMS infrastructure and expertise
- **Extend**: With AKMODS pre-built distribution
## Conclusion
The AKMODS architecture uses DKMS as a **build-time tool** for reliable kernel module compilation, but distributes **pre-built packages** that don't require DKMS at runtime. This approach:
1. **Leverages DKMS expertise** for compilation
2. **Eliminates runtime DKMS dependency** for faster installation
3. **Provides fallback mechanisms** when pre-built packages aren't available
4. **Maintains compatibility** with existing DKMS workflows
This clarification resolves the ambiguity identified by the reviewer and provides a clear, practical architecture that combines the strengths of both approaches.

src/akmods-apt/nvidia.md
# NVIDIA Driver Preparation with AKMODS
## Overview
This document describes how Particle-OS AKMODS prepares and distributes NVIDIA drivers, addressing the challenge of outdated drivers in Ubuntu/Debian repositories by providing automated, up-to-date kernel module packages.
## NVIDIA Driver Sources
### 1. Ubuntu/Debian Repository Drivers
- **Current State**: Often outdated (e.g., 470 series on Ubuntu 24.04)
- **Limitations**: Limited to older GPU generations
- **Update Cycle**: Tied to Ubuntu release schedule
- **AKMODS Approach**: Use as fallback for legacy hardware
### 2. Graphics Drivers PPA
- **Source**: `ppa:graphics-drivers/ppa`
- **Advantage**: More recent driver versions
- **Limitation**: Still tied to Ubuntu packaging schedule
- **AKMODS Approach**: Extract and repackage as AKMODS modules
### 3. Official NVIDIA .run Files
- **Source**: NVIDIA official website
- **Advantage**: Latest driver versions available
- **Challenge**: Manual installation, kernel compatibility issues
- **AKMODS Approach**: Automated extraction and packaging
### 4. nvidia-open Drivers
- **Source**: Open source NVIDIA drivers
- **Advantage**: Better for Turing+ hardware, open source
- **Limitation**: Limited hardware support
- **AKMODS Approach**: Primary choice for supported hardware
## AKMODS NVIDIA Driver Strategy
### Driver Selection Matrix
```yaml
# NVIDIA Driver Selection Strategy
hardware_generation:
  rtx_50_series:
    primary: "nvidia-open"
    fallback: "nvidia-driver-550"
    source: "official-run"
  rtx_40_series:
    primary: "nvidia-open"
    fallback: "nvidia-driver-535"
    source: "official-run"
  rtx_30_series:
    primary: "nvidia-open"
    fallback: "nvidia-driver-535"
    source: "official-run"
  rtx_20_series:
    primary: "nvidia-driver-535"
    fallback: "nvidia-driver-470"
    source: "ppa"
  gtx_16_series:
    primary: "nvidia-driver-535"
    fallback: "nvidia-driver-470"
    source: "ppa"
  gtx_10_series:
    primary: "nvidia-driver-470"
    fallback: "nvidia-driver-390"
    source: "repository"
  legacy_series:
    primary: "nvidia-driver-390"
    fallback: "nouveau"
    source: "repository"
```
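For illustration, the matrix above can be consumed as a plain lookup table. A minimal sketch; the `choose_driver` helper and the table name are hypothetical, with values copied from the YAML:

```python
# Driver-selection matrix from the YAML above, as (primary, fallback) pairs.
DRIVER_MATRIX = {
    "rtx_50_series": ("nvidia-open", "nvidia-driver-550"),
    "rtx_40_series": ("nvidia-open", "nvidia-driver-535"),
    "rtx_30_series": ("nvidia-open", "nvidia-driver-535"),
    "rtx_20_series": ("nvidia-driver-535", "nvidia-driver-470"),
    "gtx_16_series": ("nvidia-driver-535", "nvidia-driver-470"),
    "gtx_10_series": ("nvidia-driver-470", "nvidia-driver-390"),
    "legacy_series": ("nvidia-driver-390", "nouveau"),
}

def choose_driver(hardware_generation: str, prefer_fallback: bool = False) -> str:
    """Return the primary driver for a generation, or its fallback on request."""
    primary, fallback = DRIVER_MATRIX[hardware_generation]
    return fallback if prefer_fallback else primary

print(choose_driver("rtx_40_series"))                       # nvidia-open
print(choose_driver("rtx_20_series", prefer_fallback=True)) # nvidia-driver-470
```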
## AKMODS NVIDIA Driver Preparation
### 1. Official NVIDIA .run File Processing
#### AKMODS Source Package Structure
```bash
akmod-nvidia-535-source/
├── debian/
│ ├── control # Build dependencies
│ ├── rules # Build instructions
│ ├── akmods.conf # AKMODS configuration
│ └── patches/ # Kernel compatibility patches
├── src/
│ ├── nvidia-driver-535/ # Extracted driver source
│ ├── nvidia-installer # Modified installer
│ └── kernel-patches/ # Kernel-specific patches
├── scripts/
│ ├── extract-driver.sh # Extract .run file
│ ├── apply-patches.sh # Apply kernel patches
│ └── build-modules.sh # Build kernel modules
└── akmods.yaml # Module configuration
```
#### Driver Extraction Process
```bash
# AKMODS driver extraction workflow
akmods-apt-daemon extract-nvidia nvidia-driver-535
├── 1. Download .run file
│ ├── Fetch from NVIDIA servers
│ ├── Verify checksum
│ └── Extract to temporary directory
├── 2. Extract Driver Components
│ ├── Extract kernel modules
│ ├── Extract user-space libraries
│ ├── Extract configuration files
│ └── Extract documentation
├── 3. Prepare AKMODS Package
│ ├── Create source package structure
│ ├── Apply kernel compatibility patches
│ ├── Generate build instructions
│ └── Create module configuration
└── 4. Build AKMODS Module
├── Compile kernel modules
├── Create DEB package
├── Sign package
└── Upload to registry
```
#### AKMODS Configuration for NVIDIA Drivers
```yaml
# akmods.yaml for NVIDIA drivers
module: nvidia-driver-535
version: "535.154.05"
description: "NVIDIA proprietary drivers (RTX 40/30 series)"
source:
  type: "nvidia-run"
  url: "https://us.download.nvidia.com/XFree86/Linux-x86_64/535.154.05/NVIDIA-Linux-x86_64-535.154.05.run"
  checksum: "sha256:abc123..."
build:
  dependencies:
    - "build-essential"
    - "linux-headers-generic"
    - "dkms"
    - "fakeroot"
    - "debhelper"
  patches:
    - "kernel-5.15.patch"
    - "kernel-5.16.patch"
    - "kernel-5.17.patch"
modules:
  - "nvidia"
  - "nvidia-modeset"
  - "nvidia-drm"
  - "nvidia-uvm"
  - "nvidia-peermem"
post_install:
  - "update-initramfs -u"
  - "update-grub"
supported_kernels:
  - "5.15.*"
  - "5.16.*"
  - "5.17.*"
  - "5.18.*"
  - "5.19.*"
```
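The `supported_kernels` entries are shell-style globs, so a compatibility check can be sketched with Python's `fnmatch`. The `kernel_supported` helper is illustrative, not daemon code, and assumes the pattern list above:

```python
import fnmatch

# supported_kernels globs copied from the akmods.yaml above
SUPPORTED_KERNELS = ["5.15.*", "5.16.*", "5.17.*", "5.18.*", "5.19.*"]

def kernel_supported(kernel: str, patterns=SUPPORTED_KERNELS) -> bool:
    """Check a running kernel version string against the supported_kernels globs."""
    return any(fnmatch.fnmatch(kernel, pattern) for pattern in patterns)

print(kernel_supported("5.15.0-91-generic"))   # True
print(kernel_supported("6.8.0-25-generic"))    # False
```

A check like this belongs early in the build pipeline, so unsupported kernels are rejected before any compilation starts.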
### 2. PPA Driver Integration
#### PPA Driver Extraction
```bash
# AKMODS PPA driver processing
akmods-apt-daemon extract-ppa nvidia-driver-535
├── 1. Add PPA Repository
│ ├── Add graphics-drivers PPA
│ ├── Update package lists
│ └── Download source packages
├── 2. Extract Source Packages
│ ├── Download .dsc files
│ ├── Extract source code
│ ├── Extract patches
│ └── Extract build instructions
├── 3. Convert to AKMODS
│ ├── Create AKMODS source package
│ ├── Adapt build instructions
│ ├── Apply kernel patches
│ └── Generate module configuration
└── 4. Build and Distribute
├── Build kernel modules
├── Create DEB packages
├── Sign packages
└── Upload to registry
```
#### PPA Integration Configuration
```yaml
# PPA integration configuration
ppa_sources:
  graphics_drivers:
    url: "ppa:graphics-drivers/ppa"
    packages:
      - "nvidia-driver-535"
      - "nvidia-driver-470"
      - "nvidia-driver-390"
  nvidia_open:
    url: "ppa:graphics-drivers/ppa"
    packages:
      - "nvidia-open"
      - "nvidia-open-dkms"
```
### 3. nvidia-open Driver Support
#### nvidia-open Overview
- **Purpose**: Open source NVIDIA drivers for Turing+ hardware
- **Advantages**: Better performance, open source, Wayland support
- **Hardware Support**: RTX 20/30/40/50 series, GTX 16 series
- **Limitations**: No CUDA support, limited legacy hardware support
#### AKMODS nvidia-open Configuration
```yaml
# nvidia-open AKMODS configuration
module: nvidia-open
version: "latest"
description: "Open source NVIDIA drivers (Turing+)"
source:
  type: "ppa"
  url: "ppa:graphics-drivers/ppa"
  package: "nvidia-open-dkms"
build:
  dependencies:
    - "build-essential"
    - "linux-headers-generic"
    - "dkms"
    - "git"
    - "mesa-utils"
modules:
  - "nvidia"
  - "nvidia-modeset"
  - "nvidia-drm"
features:
  - "wayland-support"
  - "gpu-reset"
  - "open-source"
  - "no-cuda"
supported_hardware:
  - "RTX 50 Series"
  - "RTX 40 Series"
  - "RTX 30 Series"
  - "RTX 20 Series"
  - "GTX 16 Series"
kernel_versions:
  - "5.15.*"
  - "5.16.*"
  - "5.17.*"
  - "5.18.*"
  - "5.19.*"
  - "6.0.*"
  - "6.1.*"
  - "6.2.*"
  - "6.3.*"
  - "6.4.*"
  - "6.5.*"
  - "6.6.*"
  - "6.7.*"
  - "6.8.*"
```
#### nvidia-open Build Process
```bash
# nvidia-open AKMODS build process
akmods-apt-daemon build nvidia-open 6.8.0-25-generic
├── 1. Source Preparation
│ ├── Clone nvidia-open repository
│ ├── Checkout stable branch
│ ├── Apply kernel patches
│ └── Configure build environment
├── 2. Module Compilation
│ ├── Compile nvidia.ko
│ ├── Compile nvidia-modeset.ko
│ ├── Compile nvidia-drm.ko
│ └── Generate module metadata
├── 3. Package Creation
│ ├── Create DEB package structure
│ ├── Add post-installation scripts
│ ├── Configure module loading
│ └── Sign package
└── 4. Distribution
├── Upload to OCI registry
├── Update package index
└── Notify completion
```
## AKMODS NVIDIA Driver Management
### 1. Automatic Driver Selection
#### Hardware Detection and Driver Selection
```bash
# AKMODS automatic driver selection
akmods-apt-daemon auto-select-nvidia
├── 1. Hardware Detection
│ ├── Detect GPU model
│ ├── Check GPU capabilities
│ ├── Identify hardware generation
│ └── Check current driver status
├── 2. Driver Selection
│ ├── Query available drivers
│ ├── Check compatibility
│ ├── Select optimal driver
│ └── Validate selection
├── 3. Installation
│ ├── Download selected driver
│ ├── Install kernel modules
│ ├── Configure system
│ └── Test installation
└── 4. Verification
├── Verify module loading
├── Test GPU functionality
├── Check performance
└── Log results
```
#### Driver Selection Logic
```python
# AKMODS driver selection algorithm (sketch)
# Note: generations are compared by set membership, not string ordering;
# comparing generation names with >= would sort alphabetically and misfire.
MODERN_GENERATIONS = {"turing", "ampere", "ada", "blackwell"}  # Turing and newer

def select_nvidia_driver(gpu_info):
    if gpu_info.generation in MODERN_GENERATIONS:
        if gpu_info.series in ("rtx_50", "rtx_40", "rtx_30"):
            return "nvidia-open"        # best for modern hardware
        return "nvidia-driver-535"      # proprietary for older Turing
    if gpu_info.generation == "pascal":
        return "nvidia-driver-470"      # legacy support
    if gpu_info.generation == "maxwell":
        return "nvidia-driver-390"      # very legacy support
    return "nouveau"                    # open source fallback
```
### 2. Driver Update Management
#### Automated Driver Updates
```bash
# AKMODS driver update workflow
akmods-apt-daemon update-nvidia-drivers
├── 1. Check for Updates
│ ├── Monitor NVIDIA releases
│ ├── Check PPA updates
│ ├── Verify kernel compatibility
│ └── Assess update necessity
├── 2. Build New Drivers
│ ├── Download new sources
│ ├── Apply compatibility patches
│ ├── Build kernel modules
│ └── Create packages
├── 3. Test Updates
│ ├── Test in isolated environment
│ ├── Verify compatibility
│ ├── Performance testing
│ └── Stability validation
└── 4. Deploy Updates
├── Upload to registry
├── Notify systems
├── Schedule updates
└── Monitor deployment
```
### 3. Multi-Driver Support
#### Driver Switching and Management
```bash
# AKMODS multi-driver management
akmods-apt-daemon switch-nvidia-driver nvidia-open
├── 1. Driver Validation
│ ├── Check hardware compatibility
│ ├── Verify kernel support
│ ├── Test driver functionality
│ └── Backup current driver
├── 2. Driver Installation
│ ├── Install new driver modules
│ ├── Update system configuration
│ ├── Configure module loading
│ └── Update boot configuration
├── 3. System Configuration
│ ├── Update X11 configuration
│ ├── Configure Wayland support
│ ├── Set up GPU switching
│ └── Configure power management
└── 4. Verification
├── Test GPU functionality
├── Verify performance
├── Check stability
└── Rollback if needed
```
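The verify-then-rollback step that closes this flow can be sketched as a small function. `install` and `verify` are hypothetical injected callables standing in for the daemon's module installer and functionality check; this is a sketch of the control flow, not the daemon's implementation:

```python
def switch_driver(current: str, target: str, install, verify) -> str:
    """Switch to the target driver, rolling back to the previous one on failure.

    install and verify are injected callables (hypothetical stand-ins for
    the daemon's installer and GPU functionality check).
    Returns the driver left active.
    """
    install(target)
    if verify(target):
        return target
    # Verification failed: restore the previously working driver
    install(current)
    return current

# Simulated run where the new driver fails verification
active = switch_driver("nvidia-driver-535", "nvidia-open",
                       install=lambda driver: None,
                       verify=lambda driver: driver != "nvidia-open")
print(active)   # nvidia-driver-535
```

Keeping the previous driver identifier around until verification passes is what makes the switch safe on an atomic system.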
## AKMODS NVIDIA Gaming Variants
### 1. Particle-OS Bazzite Gaming (NVIDIA AKMODS)
#### Variant Configuration
```json
{
  "name": "particle-os-bazzite-gaming-nvidia-akmods",
  "base": "ubuntu-base/25.04",
  "description": "Gaming-focused variant with AKMODS NVIDIA support",
  "akmods_modules": [
    "nvidia-open",
    "v4l2loopback",
    "gpd-fan-kmod"
  ],
  "packages": [
    "steam",
    "wine",
    "lutris",
    "gamemode",
    "mangohud",
    "proton-ge-custom",
    "nvidia-settings",
    "nvidia-prime"
  ],
  "repositories": [
    "nvidia",
    "steam",
    "lutris"
  ],
  "configurations": [
    "gaming-performance",
    "nvidia-prime",
    "steam-integration",
    "wayland-gaming"
  ],
  "nvidia_config": {
    "primary_driver": "nvidia-open",
    "fallback_driver": "nvidia-driver-535",
    "auto_switch": true,
    "gaming_optimizations": true,
    "wayland_support": true
  }
}
```
### 2. Particle-OS Corona Gaming (NVIDIA AKMODS)
#### Variant Configuration
```json
{
  "name": "particle-os-corona-gaming-nvidia-akmods",
  "base": "ubuntu-base/24.04",
  "description": "KDE Plasma gaming variant with AKMODS NVIDIA support",
  "akmods_modules": [
    "nvidia-open",
    "v4l2loopback"
  ],
  "packages": [
    "kde-plasma-desktop",
    "steam",
    "wine",
    "gamemode",
    "nvidia-settings",
    "nvidia-prime"
  ],
  "repositories": [
    "nvidia",
    "steam"
  ],
  "configurations": [
    "kde-gaming",
    "nvidia-prime",
    "steam-integration"
  ],
  "nvidia_config": {
    "primary_driver": "nvidia-open",
    "fallback_driver": "nvidia-driver-535",
    "auto_switch": true,
    "kde_integration": true
  }
}
```
## AKMODS NVIDIA Performance Optimization
### 1. Gaming Performance Tuning
#### AKMODS Performance Configuration
```yaml
# AKMODS performance tuning
performance:
  nvidia_open:
    kernel_parameters:
      - "nvidia.NVreg_UsePageAttributeTable=1"
      - "nvidia.NVreg_EnablePCIeGen3=1"
      - "nvidia.NVreg_InitializeSystemMemoryAllocations=1"
    x11_config:
      - "Option \"TripleBuffer\" \"true\""
      - "Option \"AllowIndirectGLXProtocol\" \"off\""
    environment_variables:
      - "__GL_SYNC_TO_VBLANK=0"
      - "__GL_THREADED_OPTIMIZATIONS=1"
      - "__GL_YIELD=NOTHING"
      - "DRI_PRIME=1"
  nvidia_proprietary:
    kernel_parameters:
      - "nvidia.NVreg_UsePageAttributeTable=1"
      - "nvidia.NVreg_EnablePCIeGen3=1"
      - "nvidia.NVreg_InitializeSystemMemoryAllocations=1"
    x11_config:
      - "Option \"TripleBuffer\" \"true\""
      - "Option \"AllowIndirectGLXProtocol\" \"off\""
    environment_variables:
      - "__GL_SYNC_TO_VBLANK=0"
      - "__GL_THREADED_OPTIMIZATIONS=1"
      - "__GL_YIELD=NOTHING"
      - "DRI_PRIME=1"
```
### 2. Power Management
#### AKMODS Power Management
```yaml
# AKMODS power management
power_management:
  nvidia_open:
    power_modes:
      - "performance"
      - "balanced"
      - "power_save"
    auto_switch:
      enabled: true
      ac_power: "performance"
      battery_power: "balanced"
    thermal_management:
      fan_control: true
      thermal_threshold: 85
      thermal_action: "throttle"
  nvidia_proprietary:
    power_modes:
      - "prefer_maximum_performance"
      - "adaptive"
      - "auto"
    auto_switch:
      enabled: true
      ac_power: "prefer_maximum_performance"
      battery_power: "adaptive"
    thermal_management:
      fan_control: true
      thermal_threshold: 85
      thermal_action: "throttle"
```
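The `auto_switch` policy above amounts to a one-line decision once power state is known. A minimal sketch, assuming the nvidia-open profile from the YAML; the `select_power_mode` helper is illustrative:

```python
# auto_switch rules copied from the nvidia-open profile above
AUTO_SWITCH = {"ac_power": "performance", "battery_power": "balanced"}

def select_power_mode(on_ac: bool, rules=AUTO_SWITCH) -> str:
    """Pick the power mode the auto_switch policy would apply."""
    return rules["ac_power"] if on_ac else rules["battery_power"]

print(select_power_mode(True))    # performance
print(select_power_mode(False))   # balanced
```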
## AKMODS NVIDIA Troubleshooting
### 1. Common Issues and Solutions
#### Driver Installation Failures
```bash
# AKMODS troubleshooting commands
akmods-apt-daemon diagnose-nvidia
├── Check hardware detection
├── Verify kernel compatibility
├── Test driver installation
├── Validate module loading
└── Generate diagnostic report
akmods-apt-daemon fix-nvidia-driver
├── Remove problematic drivers
├── Clean build environment
├── Reinstall compatible driver
├── Update system configuration
└── Verify functionality
```
#### Performance Issues
```bash
# Performance troubleshooting
akmods-apt-daemon optimize-nvidia
├── Check current configuration
├── Apply performance optimizations
├── Test gaming performance
├── Validate stability
└── Save optimized configuration
```
### 2. Rollback and Recovery
#### Driver Rollback
```bash
# AKMODS driver rollback
akmods-apt-daemon rollback-nvidia
├── Identify previous working driver
├── Remove current driver
├── Install previous driver
├── Restore configuration
└── Verify system stability
```
## Conclusion
AKMODS provides a comprehensive solution for NVIDIA driver management in Particle-OS, addressing the limitations of Ubuntu/Debian repositories by:
1. **Automated Driver Preparation**: Converting official .run files and PPA packages into AKMODS modules
2. **nvidia-open Support**: Providing open source drivers for modern Turing+ hardware
3. **Intelligent Driver Selection**: Automatically choosing the best driver for detected hardware
4. **Performance Optimization**: Applying gaming and performance optimizations
5. **Atomic Operations**: Ensuring safe driver installation and rollback
6. **Live System Support**: Installing drivers without rebooting
This approach ensures that Particle-OS users have access to the latest NVIDIA drivers while maintaining system stability and providing the flexibility to choose between proprietary and open source drivers based on their hardware and preferences.

# AKMODS Kernel Compatibility Patch Management
## Overview
This document addresses the critical challenge of kernel compatibility patch management in AKMODS, explaining how we handle the "magic" component of automatically adapting proprietary NVIDIA drivers to new kernel versions.
## The Patch Management Challenge
### The Problem
- **NVIDIA Drivers**: Proprietary, closed-source kernel modules
- **Kernel Evolution**: Linux kernel APIs change frequently
- **Manual Effort**: Traditionally requires human developers to create compatibility patches
- **Maintenance Burden**: Continuous effort to keep drivers working with new kernels
### The AKMODS Solution
AKMODS implements a multi-layered approach to patch management, combining automated detection, community patches, and intelligent fallback mechanisms.
## Patch Sources and Management
### 1. **Community Patch Database**
#### Primary Patch Sources
```yaml
# AKMODS patch database structure
patch_sources:
  rpm_fusion:
    url: "https://github.com/rpmfusion/nvidia-driver"
    description: "Fedora/RPM Fusion patches"
    coverage: "Most kernel versions"
    quality: "High"
  arch_linux:
    url: "https://github.com/archlinux/svntogit-packages"
    description: "Arch Linux patches"
    coverage: "Latest kernels"
    quality: "High"
  ubuntu_kernel:
    url: "https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux"
    description: "Ubuntu kernel patches"
    coverage: "Ubuntu kernels"
    quality: "High"
  nvidia_community:
    url: "https://github.com/NVIDIA/open-gpu-kernel-modules"
    description: "NVIDIA community patches"
    coverage: "Limited"
    quality: "Variable"
  particle_os:
    url: "https://github.com/particle-os/akmods-patches"
    description: "Particle-OS specific patches"
    coverage: "Particle-OS kernels"
    quality: "High"
```
#### Patch Database Management
```bash
# AKMODS patch database structure
/var/lib/akmods-apt/patches/
├── nvidia-driver-535/
│ ├── kernel-6.8/
│ │ ├── rpm-fusion/
│ │ │ ├── nvidia-535-kernel-6.8.patch
│ │ │ ├── drm-api-updates.patch
│ │ │ └── memory-management.patch
│ │ ├── arch-linux/
│ │ │ ├── nvidia-535-kernel-6.8.patch
│ │ │ └── pci-api-changes.patch
│ │ └── particle-os/
│ │ ├── nvidia-535-kernel-6.8.patch
│ │ └── ubuntu-specific.patch
│ ├── kernel-6.7/
│ │ └── ...
│ └── kernel-6.6/
│ └── ...
├── nvidia-driver-470/
│ └── ...
└── nvidia-open/
└── ...
```
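Resolving a patch directory from this layout is pure path construction. A minimal sketch assuming the directory tree above; the `patch_dir` helper is hypothetical, not part of the daemon:

```python
from pathlib import PurePosixPath

# Root of the on-disk patch database shown above
PATCH_ROOT = PurePosixPath("/var/lib/akmods-apt/patches")

def patch_dir(driver: str, kernel_series: str, source: str,
              root: PurePosixPath = PATCH_ROOT) -> PurePosixPath:
    """Resolve the directory for one driver/kernel-series/source combination,
    following the <root>/<driver>/kernel-<series>/<source>/ layout."""
    return root / driver / f"kernel-{kernel_series}" / source

print(patch_dir("nvidia-driver-535", "6.8", "rpm-fusion"))
# -> /var/lib/akmods-apt/patches/nvidia-driver-535/kernel-6.8/rpm-fusion
```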
### 2. **Automated Patch Detection and Application**
#### Patch Selection Algorithm
```python
# AKMODS patch selection algorithm
def select_patches(driver_version, kernel_version, hardware_info):
    patches = []
    # 1. Check Particle-OS specific patches first
    particle_patches = get_particle_os_patches(driver_version, kernel_version)
    if particle_patches:
        patches.extend(particle_patches)
        return patches
    # 2. Check RPM Fusion patches (most comprehensive)
    rpm_patches = get_rpm_fusion_patches(driver_version, kernel_version)
    if rpm_patches:
        patches.extend(rpm_patches)
        return patches
    # 3. Check Arch Linux patches (latest kernels)
    arch_patches = get_arch_linux_patches(driver_version, kernel_version)
    if arch_patches:
        patches.extend(arch_patches)
        return patches
    # 4. Check Ubuntu kernel patches
    ubuntu_patches = get_ubuntu_patches(driver_version, kernel_version)
    if ubuntu_patches:
        patches.extend(ubuntu_patches)
        return patches
    # 5. Fallback to community patches
    community_patches = get_community_patches(driver_version, kernel_version)
    if community_patches:
        patches.extend(community_patches)
        return patches
    # 6. No patches available - mark for manual review
    return mark_for_manual_review(driver_version, kernel_version)
```
#### Automated Patch Application
```bash
# AKMODS automated patch application
akmods-apt-daemon apply-kernel-patches nvidia-driver-535 6.8.0-25-generic
├── 1. Query patch database
│ ├── Check Particle-OS patches
│ ├── Check RPM Fusion patches
│ ├── Check Arch Linux patches
│ └── Check Ubuntu patches
├── 2. Select optimal patch set
│ ├── Rank patches by quality
│ ├── Check patch compatibility
│ ├── Verify patch integrity
│ └── Select best patch combination
├── 3. Apply patches
│ ├── Apply kernel compatibility patches
│ ├── Apply driver-specific patches
│ ├── Apply hardware-specific patches
│ └── Verify patch application
└── 4. Validate patched modules
├── Test compilation
├── Verify symbol resolution
├── Check module loading
└── Generate validation report
```
### 3. **Patch Generation and Maintenance**
#### Automated Patch Generation (Experimental)
```python
# AKMODS automated patch generation (experimental)
def generate_patch_automatically(driver_version, old_kernel, new_kernel):
    """
    Experimental: automated patch generation using semantic analysis.
    """
    # 1. Extract kernel API changes
    api_changes = extract_kernel_api_changes(old_kernel, new_kernel)
    # 2. Analyze driver source (if available)
    driver_apis = analyze_driver_apis(driver_version)
    # 3. Identify compatibility issues
    compatibility_issues = identify_compatibility_issues(api_changes, driver_apis)
    # 4. Generate patch candidates
    patch_candidates = generate_patch_candidates(compatibility_issues)
    # 5. Validate and refine patches
    return validate_patches(patch_candidates, driver_version, new_kernel)
```
#### Community-Driven Patch Maintenance
```bash
# AKMODS community patch workflow
akmods-apt-daemon submit-patch nvidia-driver-535 kernel-6.8
├── 1. Create patch submission
│ ├── Describe kernel changes
│ ├── Document patch rationale
│ ├── Include test results
│ └── Add hardware compatibility info
├── 2. Submit to community review
│ ├── Submit to Particle-OS patch repository
│ ├── Notify community maintainers
│ ├── Request review and testing
│ └── Track review status
├── 3. Community validation
│ ├── Peer review process
│ ├── Automated testing
│ ├── Hardware compatibility testing
│ └── Integration testing
└── 4. Patch acceptance
├── Merge into patch database
├── Update patch metadata
├── Notify AKMODS systems
└── Deploy to production
```
### 4. **Fallback Mechanisms**
#### No Patch Available Scenario
```bash
# AKMODS fallback when no patches are available
akmods-apt-daemon handle-no-patch nvidia-driver-535 6.8.0-25-generic
├── 1. Detect no patch availability
│ ├── Query all patch sources
│ ├── Confirm no patches found
│ ├── Log the situation
│ └── Notify administrators
├── 2. Implement fallback strategy
│ ├── Try older compatible kernel
│ ├── Use alternative driver version
│ ├── Switch to nvidia-open (if supported)
│ └── Fall back to nouveau
├── 3. User notification
│ ├── Inform user of limitation
│ ├── Provide alternative options
│ ├── Offer downgrade options
│ └── Request community assistance
└── 4. Track for future resolution
├── Add to manual review queue
├── Monitor for patch availability
├── Notify when patches become available
└── Update patch database
```
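The fallback chain in step 2 can be modeled as an ordered list of attempts. In this sketch the attempt functions are stand-ins whose return values are assumptions for illustration:

```python
# Each attempt function reports whether that strategy succeeded on this machine.
def try_older_kernel():
    return False  # assume no older compatible kernel is installed

def try_alternative_driver():
    return False  # assume no alternative proprietary version is available

def try_nvidia_open():
    return True   # assume this GPU generation supports nvidia-open

def try_nouveau():
    return True   # nouveau is always available as a last resort

FALLBACK_CHAIN = [
    ("older_kernel", try_older_kernel),
    ("alternative_driver", try_alternative_driver),
    ("nvidia_open", try_nvidia_open),
    ("nouveau", try_nouveau),
]

def handle_no_patch():
    """Walk the chain in order and report the first strategy that works."""
    for name, attempt in FALLBACK_CHAIN:
        if attempt():
            return name
    return "manual_review_queue"
```

With the assumed results above, the chain stops at `nvidia_open`; if every attempt failed, the system would land in the manual review queue described in step 4.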
#### Manual Patch Creation Process
```bash
# AKMODS manual patch creation workflow
akmods-apt-daemon create-manual-patch nvidia-driver-535 6.8.0-25-generic
├── 1. Set up development environment
│ ├── Create isolated build environment
│ ├── Install development tools
│ ├── Set up kernel source
│ └── Prepare driver source
├── 2. Analyze compatibility issues
│ ├── Identify compilation errors
│ ├── Map kernel API changes
│ ├── Document required changes
│ └── Plan patch strategy
├── 3. Create and test patches
│ ├── Create compatibility patches
│ ├── Test patch application
│ ├── Verify compilation
│ └── Test module loading
├── 4. Validate patches
│ ├── Test on multiple systems
│ ├── Verify hardware compatibility
│ ├── Check performance impact
│ └── Document patch details
└── 5. Submit for review
├── Submit to community
├── Request testing
├── Document patch
└── Update patch database
```
## Patch Quality Assurance
### 1. **Automated Testing**
```yaml
# AKMODS patch testing framework
patch_testing:
  compilation_test:
    - "Verify module compilation"
    - "Check symbol resolution"
    - "Validate module dependencies"
  functionality_test:
    - "Test module loading"
    - "Verify basic GPU functionality"
    - "Check display output"
    - "Test CUDA functionality (if applicable)"
  compatibility_test:
    - "Test on multiple kernel versions"
    - "Verify hardware compatibility"
    - "Check performance impact"
    - "Validate stability"
  integration_test:
    - "Test with Particle-OS atomic operations"
    - "Verify live system installation"
    - "Check rollback functionality"
    - "Test system stability"
```
### 2. **Patch Validation Pipeline**
```bash
# AKMODS patch validation pipeline
akmods-apt-daemon validate-patch nvidia-535-kernel-6.8.patch
├── 1. Patch integrity check
│ ├── Verify patch format
│ ├── Check patch syntax
│ ├── Validate file paths
│ └── Check patch metadata
├── 2. Compilation testing
│ ├── Apply patch to driver source
│ ├── Compile kernel modules
│ ├── Check for compilation errors
│ └── Verify module generation
├── 3. Functionality testing
│ ├── Load compiled modules
│ ├── Test basic GPU functionality
│ ├── Verify display output
│ └── Check system stability
├── 4. Compatibility testing
│ ├── Test on target kernel version
│ ├── Verify hardware compatibility
│ ├── Check performance impact
│ └── Validate long-term stability
└── 5. Integration testing
├── Test with AKMODS pipeline
├── Verify package creation
├── Test installation process
└── Validate rollback functionality
```
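Step 1 of the pipeline (patch integrity and format checks) can begin with something as cheap as confirming the file is a well-formed unified diff before any compilation testing is spent on it. A minimal sketch:

```python
def looks_like_unified_diff(text: str) -> bool:
    """Cheap pre-check: a unified diff needs ---/+++ file headers
    and at least one @@ hunk header."""
    lines = text.splitlines()
    return (any(l.startswith("--- ") for l in lines)
            and any(l.startswith("+++ ") for l in lines)
            and any(l.startswith("@@ ") for l in lines))

# Illustrative patch body (file paths are hypothetical)
SAMPLE = """--- a/nvidia/nv.c
+++ b/nvidia/nv.c
@@ -1,3 +1,3 @@
-old line
+new line
 context
"""
```

A real validator would go on to check file paths and metadata as the pipeline describes; this gate just filters out obviously malformed submissions early.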
## Patch Database Management
### 1. **Database Structure**
```sql
-- AKMODS patch database schema
CREATE TABLE patches (
    id INTEGER PRIMARY KEY,
    driver_version VARCHAR(20) NOT NULL,
    kernel_version VARCHAR(20) NOT NULL,
    patch_name VARCHAR(100) NOT NULL,
    source VARCHAR(50) NOT NULL,
    quality_score INTEGER,
    success_rate FLOAT,
    created_date TIMESTAMP,
    updated_date TIMESTAMP,
    status VARCHAR(20)
);

CREATE TABLE patch_files (
    id INTEGER PRIMARY KEY,
    patch_id INTEGER,
    file_path VARCHAR(200),
    file_content TEXT,
    checksum VARCHAR(64),
    FOREIGN KEY (patch_id) REFERENCES patches(id)
);

CREATE TABLE patch_tests (
    id INTEGER PRIMARY KEY,
    patch_id INTEGER,
    test_type VARCHAR(50),
    test_result VARCHAR(20),
    test_date TIMESTAMP,
    FOREIGN KEY (patch_id) REFERENCES patches(id)
);
```
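With this schema, patch selection for a given driver/kernel pair reduces to an ordered query. A sketch against an in-memory SQLite database, using a trimmed version of the `patches` table above with illustrative rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE patches (
    id INTEGER PRIMARY KEY,
    driver_version VARCHAR(20) NOT NULL,
    kernel_version VARCHAR(20) NOT NULL,
    patch_name VARCHAR(100) NOT NULL,
    source VARCHAR(50) NOT NULL,
    quality_score INTEGER,
    success_rate FLOAT,
    status VARCHAR(20)
)""")
conn.executemany(
    "INSERT INTO patches (driver_version, kernel_version, patch_name, source,"
    " quality_score, success_rate, status) VALUES (?, ?, ?, ?, ?, ?, ?)",
    [("535", "6.8", "a.patch", "rpmfusion", 90, 0.98, "validated"),
     ("535", "6.8", "b.patch", "community", 60, 0.80, "validated")])

# Pick the highest-quality validated patch for this driver/kernel pair
row = conn.execute(
    "SELECT patch_name FROM patches"
    " WHERE driver_version = ? AND kernel_version = ? AND status = 'validated'"
    " ORDER BY quality_score DESC, success_rate DESC LIMIT 1",
    ("535", "6.8")).fetchone()
```

The `ORDER BY quality_score DESC, success_rate DESC` clause encodes the same ranking the selection algorithm applies across patch sources.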
### 2. **Patch Update Process**
```bash
# AKMODS patch database update
akmods-apt-daemon update-patch-database
├── 1. Monitor patch sources
│ ├── Check RPM Fusion updates
│ ├── Monitor Arch Linux patches
│ ├── Check Ubuntu kernel patches
│ └── Monitor community patches
├── 2. Download new patches
│ ├── Download patch files
│ ├── Verify patch integrity
│ ├── Extract patch metadata
│ └── Store in database
├── 3. Validate new patches
│ ├── Run automated tests
│ ├── Check compatibility
│ ├── Verify quality
│ └── Update patch status
└── 4. Deploy updates
├── Update patch database
├── Notify AKMODS systems
├── Update patch index
└── Log update activity
```
## Conclusion
The AKMODS patch management system addresses the critical challenge of kernel compatibility through a multi-layered approach:
1. **Community-Driven**: Leverages existing community patches from RPM Fusion, Arch Linux, and Ubuntu
2. **Automated Selection**: Intelligently selects the best available patches for each driver/kernel combination
3. **Quality Assurance**: Comprehensive testing and validation of all patches
4. **Fallback Mechanisms**: Graceful handling when no patches are available
5. **Continuous Improvement**: Community-driven patch creation and maintenance
This approach significantly reduces the maintenance burden while ensuring reliable driver compatibility across kernel versions. While it doesn't eliminate the need for human expertise entirely, it maximizes the reuse of existing community work and provides clear processes for handling edge cases.

# AKMODS Kernel Compatibility Patch Strategy
## Overview
This document addresses the critical challenge of kernel compatibility patching for AKMODS, which is the most significant operational bottleneck for systems relying on proprietary kernel modules. The strategy outlines multiple approaches for sourcing, maintaining, and applying kernel patches.
## The Challenge
### Why Patches Are Critical
- Proprietary drivers (NVIDIA, VirtualBox, etc.) are not updated as frequently as the Linux kernel
- Kernel API changes between versions can break existing drivers
- Manual patch development requires deep kernel expertise
- Patch maintenance is ongoing and labor-intensive
### Current State Analysis
- **NVIDIA Drivers**: Require patches for most kernel updates
- **VirtualBox**: Often needs patches for new kernel versions
- **Custom Hardware**: May need specialized patches
- **Community Patches**: Available but not always timely or compatible
## Strategy 1: Community-Driven Patch Sourcing
### Primary Sources
```yaml
# Community patch sources
sources:
  ubuntu-kernel-team:
    location: "https://git.launchpad.net/~ubuntu-kernel-team/ubuntu/+source/linux/+git"
    reliability: "High"
    coverage: "Ubuntu kernels"
  rpmfusion:
    location: "https://github.com/rpmfusion/kernel-module-nvidia"
    reliability: "High"
    coverage: "Fedora kernels"
  arch-aur:
    location: "https://aur.archlinux.org/packages/nvidia-dkms"
    reliability: "Medium"
    coverage: "Arch kernels"
  nvidia-forum:
    location: "https://forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-agx/70"
    reliability: "Variable"
    coverage: "NVIDIA-specific"
```
### Patch Collection Process
```bash
# Automated patch collection
akmods-patch-collector:
├── Monitor community sources
├── Download and validate patches
├── Test patches against target kernels
├── Store in patch database
└── Notify build system of new patches
```
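The "download and validate patches" and "store in patch database" steps above can be sketched as follows; the metadata sidecar format is an assumption for illustration:

```python
import hashlib
import json
import pathlib
import tempfile

def store_patch(db_dir: pathlib.Path, name: str, content: bytes, source: str) -> dict:
    """Store a collected patch alongside a JSON sidecar recording its
    source and checksum, so later validation can detect tampering."""
    db_dir.mkdir(parents=True, exist_ok=True)
    (db_dir / name).write_bytes(content)
    meta = {"name": name, "source": source,
            "sha256": hashlib.sha256(content).hexdigest()}
    (db_dir / (name + ".json")).write_text(json.dumps(meta, indent=2))
    return meta
```

A real collector would fetch `content` from the monitored sources and run the compatibility tests before marking the patch usable; this only shows the storage and checksum bookkeeping.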
### Implementation Tasks
- [ ] Create patch monitoring system
- [ ] Implement patch validation framework
- [ ] Build patch compatibility testing
- [ ] Create patch database management
- [ ] Develop patch quality assessment
## Strategy 2: Automated Patch Generation (Experimental)
### AI-Assisted Patch Creation
```python
# Automated patch generation approach
class AutomatedPatchGenerator:
    def __init__(self):
        self.kernel_analyzer = KernelAPIAnalyzer()
        self.driver_analyzer = DriverSourceAnalyzer()
        self.patch_generator = AIPatchGenerator()

    def generate_patch(self, driver_source, old_kernel, new_kernel):
        # Analyze kernel API changes
        api_changes = self.kernel_analyzer.detect_changes(old_kernel, new_kernel)
        # Analyze driver source code
        driver_analysis = self.driver_analyzer.analyze(driver_source)
        # Generate potential patches
        patches = self.patch_generator.generate(api_changes, driver_analysis)
        # Validate and test patches
        return self.validate_patches(patches)
```
### Machine Learning Approach
```yaml
# ML-based patch generation
ml_patch_generation:
  training_data:
    historical_patches: "Database of successful patches"
    kernel_changes: "Kernel API change history"
    driver_adaptations: "How drivers adapted to changes"
  model_training:
    input: "Kernel API changes + driver source"
    output: "Generated patch candidates"
    validation: "Automated testing against target kernel"
  confidence_scoring:
    high_confidence: "Apply automatically"
    medium_confidence: "Human review required"
    low_confidence: "Manual patch development needed"
```
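The confidence-tier routing above can be sketched as a small function; the 0.9/0.6 cut-offs are illustrative values, not thresholds defined by this document:

```python
def route_patch(confidence: float) -> str:
    """Map a model confidence score to a handling tier.
    The 0.9 and 0.6 thresholds are assumptions for illustration."""
    if confidence >= 0.9:
        return "apply_automatically"
    if confidence >= 0.6:
        return "human_review"
    return "manual_development"
```

In practice the thresholds would be tuned against the historical success rate of automatically applied patches.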
### Implementation Tasks
- [ ] Develop kernel API change detection
- [ ] Create driver source analysis tools
- [ ] Build ML model for patch generation
- [ ] Implement automated patch validation
- [ ] Create confidence scoring system
## Strategy 3: Dedicated Patch Development Team
### Team Structure
```bash
# Patch development team
patch_team:
roles:
- kernel_developers: "Deep kernel expertise"
- driver_specialists: "Driver-specific knowledge"
- automation_engineers: "Build and test automation"
- community_liaisons: "External community coordination"
responsibilities:
- monitor_kernel_updates: "Track kernel development"
- develop_patches: "Create compatibility patches"
- test_patches: "Validate patch effectiveness"
- maintain_patch_database: "Organize and version patches"
- coordinate_with_communities: "Share and receive patches"
```
### Development Workflow
```bash
# Patch development workflow
workflow:
1. kernel_release:
- Monitor kernel release candidates
- Identify potential API changes
- Assess impact on supported drivers
2. patch_development:
- Create patches for affected drivers
- Test patches against multiple kernel versions
- Validate patch compatibility
3. patch_deployment:
- Add patches to AKMODS build system
- Update patch database
- Notify build daemon of new patches
4. patch_maintenance:
- Monitor patch effectiveness
- Update patches as needed
- Remove obsolete patches
```
### Implementation Tasks
- [ ] Recruit kernel development team
- [ ] Set up development infrastructure
- [ ] Create patch development workflow
- [ ] Implement patch testing framework
- [ ] Establish community coordination
## Strategy 4: Hybrid Approach (Recommended)
### Combined Strategy
```yaml
# Hybrid patch management
hybrid_approach:
  primary:
    strategy: "Community-driven sourcing"
    tasks:
      - "Monitor multiple community sources"
      - "Automatically collect and validate patches"
      - "Use community patches as primary source"
  secondary:
    strategy: "Automated generation"
    tasks:
      - "Use ML/AI for patch generation"
      - "Apply high-confidence patches automatically"
      - "Require human review for medium/low confidence"
  tertiary:
    strategy: "Dedicated team"
    tasks:
      - "Maintain team for critical patches"
      - "Handle edge cases and complex scenarios"
      - "Coordinate with external communities"
  fallback:
    strategy: "Manual development"
    tasks:
      - "Develop patches manually when needed"
      - "Handle proprietary or specialized drivers"
      - "Maintain patches for custom hardware"
```
### Implementation Priority
```bash
# Implementation phases
phase_1: "Community-driven sourcing"
- Set up patch monitoring system
- Implement patch collection and validation
- Create patch database management
phase_2: "Dedicated team development"
- Recruit kernel development team
- Set up development infrastructure
- Create patch development workflow
phase_3: "Automated generation (experimental)"
- Develop ML-based patch generation
- Implement automated validation
- Integrate with existing systems
```
## Patch Database Management
### Database Structure
```sql
-- Patch database schema
CREATE TABLE patches (
    id SERIAL PRIMARY KEY,
    driver_name VARCHAR(100) NOT NULL,
    kernel_version VARCHAR(50) NOT NULL,
    patch_file TEXT NOT NULL,
    source VARCHAR(100) NOT NULL,
    confidence_score DECIMAL(3,2),
    test_status VARCHAR(20),
    created_date TIMESTAMP,
    updated_date TIMESTAMP
);

CREATE TABLE patch_sources (
    id SERIAL PRIMARY KEY,
    source_name VARCHAR(100) NOT NULL,
    source_url VARCHAR(255),
    reliability_score DECIMAL(3,2),
    last_updated TIMESTAMP
);
```
### Patch Lifecycle Management
```bash
# Patch lifecycle
lifecycle:
collection:
- Monitor sources for new patches
- Download and validate patches
- Store in database with metadata
testing:
- Test patches against target kernels
- Validate patch effectiveness
- Update confidence scores
deployment:
- Deploy patches to build system
- Monitor patch performance
- Update patch status
maintenance:
- Monitor for patch failures
- Update patches as needed
- Remove obsolete patches
```
## Quality Assurance
### Patch Validation Framework
```bash
# Patch validation process
validation:
automated_tests:
- kernel_compilation: "Ensure kernel compiles with patch"
- module_compilation: "Ensure module compiles with patch"
- module_loading: "Test module loading and unloading"
- functionality_tests: "Test module functionality"
manual_review:
- code_quality: "Review patch code quality"
- security_analysis: "Check for security implications"
- compatibility_verification: "Verify kernel compatibility"
integration_tests:
- system_boot: "Test system boot with patched modules"
- performance_tests: "Measure performance impact"
- stability_tests: "Long-term stability testing"
```
### Success Metrics
```yaml
# Patch management metrics
metrics:
  patch_availability:
    target: ">95% of kernel updates have patches within 48 hours"
    measurement: "Time from kernel release to patch availability"
  patch_effectiveness:
    target: ">98% of patches work without issues"
    measurement: "Success rate of applied patches"
  automation_level:
    target: ">80% of patches sourced automatically"
    measurement: "Percentage of patches from automated sources"
  community_contribution:
    target: ">50% of patches from community sources"
    measurement: "Percentage of community-sourced patches"
```
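The patch-availability metric, for example, can be computed directly from the delay between each kernel release and the arrival of its first usable patch:

```python
def patch_availability_rate(delays_hours, window_hours=48):
    """Fraction of kernel updates whose patch landed within the SLA window
    (48 hours, per the target above)."""
    if not delays_hours:
        return 0.0
    return sum(1 for d in delays_hours if d <= window_hours) / len(delays_hours)
```

Feeding this function the observed delays per kernel release gives the number to compare against the >95% target.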
## Risk Mitigation
### Risks and Mitigations
```yaml
risks:
  community_dependency:
    risk: "Over-reliance on community patches"
    mitigation: "Maintain dedicated team for critical patches"
  patch_quality:
    risk: "Poor quality patches causing system instability"
    mitigation: "Comprehensive testing and validation framework"
  automation_failure:
    risk: "Automated systems failing to generate patches"
    mitigation: "Multiple fallback strategies and manual processes"
  resource_constraints:
    risk: "Insufficient resources for patch development"
    mitigation: "Prioritize critical drivers and community collaboration"
```
## Conclusion
The kernel compatibility patch challenge is indeed the most significant operational bottleneck for AKMODS. The hybrid approach combining community-driven sourcing, dedicated team development, and experimental automated generation provides the best balance of reliability, efficiency, and sustainability.
Key success factors:
1. **Community Collaboration**: Leverage existing community efforts
2. **Dedicated Resources**: Maintain team for critical patches
3. **Automation Investment**: Develop automated tools to reduce manual effort
4. **Quality Focus**: Comprehensive testing and validation
5. **Continuous Improvement**: Learn from patch successes and failures
This strategy ensures that Particle-OS can maintain a robust AKMODS system while managing the significant ongoing effort required for kernel compatibility patches.

# AKMODS Performance Optimization Strategy
## Overview
This document addresses the performance challenges and optimization strategies for AKMODS, particularly focusing on build time targets, resource utilization, and scalability. The goal is to achieve the ambitious target of <30 minutes per module while maintaining quality and reliability.
## Performance Targets and Realities
### Current Performance Targets
```yaml
# Performance targets
targets:
  build_time:
    simple_modules: "<10 minutes"
    complex_modules: "<30 minutes"
    nvidia_drivers: "<45 minutes"
  resource_utilization:
    cpu_usage: "80-90% during compilation"
    memory_usage: "<8GB per build container"
    disk_io: "Optimized for parallel builds"
  scalability:
    parallel_builds: "Up to 10 concurrent builds"
    build_queue: "Process 100+ builds per day"
    registry_throughput: "1000+ package downloads per hour"
```
### Realistic Performance Expectations
```bash
# Performance reality check
performance_realities:
nvidia_drivers:
- driver_535: "25-45 minutes (depending on kernel version)"
- driver_470: "20-35 minutes (legacy, smaller codebase)"
- nvidia_open: "15-25 minutes (open source, optimized)"
common_modules:
- v4l2loopback: "5-10 minutes"
- gpd_fan_kmod: "3-8 minutes"
- virtualbox: "15-25 minutes"
build_overhead:
- container_startup: "30-60 seconds"
- dependency_installation: "2-5 minutes"
- patch_application: "1-3 minutes"
- package_creation: "1-2 minutes"
```
## Build Performance Optimization
### Container Optimization
#### Optimized Build Container
```dockerfile
# Optimized AKMODS build container
FROM ubuntu:24.04 AS base
RUN apt-get update && apt-get install -y \
    build-essential \
    linux-headers-generic \
    dkms \
    git \
    wget \
    debhelper \
    dh-dkms \
    kernel-package \
    fakeroot \
    devscripts \
    ccache \
    distcc \
    libssl-dev \
    libelf-dev \
    pkg-config \
    && rm -rf /var/lib/apt/lists/*
# Optimize build environment
ENV CCACHE_DIR=/build/.ccache
ENV CCACHE_MAXSIZE=10G
ENV CCACHE_COMPRESS=1
# NOTE: $(nproc) is not expanded inside ENV; pass the job count as a build
# argument so parallelism is resolved when the image is built.
ARG BUILD_JOBS=4
ENV MAKEFLAGS="-j${BUILD_JOBS}"
ENV DEB_BUILD_OPTIONS="parallel=${BUILD_JOBS}"
WORKDIR /build
```
#### Build Environment Optimization
```bash
# Build environment optimizations
build_optimizations:
ccache_integration:
- persistent_cache: "Cache compiled objects across builds"
- cache_compression: "Compress cache to save disk space"
- cache_cleanup: "Automatic cache cleanup policies"
parallel_compilation:
- make_jobs: "Use all available CPU cores"
- distcc_integration: "Distributed compilation across build farm"
- incremental_builds: "Reuse previous build artifacts"
dependency_optimization:
- pre_installed_deps: "Pre-install common dependencies"
- dependency_caching: "Cache dependency downloads"
- minimal_installs: "Install only required packages"
```
### Kernel-Specific Optimizations
#### Kernel Header Management
```bash
# Kernel header optimization
kernel_optimizations:
header_caching:
- pre_downloaded_headers: "Pre-download kernel headers"
- header_sharing: "Share headers across builds"
- version_specific_cache: "Cache headers by kernel version"
compilation_flags:
- optimize_flags: "-O2 -march=native -mtune=native"
- debug_flags: "Minimal debug information"
- size_optimization: "Optimize for module size"
parallel_module_builds:
- concurrent_modules: "Build multiple modules simultaneously"
- dependency_resolution: "Smart dependency ordering"
- resource_allocation: "Dynamic resource allocation"
```
### NVIDIA Driver Specific Optimizations
#### NVIDIA Build Optimizations
```bash
# NVIDIA-specific optimizations
nvidia_optimizations:
source_optimization:
- selective_compilation: "Compile only required components"
- module_filtering: "Build only needed kernel modules"
- debug_disabled: "Disable debug builds for production"
compilation_strategies:
- parallel_compilation: "Compile modules in parallel"
- incremental_builds: "Reuse previous compilation artifacts"
- optimized_flags: "Use NVIDIA-specific optimization flags"
resource_management:
- memory_optimization: "Optimize memory usage during compilation"
- cpu_affinity: "Bind compilation to specific CPU cores"
- io_optimization: "Optimize disk I/O patterns"
```
## Infrastructure Optimization
### Build Farm Architecture
#### Scalable Build Infrastructure
```yaml
# Build farm architecture
build_farm:
  build_nodes:
    high_performance:
      cpu: "32+ cores"
      memory: "64GB+ RAM"
      storage: "NVMe SSDs"
      purpose: "Complex modules (NVIDIA, VirtualBox)"
    standard_performance:
      cpu: "16 cores"
      memory: "32GB RAM"
      storage: "SSDs"
      purpose: "Common modules"
    rapid_build:
      cpu: "8 cores"
      memory: "16GB RAM"
      storage: "SSDs"
      purpose: "Simple modules"
  load_balancing:
    intelligent_routing: "Route builds based on complexity"
    resource_monitoring: "Monitor node resource usage"
    dynamic_scaling: "Scale nodes based on demand"
```
### Registry Performance
#### OCI Registry Optimization
```bash
# Registry performance optimization
registry_optimization:
storage_optimization:
- cdn_integration: "Use CDN for global distribution"
- compression: "Compress packages for faster downloads"
- caching_layers: "Cache frequently accessed packages"
download_optimization:
- parallel_downloads: "Download package parts in parallel"
- resume_capability: "Resume interrupted downloads"
- local_caching: "Cache packages locally"
index_optimization:
- efficient_indexing: "Optimize package index queries"
- metadata_caching: "Cache package metadata"
- search_optimization: "Optimize package search"
```
## Monitoring and Optimization
### Performance Monitoring
#### Build Performance Metrics
```bash
# Performance monitoring
performance_monitoring:
build_metrics:
- build_duration: "Track build time by module type"
- resource_usage: "Monitor CPU, memory, disk usage"
- success_rate: "Track build success/failure rates"
- queue_length: "Monitor build queue length"
optimization_metrics:
- cache_hit_rate: "Track ccache hit rates"
- parallel_efficiency: "Measure parallel build efficiency"
- resource_utilization: "Monitor resource utilization"
user_experience:
- download_speed: "Monitor package download speeds"
- installation_time: "Track module installation time"
- system_impact: "Measure impact on system performance"
```
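A sketch of how these per-build metrics might be aggregated; the record layout `(module, duration_seconds, ccache_hits, ccache_misses)` is an assumption for illustration:

```python
from statistics import mean

def summarize_builds(records):
    """Aggregate core build metrics from per-build records of the form
    (module, duration_seconds, ccache_hits, ccache_misses)."""
    durations = [r[1] for r in records]
    hits = sum(r[2] for r in records)
    misses = sum(r[3] for r in records)
    return {
        "mean_build_seconds": mean(durations),
        "cache_hit_rate": hits / (hits + misses) if (hits + misses) else 0.0,
        "builds": len(records),
    }
```

Tracking the cache hit rate over time is what makes the ccache optimizations above measurable rather than assumed.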
### Continuous Optimization
#### Optimization Strategies
```yaml
# Continuous optimization
optimization_strategies:
  build_optimization:
    profile_builds: "Profile build processes for bottlenecks"
    optimize_compilation: "Optimize compilation flags and processes"
    cache_optimization: "Continuously optimize caching strategies"
  infrastructure_optimization:
    hardware_upgrades: "Upgrade hardware based on performance data"
    software_optimization: "Optimize software stack and configurations"
    network_optimization: "Optimize network infrastructure"
  process_optimization:
    workflow_improvement: "Improve build workflows and processes"
    automation_enhancement: "Enhance automation and reduce manual steps"
    quality_optimization: "Optimize quality assurance processes"
```
## Realistic Performance Expectations
### Module-Specific Performance
#### Performance by Module Type
```bash
# Realistic performance expectations
performance_expectations:
simple_modules:
- v4l2loopback: "3-8 minutes"
- gpd_fan_kmod: "2-5 minutes"
- nct6687d: "3-6 minutes"
- factors: "Small codebase, few dependencies"
medium_modules:
- virtualbox: "12-20 minutes"
- broadcom_wl: "8-15 minutes"
- rtl8821ce: "6-12 minutes"
- factors: "Medium codebase, moderate dependencies"
complex_modules:
- nvidia_driver_535: "25-45 minutes"
- nvidia_driver_470: "20-35 minutes"
- nvidia_open: "15-25 minutes"
- factors: "Large codebase, many dependencies, complex compilation"
```
### Scalability Considerations
#### Scaling Strategies
```yaml
# Scaling considerations
scaling_strategies:
  horizontal_scaling:
    build_farm_expansion: "Add more build nodes"
    load_distribution: "Distribute load across nodes"
    geographic_distribution: "Distribute builds geographically"
  vertical_scaling:
    hardware_upgrades: "Upgrade individual node hardware"
    resource_optimization: "Optimize resource usage per node"
    software_optimization: "Optimize software stack"
  hybrid_scaling:
    dynamic_scaling: "Scale based on demand"
    resource_pooling: "Pool resources across nodes"
    intelligent_routing: "Route builds to optimal nodes"
```
## Implementation Timeline
### Phase 1: Basic Optimization (Weeks 1-4)
- [ ] Implement ccache integration
- [ ] Optimize build containers
- [ ] Set up basic performance monitoring
- [ ] Implement parallel compilation
### Phase 2: Advanced Optimization (Weeks 5-8)
- [ ] Deploy build farm architecture
- [ ] Implement distributed compilation
- [ ] Optimize registry performance
- [ ] Enhance monitoring and alerting
### Phase 3: Continuous Optimization (Weeks 9-12)
- [ ] Implement automated optimization
- [ ] Deploy advanced caching strategies
- [ ] Optimize for specific module types
- [ ] Establish performance governance
## Success Metrics
### Performance Metrics
```yaml
# Performance success metrics
success_metrics:
  build_performance:
    target: "90% of builds complete within target time"
    measurement: "Build completion time tracking"
    threshold: "Simple: <10min, Complex: <30min, NVIDIA: <45min"
  resource_efficiency:
    target: "80%+ resource utilization during builds"
    measurement: "CPU, memory, disk utilization"
    threshold: "Efficient resource usage without waste"
  user_experience:
    target: "Package downloads complete in <2 minutes"
    measurement: "Download speed and completion time"
    threshold: "Fast, reliable package distribution"
  scalability:
    target: "Support 100+ concurrent builds"
    measurement: "Build farm capacity and throughput"
    threshold: "Scalable infrastructure for growth"
```
## Conclusion
Performance optimization is critical for AKMODS success. The strategies outlined in this document provide a comprehensive approach to achieving the ambitious performance targets while maintaining quality and reliability.
Key success factors:
1. **Realistic Expectations**: Acknowledge that complex modules like NVIDIA drivers will take longer
2. **Infrastructure Investment**: Deploy appropriate hardware and software infrastructure
3. **Continuous Optimization**: Implement ongoing performance monitoring and optimization
4. **Scalable Architecture**: Design for growth and increased demand
5. **User Experience Focus**: Optimize for end-user experience, not just build performance
The target of <30 minutes for most modules is achievable with proper optimization, while complex modules may require more time. The key is providing users with fast, reliable access to pre-built packages while maintaining the flexibility of DKMS fallback for edge cases.

# Particle-OS AKMODS Implementation Plan
## Overview
This document provides a comprehensive implementation plan for AKMODS (Automated Kernel MODules) in Particle-OS, covering all phases from initial development to production deployment and maintenance. AKMODS will provide pre-built kernel module packages distributed via an OCI registry, with DKMS as a fallback for unsupported cases.
## Project Goals
### Primary Objectives
1. **Pre-built Package Distribution**: Create and distribute pre-compiled kernel module packages via OCI registry
2. **Atomic Operations**: Full integration with Particle-OS atomic transactions
3. **Hybrid Architecture**: AKMODS pre-built packages as primary, DKMS as fallback
4. **Automated Builds**: CI/CD pipeline for kernel module compilation
5. **Live System Support**: Install modules without rebooting
### Success Criteria
- [ ] AKMODS pre-built packages can be installed and removed atomically
- [ ] Pre-built modules are distributed via OCI registry
- [ ] Automated builds trigger on kernel updates
- [ ] Failed installations can be rolled back safely
- [ ] DKMS fallback works for unsupported modules
- [ ] Live system installation works without rebooting
## Phase 1: Core Infrastructure Development (Weeks 1-4)
### 1.1 AKMODS Package Format Design
#### DEB Package Structure
```bash
# Target package format
kmod-nvidia-535-5.15.0-56-generic_1.0_amd64.deb
├── control.tar.gz
│ ├── control # Package metadata and dependencies
│ ├── postinst # Post-installation script
│ ├── prerm # Pre-removal script
│ └── triggers # Package triggers
├── data.tar.gz
│ └── lib/modules/5.15.0-56-generic/extra/
│ ├── nvidia.ko # Pre-compiled kernel module
│ ├── nvidia-modeset.ko
│ └── nvidia-drm.ko
└── debian-binary
```
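The `control` file inside `control.tar.gz` carries the package metadata. A sketch that renders it for the `kmod-<driver>-<kernel>` naming shown above; the `Depends` line and field values are assumptions for illustration:

```python
def render_control(driver: str, kernel: str, version: str = "1.0",
                   arch: str = "amd64") -> str:
    """Render a DEBIAN control file for a pre-built kmod package.
    The Depends line (linux-image-<kernel>) is an illustrative assumption."""
    return "\n".join([
        f"Package: kmod-{driver}-{kernel}",
        f"Version: {version}",
        f"Architecture: {arch}",
        "Section: kernel",
        f"Depends: linux-image-{kernel}",
        f"Description: Pre-built {driver} kernel modules for kernel {kernel}",
    ]) + "\n"
```

Pinning the dependency to the exact kernel image is what lets APT refuse to install a module package onto a kernel it was not compiled for.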
#### Implementation Tasks
- [ ] Design DEB package metadata structure
- [ ] Create post-installation scripts for module loading
- [ ] Implement package triggers for kernel updates
- [ ] Define module dependency resolution
- [ ] Create package signing infrastructure
### 1.2 AKMODS Source Package System
#### Source Package Structure
```bash
akmod-nvidia-source/
├── debian/
│ ├── control # Build dependencies and package metadata
│ ├── rules # Build instructions for kernel modules
│ ├── akmods.conf # AKMODS-specific configuration
│ └── patches/ # Kernel-specific patches
├── src/
│ └── nvidia-driver/ # Kernel module source code
├── patches/
│ ├── kernel-5.15.patch
│ └── kernel-5.16.patch
└── akmods.yaml # Module configuration
```
#### Implementation Tasks
- [ ] Create AKMODS source package template
- [ ] Implement build dependency resolution
- [ ] Design kernel-specific patch system
- [ ] Create module configuration format
- [ ] Implement source package validation
### 1.3 AKMODS Build Daemon
#### Daemon Architecture
```bash
/usr/local/bin/akmods-apt-daemon
├── Core Components
│ ├── Build Manager # Orchestrates build processes
│ ├── Package Manager # Handles DEB package creation
│ ├── Registry Client # OCI registry integration
│ └── Event Monitor # Kernel update detection
├── Configuration
│ ├── /etc/akmods-apt/akmods.conf
│ └── /var/lib/akmods-apt/config/
└── Data Storage
├── /var/lib/akmods-apt/build-environments/
├── /var/lib/akmods-apt/cache/
└── /var/lib/akmods-apt/logs/
```
#### Implementation Tasks
- [ ] Design daemon architecture and interfaces
- [ ] Implement kernel update detection
- [ ] Create build queue management
- [ ] Implement package creation pipeline
- [ ] Add logging and monitoring capabilities
## Phase 2: Containerized Build System (Weeks 5-8)
### 2.1 AKMODS Build Containers
#### Base Build Container
```dockerfile
# Base AKMODS build container
FROM ubuntu:24.04
# Install build dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    linux-headers-generic \
    dkms \
    git \
    wget \
    debhelper \
    dh-dkms \
    kernel-package \
    fakeroot \
    devscripts \
    && rm -rf /var/lib/apt/lists/*
# Set up AKMODS environment
ENV AKMODS_AUTOINSTALL=yes
ENV AKMODS_BUILD_TIMEOUT=3600
ENV AKMODS_BUILD_JOBS=4
WORKDIR /build
```
#### Implementation Tasks
- [ ] Create base build container
- [ ] Implement kernel header management
- [ ] Add build dependency resolution
- [ ] Create build environment isolation
- [ ] Implement build timeout handling
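Build timeout handling might simply wrap each build in `timeout(1)`, which exits with status 124 when the limit is reached, a value the daemon can map to a retry-queue entry:

```bash
# Sketch: enforce AKMODS_BUILD_TIMEOUT around any build command. timeout(1)
# exits with status 124 when the limit is reached.
build_with_timeout() {
    timeout "${AKMODS_BUILD_TIMEOUT:-3600}" "$@"
}

# Intended use (image name and mounts are assumptions, not a final interface):
#   build_with_timeout docker run --rm -v "$src:/build" akmods-builder:24.04
```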
### 2.2 Build Orchestration System
#### Build Process Flow
```bash
# AKMODS build orchestration
akmods-apt-daemon build nvidia-driver-535 5.15.0-56-generic
├── 1. Environment Setup
│ ├── Create build container
│ ├── Mount source code
│ ├── Install kernel headers
│ └── Set up build environment
├── 2. Module Compilation
│ ├── Apply kernel patches
│ ├── Compile kernel modules using DKMS
│ ├── Sign modules (if required)
│ └── Generate module metadata
├── 3. Package Creation
│ ├── Create DEB package structure
│ ├── Add post-installation scripts
│ ├── Generate package manifest
│ └── Sign package
└── 4. Distribution
├── Upload to OCI registry
├── Update package index
└── Notify completion
```
#### Implementation Tasks
- [ ] Implement build container orchestration
- [ ] Create module compilation pipeline using DKMS
- [ ] Add kernel module signing support
- [ ] Implement DEB package creation
- [ ] Add build failure handling and retry logic
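The module-compilation step of the flow above could drive stock `dkms`; shown here as a dry run that prints the commands the orchestrator would execute inside the container (module, version, and kernel come from the build request):

```bash
# Sketch (dry run): the DKMS commands the orchestrator would execute inside
# the build container for one (module, version, kernel) build request.
dkms_build_cmds() {
    local module="$1" version="$2" kernel="$3"
    echo "dkms add -m $module -v $version"
    echo "dkms build -m $module -v $version -k $kernel"
}
```

The package-creation stage would then collect the built module from the DKMS tree (typically under `/var/lib/dkms/<module>/<version>/<kernel>/`) into the DEB layout shown in Phase 1.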
### 2.3 Parallel Build Management
#### Build Queue System
```bash
# Build queue management
/var/lib/akmods-apt/queue/
├── pending/ # Pending builds
├── running/ # Currently running builds
├── completed/ # Completed builds
├── failed/ # Failed builds
└── retry/ # Builds to retry
```
#### Implementation Tasks
- [ ] Design build queue system
- [ ] Implement parallel build management
- [ ] Add build priority handling
- [ ] Create build resource allocation
- [ ] Implement build failure recovery
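Because `mv(1)` within one filesystem is atomic, the queue transitions above can be made claim-safe even with parallel workers; a minimal sketch, assuming one file per job:

```bash
# Sketch: atomic queue-state transitions. mv(1) within one filesystem is
# atomic, so two workers cannot both claim the same job file.
QUEUE_DIR="${QUEUE_DIR:-/var/lib/akmods-apt/queue}"

transition() {
    local job="$1" from="$2" to="$3"
    mv "$QUEUE_DIR/$from/$job" "$QUEUE_DIR/$to/$job"
}
```

A worker would claim work with `transition "$job" pending running` and record the outcome by moving the file on to `completed`, `failed`, or `retry`.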
## Phase 3: OCI Registry Integration (Weeks 9-12)
### 3.1 OCI Registry Structure
#### Registry Organization
```bash
registry.particle-os.org/akmods/
├── nvidia-driver-535/
│ ├── 5.15.0-56-generic/
│ │ ├── kmod-nvidia-535-5.15.0-56-generic_1.0_amd64.deb
│ │ ├── kmod-nvidia-535-5.15.0-56-generic_1.0_amd64.deb.sha256
│ │ └── metadata.json
│ └── 5.15.0-55-generic/
│ ├── kmod-nvidia-535-5.15.0-55-generic_1.0_amd64.deb
│ └── metadata.json
├── virtualbox-dkms/
│ └── 6.1.38/
│ └── kmod-virtualbox-6.1.38-5.15.0-56-generic_1.0_amd64.deb
└── index/
├── modules.json # Module index
├── kernels.json # Kernel index
└── packages.json # Package index
```
#### Implementation Tasks
- [ ] Design OCI registry structure
- [ ] Implement package upload system
- [ ] Create package index management
- [ ] Add package verification and signing
- [ ] Implement registry access control
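A small sketch of how the daemon might address packages in this layout and produce the `.sha256` sidecars shown above (helper names are illustrative):

```bash
# Sketch: map a (module, kernel) pair onto the registry layout above and
# emit the .sha256 sidecar that sits next to each package.
REGISTRY="${REGISTRY:-registry.particle-os.org/akmods}"

package_ref() {
    echo "$REGISTRY/$1/$2"         # $1 = module, $2 = kernel release
}

publish_checksums() {
    sha256sum "$1" > "$1.sha256"   # $1 = path to the .deb
}
```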
### 3.2 Package Distribution System
#### Distribution Workflow
```bash
# Package distribution workflow
1. Build Completion
├── Build daemon completes module compilation
├── Creates DEB package
├── Signs package
└── Triggers upload
2. Registry Upload
├── Upload package to OCI registry
├── Update package index
├── Generate package metadata
└── Notify distribution system
3. System Installation
├── System detects new kernel
├── Queries registry for modules
├── Downloads pre-built packages
└── Installs via dpkg
```
#### Implementation Tasks
- [ ] Implement package upload to registry
- [ ] Create package index management
- [ ] Add package download system
- [ ] Implement package verification
- [ ] Create distribution notification system
### 3.3 Registry Client Integration
#### Client Architecture
```bash
# Registry client components
/usr/local/bin/akmods-apt-client
├── Registry Client
│ ├── Package Discovery # Find available packages
│ ├── Package Download # Download packages
│ ├── Package Verification # Verify package integrity
│ └── Cache Management # Manage local cache
├── Package Manager
│ ├── Installation # Install packages
│ ├── Removal # Remove packages
│ ├── Dependency Resolution # Resolve dependencies
│ └── Rollback # Rollback failed installations
└── Integration
├── apt-layer Integration # Integrate with apt-layer
├── Live System Support # Live system installation
└── Atomic Operations # Atomic transaction support
```
#### Implementation Tasks
- [ ] Create registry client
- [ ] Implement package discovery
- [ ] Add package download and verification
- [ ] Create cache management system
- [ ] Integrate with apt-layer atomic operations
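Cache-aware download plus verification could look like the following sketch; the cache path follows the daemon layout earlier in this plan, and the function name is illustrative:

```bash
# Sketch: cache-aware fetch with integrity verification before installation.
CLIENT_CACHE="${CLIENT_CACHE:-/var/lib/akmods-apt/cache}"

fetch_package() {
    local name="$1" url="$2" sha="$3"
    local path="$CLIENT_CACHE/$name"
    [ -f "$path" ] || curl -fsSL -o "$path" "$url"
    echo "$sha  $path" | sha256sum -c --quiet    # non-zero exit on mismatch
    echo "$path"
}
```

On success the returned path can be handed straight to `dpkg -i`; a non-zero exit aborts the transaction before anything touches the system.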
## Phase 4: apt-layer Integration (Weeks 13-16)
### 4.1 AKMODS Commands
#### New apt-layer Commands
```bash
# AKMODS-specific commands
--akmods-status # Show AKMODS module status
--akmods-list # List installed AKMODS modules
--akmods-install <module> # Install AKMODS module
--akmods-remove <module> # Remove AKMODS module
--akmods-rebuild <module> # Rebuild AKMODS module
--akmods-rebuild-all # Rebuild all AKMODS modules
--akmods-clean <module> # Clean AKMODS build environment
--akmods-logs <module> # Show AKMODS build logs
--akmods-rollback # Rollback failed AKMODS installation
--akmods-migrate-dkms # Migrate from DKMS to AKMODS
--akmods-fallback-dkms # Enable DKMS fallback for module
```
#### Implementation Tasks
- [ ] Add AKMODS commands to apt-layer.sh
- [ ] Implement command parsing and validation
- [ ] Create AKMODS operation functions
- [ ] Add error handling and rollback
- [ ] Integrate with existing apt-layer architecture
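Command parsing for the flags above might be a plain `case` dispatch in `apt-layer.sh`; the handlers here just echo the resolved operation:

```bash
# Sketch: flag dispatch for the AKMODS commands; real handlers would call
# the corresponding operation functions instead of echoing.
parse_akmods_cmd() {
    case "$1" in
        --akmods-status)      echo "status" ;;
        --akmods-list)        echo "list" ;;
        --akmods-install)     echo "install $2" ;;
        --akmods-remove)      echo "remove $2" ;;
        --akmods-rebuild)     echo "rebuild $2" ;;
        --akmods-rebuild-all) echo "rebuild-all" ;;
        *) echo "unknown AKMODS command: $1" >&2; return 2 ;;
    esac
}
```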
### 4.2 Atomic AKMODS Operations
#### Transaction Integration
```bash
# Atomic AKMODS operations
apt-layer --begin-transaction "akmods-install-nvidia"
├── Validate AKMODS module availability
├── Check system compatibility
├── Reserve disk space
└── Create transaction manifest
apt-layer --akmods-install nvidia-driver-535
├── Download AKMODS package from registry
├── Verify package integrity
├── Install package via dpkg
├── Update module database
└── Configure module loading
apt-layer --commit-transaction "akmods-install-nvidia"
├── Update system configuration
├── Update bootloader if needed
├── Clean up temporary files
└── Log successful installation
```
#### Implementation Tasks
- [ ] Integrate AKMODS with atomic transactions
- [ ] Implement transaction validation
- [ ] Add rollback mechanisms
- [ ] Create transaction logging
- [ ] Implement cleanup procedures
### 4.3 Live System AKMODS Support
#### Live Installation Process
```bash
# Live AKMODS installation
apt-layer --live-install kmod-nvidia-535
├── 1. Package Download
│ ├── Query registry for package
│ ├── Download package to live overlay
│ └── Verify package integrity
├── 2. Live Installation
│ ├── Install package in live overlay
│ ├── Load kernel modules
│ ├── Update module database
│ └── Configure module persistence
└── 3. Commit Changes
├── Commit live changes as new layer
├── Update system configuration
└── Clean up temporary files
```
#### Implementation Tasks
- [ ] Integrate AKMODS with live overlay system
- [ ] Implement live package installation
- [ ] Add module loading in live environment
- [ ] Create live change commit process
- [ ] Implement live rollback mechanisms
## Phase 5: Module Categories and Variants (Weeks 17-20)
### 5.1 Module Category Implementation
#### Common Modules
```yaml
common:
- v4l2loopback:
description: "Virtual video devices"
source: "https://github.com/umlaeute/v4l2loopback"
patches: ["kernel-5.15.patch"]
- gpd-fan-kmod:
description: "GPD Win Max fan control"
source: "https://github.com/redhat-arch/gpd-fan-kmod"
dependencies: ["i2c-dev"]
- nct6687d:
description: "AMD B550 chipset support"
source: "https://github.com/redhat-arch/nct6687d"
patches: ["amd-b550.patch"]
```
#### Implementation Tasks
- [ ] Define module category structure
- [ ] Create module source management
- [ ] Implement patch system
- [ ] Add dependency resolution
- [ ] Create module validation
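Patch selection against the `patches:` entries above could key on the kernel's major.minor series; a minimal sketch:

```bash
# Sketch: choose the patch for the target kernel's major.minor series,
# matching the patches/ naming used in the category definitions above.
select_patch() {
    local patch_dir="$1" kernel="$2" series patch
    series="$(echo "$kernel" | cut -d. -f1,2)"   # 5.15.0-56-generic -> 5.15
    patch="$patch_dir/kernel-$series.patch"
    [ -f "$patch" ] && echo "$patch"
}
```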
### 5.2 NVIDIA Module Support
#### NVIDIA Driver Variants
```yaml
nvidia:
- nvidia-driver-535:
description: "Closed proprietary drivers (RTX 40/30 series)"
source: "nvidia-driver-535"
supported_gpus: ["RTX 40", "RTX 30", "GTX 16"]
kernel_versions: ["5.15", "5.16", "5.17"]
- nvidia-driver-470:
description: "Legacy driver support (GTX 10/900 series)"
source: "nvidia-driver-470"
supported_gpus: ["GTX 10", "GTX 900", "GTX 700"]
kernel_versions: ["5.15", "5.16"]
- nvidia-open:
description: "Open source drivers"
source: "nvidia-open"
supported_gpus: ["RTX 50", "RTX 40", "RTX 30"]
kernel_versions: ["5.15", "5.16", "5.17"]
```
#### Implementation Tasks
- [ ] Create NVIDIA driver variants
- [ ] Implement GPU detection
- [ ] Add driver compatibility checking
- [ ] Create NVIDIA Prime integration
- [ ] Implement driver switching
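GPU detection could match `lspci` device lines against the `supported_gpus` tables above; the match strings here are illustrative, not a complete mapping:

```bash
# Sketch: pick a driver variant from an lspci(8) device line, following the
# supported_gpus tables above. Match strings are illustrative only.
pick_nvidia_variant() {
    case "$1" in
        *"RTX 40"*|*"RTX 30"*|*"GTX 16"*) echo "nvidia-driver-535" ;;
        *"GTX 10"*|*"GTX 9"*|*"GTX 7"*)   echo "nvidia-driver-470" ;;
        *)                                echo "none" ;;
    esac
}
```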
### 5.3 Gaming Variant Creation
#### Particle-OS Bazzite Gaming (AKMODS)
```json
{
"name": "particle-os-bazzite-gaming-akmods",
"base": "ubuntu-base/25.04",
"description": "Gaming-focused variant with AKMODS NVIDIA support",
"akmods_modules": [
"nvidia-driver-535",
"v4l2loopback",
"gpd-fan-kmod"
],
"packages": [
"steam",
"wine",
"lutris",
"gamemode",
"mangohud",
"proton-ge-custom"
],
"repositories": [
"nvidia",
"steam",
"lutris"
],
"configurations": [
"gaming-performance",
"nvidia-prime",
"steam-integration"
]
}
```
#### Implementation Tasks
- [ ] Create gaming variant definitions
- [ ] Implement variant building system
- [ ] Add gaming optimizations
- [ ] Create Steam integration
- [ ] Implement performance tuning
## Phase 6: Testing and Validation (Weeks 21-24)
### 6.1 Automated Testing Framework
#### Test Categories
```bash
# AKMODS testing framework
tests/
├── unit/
│ ├── package-creation.test
│ ├── build-process.test
│ └── registry-client.test
├── integration/
│ ├── akmods-install.test
│ ├── akmods-remove.test
│ └── akmods-rebuild.test
├── system/
│ ├── live-installation.test
│ ├── atomic-operations.test
│ └── rollback-recovery.test
└── performance/
├── build-performance.test
├── installation-speed.test
└── memory-usage.test
```
#### Implementation Tasks
- [ ] Create unit test framework
- [ ] Implement integration tests
- [ ] Add system-level tests
- [ ] Create performance benchmarks
- [ ] Implement automated test execution
### 6.2 CI/CD Pipeline
#### Pipeline Stages
```yaml
# AKMODS CI/CD pipeline
stages:
- build:
- Build AKMODS packages
- Run unit tests
- Create build artifacts
- test:
- Run integration tests
- Execute system tests
- Performance benchmarking
- package:
- Create DEB packages
- Sign packages
- Upload to registry
- deploy:
- Deploy to test environment
- Run acceptance tests
- Deploy to production
```
#### Implementation Tasks
- [ ] Set up CI/CD infrastructure
- [ ] Create automated build pipeline
- [ ] Implement test automation
- [ ] Add deployment automation
- [ ] Create monitoring and alerting
### 6.3 Validation and Quality Assurance
#### Quality Gates
```bash
# Quality assurance checklist
- [ ] All unit tests pass
- [ ] Integration tests successful
- [ ] System tests completed
- [ ] Performance benchmarks met
- [ ] Security scan passed
- [ ] Documentation updated
- [ ] Code review completed
```
#### Implementation Tasks
- [ ] Define quality gates
- [ ] Implement automated validation
- [ ] Create security scanning
- [ ] Add performance monitoring
- [ ] Implement quality reporting
## Phase 7: Production Deployment (Weeks 25-28)
### 7.1 Production Infrastructure
#### Infrastructure Components
```bash
# Production infrastructure
infrastructure/
├── registry/
│ ├── registry.particle-os.org
│ ├── Load balancer
│ └── Storage backend
├── build-farm/
│ ├── Build servers
│ ├── Build containers
│ └── Build storage
├── monitoring/
│ ├── Build monitoring
│ ├── Registry monitoring
│ └── System monitoring
└── backup/
├── Registry backup
├── Build backup
└── Configuration backup
```
#### Implementation Tasks
- [ ] Set up production registry
- [ ] Deploy build infrastructure
- [ ] Implement monitoring systems
- [ ] Create backup procedures
- [ ] Set up disaster recovery
### 7.2 Documentation and Training
#### Documentation Structure
```bash
# Documentation structure
docs/
├── user-guide/
│ ├── installation.md
│ ├── usage.md
│ └── troubleshooting.md
├── developer-guide/
│ ├── architecture.md
│ ├── development.md
│ └── contribution.md
├── admin-guide/
│ ├── deployment.md
│ ├── maintenance.md
│ └── monitoring.md
└── api-reference/
├── commands.md
├── configuration.md
└── integration.md
```
#### Implementation Tasks
- [ ] Create user documentation
- [ ] Write developer guides
- [ ] Create admin documentation
- [ ] Write API reference
- [ ] Create training materials
### 7.3 Community and Support
#### Support Infrastructure
```bash
# Support infrastructure
support/
├── issue-tracking/
│ ├── Bug reports
│ ├── Feature requests
│ └── Support tickets
├── community/
│ ├── Forums
│ ├── Chat channels
│ └── Mailing lists
└── resources/
├── FAQ
├── Knowledge base
└── Video tutorials
```
#### Implementation Tasks
- [ ] Set up issue tracking
- [ ] Create community channels
- [ ] Build knowledge base
- [ ] Create support procedures
- [ ] Establish community guidelines
## Timeline and Milestones
### Week 1-4: Core Infrastructure
- [ ] AKMODS package format design
- [ ] Source package system implementation
- [ ] Build daemon architecture
- [ ] Basic build process
### Week 5-8: Containerized Builds
- [ ] Build container implementation
- [ ] Build orchestration system
- [ ] Parallel build management
- [ ] Build failure handling
### Week 9-12: OCI Registry Integration
- [ ] Registry structure design
- [ ] Package distribution system
- [ ] Registry client implementation
- [ ] Package verification
### Week 13-16: apt-layer Integration
- [ ] AKMODS commands implementation
- [ ] Atomic operations integration
- [ ] Live system support
- [ ] Transaction management
### Week 17-20: Module Categories
- [ ] Common modules implementation
- [ ] NVIDIA module support
- [ ] Gaming variant creation
- [ ] Module validation
### Week 21-24: Testing and Validation
- [ ] Automated testing framework
- [ ] CI/CD pipeline implementation
- [ ] Quality assurance procedures
- [ ] Performance optimization
### Week 25-28: Production Deployment
- [ ] Production infrastructure setup
- [ ] Documentation completion
- [ ] Community and support setup
- [ ] Production launch
## Risk Assessment and Mitigation
### Technical Risks
#### Risk: Kernel Compatibility Patching
- **Impact**: High
- **Probability**: High
- **Mitigation**: Implement hybrid patch management strategy (see `patch-strategy.md`)
- Community-driven patch sourcing as primary approach
- Dedicated team for critical patches
- Experimental automated patch generation
- Comprehensive patch database and validation framework
#### Risk: Build Process Complexity
- **Impact**: High
- **Probability**: Medium
- **Mitigation**: Start with simple modules, gradually increase complexity
#### Risk: OCI Registry Performance
- **Impact**: Medium
- **Probability**: Low
- **Mitigation**: Implement caching and CDN distribution
#### Risk: Signing Key Management
- **Impact**: High
- **Probability**: Medium
- **Mitigation**: Comprehensive security framework (see `security-signing.md`)
- HSM-based key management
- Secure boot integration with MOK
- Key rotation and monitoring procedures
- Incident response and recovery plans
#### Risk: Build Performance Targets
- **Impact**: Medium
- **Probability**: Medium
- **Mitigation**: Performance optimization strategy (see `performance-optimization.md`)
- Realistic performance expectations by module type
- Build farm architecture with specialized nodes
- Caching and parallel compilation optimization
- Continuous performance monitoring and improvement
### Operational Risks
#### Risk: Build Infrastructure Costs
- **Impact**: Medium
- **Probability**: Medium
- **Mitigation**: Optimize build processes and use efficient infrastructure
#### Risk: Community Adoption
- **Impact**: High
- **Probability**: Medium
- **Mitigation**: Provide clear migration path and comprehensive documentation
## Success Metrics
### Technical Metrics
- **Build Success Rate**: >95% successful builds
- **Installation Success Rate**: >98% successful installations
- **Build Time**:
- Simple modules: <10 minutes
- Complex modules: <30 minutes
- NVIDIA drivers: <45 minutes
- **Package Size**: <50MB per module package
- **Patch Availability**: >95% of kernel updates have patches within 48 hours
- **Security Compliance**: 100% of packages signed and verified
### Operational Metrics
- **System Uptime**: >99.9% registry availability
- **Response Time**: <5 seconds for package queries
- **User Satisfaction**: >90% positive feedback
- **Community Growth**: >100 active contributors
## Supporting Documentation
This implementation plan is supported by several detailed technical documents that address specific challenges and implementation details:
### Core Technical Documents
- **`patch-strategy.md`**: Comprehensive strategy for kernel compatibility patching, including community sourcing, automated generation, and dedicated team approaches
- **`security-signing.md`**: Detailed security framework covering MOK management, package signing, key rotation, and secure boot integration
- **`performance-optimization.md`**: Performance optimization strategies, realistic build time expectations, and infrastructure scaling approaches
### Additional Resources
- **`nvidia.md`**: NVIDIA driver preparation and management strategies
- **`run-file-processing.md`**: Technical guide for processing NVIDIA .run files into AKMODS packages
- **`patch-management.md`**: Detailed patch management and maintenance procedures
- **`dkms-integration.md`**: DKMS role clarification and hybrid architecture details
## Conclusion
This implementation plan provides a comprehensive roadmap for AKMODS in Particle-OS. By following this phased approach, we can ensure that each component is properly developed, tested, and integrated before moving to the next phase.
The key to success is maintaining focus on the core objectives while ensuring that each phase builds upon the previous one. Regular testing and validation throughout the development process will help identify and address issues early.
### Addressing Critical Challenges
The plan directly addresses the major challenges identified in the review:
1. **Kernel Compatibility Patching**: The hybrid patch management strategy provides multiple approaches to handle this critical bottleneck
2. **Signing Key Management**: Comprehensive security framework ensures proper key management and secure boot integration
3. **Build Performance**: Realistic performance expectations and optimization strategies for different module types
4. **Live System Support**: Clear distinction between temporary and permanent installations
5. **Source Package System**: The plan assumes the .run file processing pipeline is already defined and working
With proper execution of this plan, Particle-OS will have a robust, scalable, and user-friendly AKMODS system that provides the same level of kernel module management as uBlue-OS while maintaining compatibility with Ubuntu/Debian systems and Particle-OS's atomic architecture.
The hybrid approach of AKMODS pre-built packages with DKMS fallback ensures maximum compatibility and reliability, providing users with the best of both worlds: fast, reliable pre-built packages for common modules, and the flexibility of DKMS for edge cases and custom modules.

---

`src/akmods-apt/readme.md` (new file):
# Particle-OS Kernel Module Management: AKMODS Implementation
## Overview
This document outlines Particle-OS's implementation of **AKMODS (Automated Kernel MOdules)** for Ubuntu/Debian-based distributions, providing automated kernel module building and management similar to uBlue-OS's approach but adapted for the APT ecosystem. AKMODS provides pre-built kernel module packages distributed via OCI registry, with DKMS as a fallback for unsupported cases.
**Note: Particle-OS is currently focusing on DKMS integration for kernel module management. This AKMODS documentation is preserved for future consideration when infrastructure and resources allow for pre-built package distribution.**
## Why AKMODS for Particle-OS?
### Advantages of AKMODS Approach
Particle-OS has designed **AKMODS** as an advanced kernel module management system for future implementation. The advantages include:
1. **Package Integration**: Creates proper DEB packages for kernel modules
2. **Distribution Consistency**: Aligns with Particle-OS's atomic package management
3. **Pre-built Caching**: Enables caching of pre-built kernel module packages
4. **CI/CD Integration**: Automated builds for new kernels in container registry
5. **Atomic Operations**: Full integration with Particle-OS's atomic transaction system
### AKMODS vs DKMS Comparison
#### AKMODS (Particle-OS Implementation)
**Strengths:**
- **Pre-built Packages**: Creates and distributes pre-compiled DEB packages
- **OCI Distribution**: Packages distributed via container registry
- **Atomic Operations**: Full integration with Particle-OS atomic transactions
- **Live System Support**: Install modules without rebooting
- **Hybrid Architecture**: AKMODS as primary, DKMS as fallback
#### DKMS (Traditional Ubuntu Approach)
**Strengths:**
- **Mature Framework**: 20+ years of development and refinement
- **Automatic Rebuilding**: Automatically rebuilds modules for new kernels
- **Wide Adoption**: Industry standard for Ubuntu/Debian kernel modules
- **Flexibility**: Handles edge cases and custom modules
#### Hybrid Approach
Particle-OS has designed a **hybrid approach** where:
- **AKMODS** would provide pre-built packages for common modules (primary)
- **DKMS** would serve as fallback for unsupported or custom modules
- **Build-time DKMS** would be used within containers to compile modules
- **Runtime packages** would be pre-compiled and would not require DKMS at runtime
**Current Status**: Particle-OS is implementing DKMS integration first, with AKMODS planned for future development when infrastructure resources are available.
## Particle-OS AKMODS Architecture
### Core Components
#### 1. AKMODS Source Package System
```bash
# AKMODS source package structure
akmod-nvidia-source/
├── debian/
│ ├── control # Build dependencies and package metadata
│ ├── rules # Build instructions for kernel modules
│ └── akmods.conf # AKMODS-specific configuration
├── src/
│ └── nvidia-driver/ # Kernel module source code
└── patches/ # Kernel-specific patches
```
#### 2. AKMODS Build Daemon
```bash
# AKMODS daemon service
/etc/systemd/system/akmods-apt.service
/usr/local/bin/akmods-apt-daemon
/var/lib/apt-layer/akmods/
├── build-environments/ # Containerized build environments
├── cache/ # Pre-built module cache
└── logs/ # Build logs and status
```
#### 3. AKMODS Package Management
```bash
# AKMODS package installation
apt-layer --akmods-install nvidia-driver-535
# AKMODS package removal
apt-layer --akmods-remove virtualbox-dkms
# AKMODS status and management
apt-layer --akmods-status
apt-layer --akmods-rebuild-all
```
### AKMODS Integration with apt-layer
#### 1. AKMODS Commands
```bash
# AKMODS-specific commands
--akmods-status # Show AKMODS module status
--akmods-list # List installed AKMODS modules
--akmods-install <module> # Install AKMODS module
--akmods-remove <module> # Remove AKMODS module
--akmods-rebuild <module> # Rebuild AKMODS module
--akmods-rebuild-all # Rebuild all AKMODS modules
--akmods-clean <module> # Clean AKMODS build environment
--akmods-logs <module> # Show AKMODS build logs
--akmods-rollback # Rollback failed AKMODS installation
--akmods-fallback-dkms # Enable DKMS fallback for module
--akmods-migrate-dkms # Migrate from DKMS to AKMODS
```
#### 2. AKMODS Configuration System
```json
{
"akmods_enabled": true,
"auto_rebuild": true,
"build_environment": "container",
"kernel_headers_auto": true,
"rollback_on_failure": true,
"log_level": "info",
"build_timeout": 3600,
"max_parallel_builds": 2,
"cache_enabled": true,
"oci_registry": "registry.particle-os.org/akmods"
}
```
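Reading settings out of this configuration is straightforward with `jq` (already a dependency of the repo's tooling); the default path follows the daemon configuration layout in this document:

```bash
# Sketch: read one setting from the JSON configuration above with jq.
akmods_conf_get() {
    jq -r ".$1" "${AKMODS_CONF:-/etc/akmods-apt/akmods.conf}"
}
```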
## AKMODS Implementation Strategy
### Phase 1: Core AKMODS Infrastructure
#### 1.1 AKMODS Package Format
```bash
# AKMODS DEB package structure
kmod-nvidia-535-5.15.0-56-generic_1.0_amd64.deb
├── control.tar.gz
│ ├── control # Package metadata
│ ├── postinst # Post-installation script
│ └── prerm # Pre-removal script
├── data.tar.gz
│ └── lib/modules/5.15.0-56-generic/extra/
│ └── nvidia.ko # Compiled kernel module
└── debian-binary
```
#### 1.2 AKMODS Build Process
```bash
# AKMODS build workflow
1. Source Package Extraction
├── Extract module source code
├── Apply kernel-specific patches
└── Set up build environment
2. Kernel Header Management
├── Install kernel headers for target kernel
├── Verify header compatibility
└── Set up build dependencies
3. Module Compilation
├── Compile module against kernel headers
├── Apply kernel module signing
└── Generate module metadata
4. Package Creation
├── Create DEB package structure
├── Add post-installation scripts
└── Generate package manifest
```
### Phase 2: Containerized AKMODS Builds
#### 2.1 AKMODS Build Containers
```dockerfile
# AKMODS build container
FROM ubuntu:24.04
# Install build dependencies
RUN apt-get update && apt-get install -y \
build-essential \
linux-headers-generic \
dkms \
git \
wget \
debhelper \
dh-dkms
# Set up AKMODS environment
ENV AKMODS_AUTOINSTALL=yes
ENV AKMODS_BUILD_TIMEOUT=3600
WORKDIR /build
```
#### 2.2 AKMODS Build Orchestration
```bash
# AKMODS build orchestration
akmods-apt-daemon build nvidia-driver-535 5.15.0-56-generic
├── Create build container
├── Mount source and kernel headers
├── Execute build process
├── Extract compiled modules
├── Create DEB package
└── Upload to OCI registry
```
### Phase 3: AKMODS Distribution System
#### 3.1 OCI Registry Integration
```bash
# AKMODS OCI registry structure
registry.particle-os.org/akmods/
├── nvidia-driver-535/
│ ├── 5.15.0-56-generic/
│ │ └── kmod-nvidia-535-5.15.0-56-generic_1.0_amd64.deb
│ └── 5.15.0-55-generic/
│ └── kmod-nvidia-535-5.15.0-55-generic_1.0_amd64.deb
└── virtualbox-dkms/
└── 6.1.38/
└── kmod-virtualbox-6.1.38-5.15.0-56-generic_1.0_amd64.deb
```
#### 3.2 AKMODS Package Distribution
```bash
# AKMODS package distribution workflow
1. Build Trigger
├── New kernel detected
├── AKMODS daemon triggered
└── Build queue updated
2. Package Building
├── Containerized build process
├── Module compilation and signing
└── DEB package creation
3. Registry Upload
├── Upload to OCI registry
├── Update package index
└── Notify distribution system
4. System Installation
├── Download from registry
├── Install via dpkg
└── Update module database
```
## AKMODS Module Categories
### 1. Common Modules
```yaml
common:
- v4l2loopback: Virtual video devices
- gpd-fan-kmod: GPD Win Max fan control
- nct6687d: AMD B550 chipset support
- ryzen-smu: AMD Ryzen SMU access
- system76: System76 laptop drivers
- zenergy: AMD energy monitoring
```
### 2. NVIDIA Modules
```yaml
nvidia:
- nvidia-driver-535: Closed proprietary drivers
- nvidia-driver-470: Legacy driver support
- nvidia-open: Open source drivers
```
### 3. Virtualization Modules
```yaml
virtualization:
- virtualbox-dkms: VirtualBox kernel modules
- vmware-dkms: VMware kernel modules
- kvm-dkms: KVM virtualization modules
```
### 4. Storage Modules
```yaml
storage:
- zfs-dkms: OpenZFS file system
- btrfs-dkms: Btrfs file system
- raid-dkms: RAID controller modules
```
## AKMODS Integration with Particle-OS Features
### 1. Atomic AKMODS Operations
```bash
# Atomic AKMODS installation
apt-layer --begin-transaction "akmods-install-nvidia"
apt-layer --akmods-install nvidia-driver-535
apt-layer --commit-transaction "akmods-install-nvidia"
# Rollback on failure
apt-layer --rollback-transaction "akmods-install-nvidia"
```
### 2. Live System AKMODS Support
```bash
# Install AKMODS modules on live system
apt-layer --live-install kmod-nvidia-535
# Commit live AKMODS changes
apt-layer --live-commit "Add NVIDIA AKMODS support"
```
### 3. OCI Export with AKMODS
```bash
# Export AKMODS-enabled layers
apt-layer --oci-export nvidia-gaming/24.04 particle-os/nvidia-gaming:latest
# Import AKMODS modules from registry
apt-layer --oci-import particle-os/nvidia-gaming:latest
```
## AKMODS Gaming Variants
### Particle-OS Bazzite Gaming (AKMODS)
```json
{
"name": "particle-os-bazzite-gaming-akmods",
"base": "ubuntu-base/25.04",
"description": "Gaming-focused variant with AKMODS NVIDIA support",
"akmods_modules": [
"nvidia-driver-535",
"v4l2loopback",
"gpd-fan-kmod"
],
"packages": [
"steam",
"wine",
"lutris",
"gamemode",
"mangohud"
],
"repositories": [
"nvidia",
"steam"
]
}
```
### Particle-OS Corona Gaming (AKMODS)
```json
{
"name": "particle-os-corona-gaming-akmods",
"base": "ubuntu-base/24.04",
"description": "KDE Plasma gaming variant with AKMODS NVIDIA support",
"akmods_modules": [
"nvidia-driver-535",
"v4l2loopback"
],
"packages": [
"kde-plasma-desktop",
"steam",
"wine",
"gamemode"
],
"repositories": [
"nvidia",
"steam"
]
}
```
## AKMODS vs uBlue-OS Comparison
### uBlue-OS AKMODS Strategy
- **Pre-built RPMs**: Caches pre-built akmod RPMs in container registry
- **Kernel Flavor Support**: Multiple kernel variants (standard, zen, bazzite)
- **Module Categories**: Common, nvidia, zfs, and specialized modules
- **CI/CD Pipeline**: Automated builds for new kernels
### Particle-OS AKMODS Strategy
- **Pre-built DEBs**: Caches pre-built akmod DEB packages in OCI registry
- **Kernel Variant Support**: Ubuntu kernel variants (generic, lowlatency, zen)
- **Module Categories**: Common, nvidia, virtualization, storage modules
- **CI/CD Pipeline**: Automated builds for new Ubuntu kernels
- **Atomic Integration**: Full integration with Particle-OS atomic operations
## Benefits of Particle-OS AKMODS
### 1. **Package Integration**
- Creates proper DEB packages for kernel modules
- Integrates with apt/dpkg package management
- Enables package distribution and caching
### 2. **Atomic Operations**
- All AKMODS operations are atomic with rollback support
- Failed installations don't break the immutable system
- Transaction-based approach ensures system stability
### 3. **OCI Distribution**
- Pre-built modules distributed via OCI registry
- Reduces build time on individual systems
- Enables enterprise deployment scenarios
### 4. **Live System Support**
- Install AKMODS modules without rebooting
- Live overlay system for temporary changes
- Commit changes as new layers
### 5. **Enterprise Features**
- Multi-tenant AKMODS module management
- Centralized AKMODS policy enforcement
- Automated AKMODS compliance reporting
## Migration from DKMS to AKMODS
### For Existing DKMS Users
```bash
# Migrate from DKMS to AKMODS
apt-layer --akmods-migrate-dkms
# Verify migration
apt-layer --akmods-status
# Remove old DKMS modules
apt-layer --dkms-remove-all
```
### For Developers
```bash
# Create AKMODS-enabled variants
apt-layer create-variant nvidia-gaming-akmods \
--base ubuntu-base/24.04 \
--akmods nvidia-driver-535 v4l2loopback \
--add steam wine
# Export as OCI image
apt-layer --oci-export nvidia-gaming-akmods/24.04 particle-os/nvidia-gaming-akmods:latest
```
## Future Enhancements
### 1. **Automated AKMODS Testing**
- Automated testing of AKMODS modules in containers
- Integration testing with different kernel versions
- Performance benchmarking of AKMODS modules
### 2. **Advanced NVIDIA Features**
- CUDA support for machine learning workloads
- Multi-GPU support for advanced gaming setups
- NVIDIA RTX features and ray tracing support
### 3. **Enterprise AKMODS Support**
- Corporate AKMODS module management
- Centralized AKMODS policy enforcement
- Automated AKMODS compliance reporting
## Current Implementation Status
Particle-OS is currently focusing on **DKMS integration** for kernel module management, which provides:
- **Immediate Availability**: Mature, proven solution ready for implementation
- **Ubuntu Native**: Deep integration with Ubuntu/Debian ecosystem
- **Lower Complexity**: No need for complex build infrastructure
- **Atomic Integration**: Full integration with Particle-OS atomic operations
- **User Familiarity**: Users already understand and trust DKMS
## Future AKMODS Implementation
The AKMODS design documented here represents Particle-OS's **future vision** for advanced kernel module management. When infrastructure resources and user demand align, AKMODS could provide:
- **Pre-built Packages**: Faster installation and better caching
- **OCI Distribution**: Enterprise-grade package distribution
- **Advanced Automation**: CI/CD pipeline for kernel module builds
- **Enhanced User Experience**: Simplified module management
## Conclusion
Particle-OS's AKMODS design provides a comprehensive roadmap for advanced kernel module management that combines the best aspects of uBlue-OS's akmods approach with Particle-OS's unique Ubuntu-based immutable architecture and atomic operations.
While currently focusing on DKMS integration, this AKMODS documentation preserves the technical vision and implementation details for future development when the project is ready to scale to pre-built package distribution.
This approach ensures that Particle-OS can start with proven, reliable kernel module management while maintaining a clear path toward advanced features that could compete effectively with uBlue-OS in the desktop gaming and professional workstation markets.
## Related Documentation
### Core Technical Documents
- [AKMODS Implementation Plan](plan.md) - Comprehensive 28-week implementation roadmap
- [Patch Management Strategy](patch-strategy.md) - Kernel compatibility patching approaches
- [Security and Signing](security-signing.md) - MOK management and package signing
- [Performance Optimization](performance-optimization.md) - Build optimization and scaling
- [DKMS Integration](dkms-integration.md) - Hybrid architecture details
### Additional Resources
- [NVIDIA Driver Guide](nvidia.md) - NVIDIA driver preparation and management
- [Run File Processing](run-file-processing.md) - NVIDIA .run file processing pipeline
- [Patch Management](patch-management.md) - Detailed patch maintenance procedures
### External References
- [uBlue-OS Kernel Analysis](../docs/ublue-os-kernel-analysis.md)
- [Particle-OS Main Documentation](../docs/README.md)

# NVIDIA .run File to AKMODS Conversion Process
## Overview
This document provides a ***theoretical*** but detailed technical explanation of how AKMODS would process official NVIDIA .run files to create proper kernel module packages for Particle-OS.
## NVIDIA .run File Structure
### File Contents
```bash
# NVIDIA .run file structure
NVIDIA-Linux-x86_64-535.154.05.run
├── Header (Self-extracting script)
├── Archive (Compressed driver components)
│ ├── kernel/ # Kernel modules
│ │ ├── nvidia.ko
│ │ ├── nvidia-modeset.ko
│ │ ├── nvidia-drm.ko
│ │ ├── nvidia-uvm.ko
│ │ └── nvidia-peermem.ko
│ ├── lib/ # User-space libraries
│ │ ├── libGL.so.1
│ │ ├── libnvidia-glcore.so.535.154.05
│ │ └── ...
│ ├── bin/ # Utilities
│ │ ├── nvidia-smi
│ │ ├── nvidia-settings
│ │ └── nvidia-xconfig
│ └── etc/ # Configuration files
│ ├── modprobe.d/
│ └── udev/rules.d/
└── Footer (Installation scripts)
```
## AKMODS .run File Processing Pipeline
### Phase 1: .run File Analysis and Extraction
#### Step 1: Download and Verification
```bash
# AKMODS download and verification process
akmods-apt-daemon download-nvidia-run 535.154.05
├── 1. Fetch .run file from NVIDIA servers
│ ├── URL: https://us.download.nvidia.com/XFree86/Linux-x86_64/535.154.05/
│ ├── File: NVIDIA-Linux-x86_64-535.154.05.run
│ └── Size: ~200-300MB
├── 2. Verify file integrity
│ ├── Check SHA256 checksum
│ ├── Verify file size
│ └── Validate file format
└── 3. Store in AKMODS cache
├── /var/lib/akmods-apt/cache/nvidia-runs/
└── NVIDIA-Linux-x86_64-535.154.05.run
```
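The verification step above can be sketched as a small shell helper. Since this pipeline is theoretical, `verify_run_checksum` and `cache_run_file` are illustrative names, not existing AKMODS commands:

```shell
# Illustrative sketch of the download-verification step: refuse to cache
# a .run file whose SHA256 does not match the expected value.
verify_run_checksum() {
    local file="$1" expected="$2" actual
    [ -f "$file" ] || { echo "missing: $file" >&2; return 1; }
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file"
    else
        echo "checksum mismatch: $file" >&2
        return 1
    fi
}

# Only copy the .run file into the cache once it verifies.
cache_run_file() {
    local file="$1" expected="$2" cache_dir="${3:-/var/lib/akmods-apt/cache/nvidia-runs}"
    verify_run_checksum "$file" "$expected" >/dev/null || return 1
    mkdir -p "$cache_dir" && cp "$file" "$cache_dir/"
}
```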
#### Step 2: .run File Extraction
```bash
# AKMODS .run file extraction
akmods-apt-daemon extract-nvidia-run 535.154.05
├── 1. Create extraction environment
│ ├── Temporary directory: /tmp/akmods-nvidia-535/
│ ├── Mount point: /tmp/akmods-nvidia-535/mount/
│ └── Output directory: /tmp/akmods-nvidia-535/extracted/
├── 2. Extract .run file contents
│ ├── Execute: ./NVIDIA-Linux-x86_64-535.154.05.run --extract-only
│ ├── Extract to: /tmp/akmods-nvidia-535/extracted/
│ └── Preserve directory structure
├── 3. Analyze extracted contents
│ ├── Identify kernel modules
│ ├── Map user-space libraries
│ ├── Extract configuration files
│ └── Generate component manifest
└── 4. Clean up extraction environment
├── Remove temporary files
├── Preserve extracted contents
└── Update cache index
```
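The "generate component manifest" step amounts to classifying extracted files by role. A minimal sketch, assuming `--extract-only` has already populated a directory (`generate_component_manifest` is a hypothetical helper):

```shell
# Classify files in an already-extracted .run tree by their role in the
# driver package and emit a simple key=value manifest.
generate_component_manifest() {
    local extracted="$1"
    local kmods libs bins
    kmods=$(find "$extracted" -type f -name '*.ko' | wc -l)
    libs=$(find "$extracted" -type f -name '*.so*' | wc -l)
    bins=$(find "$extracted" -type f -path '*/bin/*' | wc -l)
    printf 'kernel_modules=%s\nuser_libraries=%s\nutilities=%s\n' \
        "$kmods" "$libs" "$bins"
}
```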
### Phase 2: Kernel Module Processing
#### Step 1: Module Analysis and Compatibility
```bash
# AKMODS kernel module analysis
akmods-apt-daemon analyze-nvidia-modules 535.154.05
├── 1. Identify kernel modules
│ ├── nvidia.ko (Main driver module)
│ ├── nvidia-modeset.ko (Mode setting)
│ ├── nvidia-drm.ko (DRM integration)
│ ├── nvidia-uvm.ko (Unified Virtual Memory)
│ └── nvidia-peermem.ko (Peer memory)
├── 2. Analyze module dependencies
│ ├── Check kernel version requirements
│ ├── Identify symbol dependencies
│ ├── Map module interdependencies
│ └── Generate dependency graph
├── 3. Compatibility assessment
│ ├── Test against target kernel versions
│ ├── Identify compatibility issues
│ ├── Generate compatibility matrix
│ └── Create patch requirements
└── 4. Generate module metadata
├── Module descriptions
├── Version information
├── Hardware compatibility
└── Kernel version support
```
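The module interdependencies above form a small DAG, and coreutils `tsort` can turn an edge list into a valid load order. A sketch, with the edge list encoding the dependencies described in this document:

```shell
# Sketch of the dependency-graph step. Each input line is
# "dependency dependent"; tsort prints a load order, dependencies first.
nvidia_module_load_order() {
    tsort <<'EOF'
nvidia nvidia-modeset
nvidia nvidia-uvm
nvidia nvidia-peermem
nvidia nvidia-drm
nvidia-modeset nvidia-drm
EOF
}
```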
#### Step 2: Kernel Compatibility Patching
```bash
# AKMODS kernel compatibility patching
akmods-apt-daemon patch-nvidia-modules 535.154.05 6.8.0-25-generic
├── 1. Identify required patches
│ ├── Kernel version: 6.8.0-25-generic
│ ├── Driver version: 535.154.05
│ ├── Check patch database
│ └── Generate patch list
├── 2. Apply kernel compatibility patches
│ ├── Patch: kernel-6.8.patch
│ ├── Patch: nvidia-535-kernel-6.8.patch
│ ├── Patch: drm-api-updates.patch
│ └── Verify patch application
├── 3. Module-specific patches
│ ├── nvidia.ko: kernel-api-changes.patch
│ ├── nvidia-drm.ko: drm-framework-updates.patch
│ ├── nvidia-uvm.ko: memory-management.patch
│ └── nvidia-peermem.ko: peer-memory-api.patch
└── 4. Validate patched modules
├── Check module compilation
├── Verify symbol resolution
├── Test module loading
└── Generate validation report
```
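The "identify required patches" step is essentially glob matching of the target kernel against a patch database. A hedged sketch, using a hypothetical `kernel-glob:patch-file` rule format:

```shell
# Hypothetical patch-database lookup: every rule whose kernel glob
# matches the target kernel contributes its patch to the patch list.
select_patches() {
    local kernel="$1"; shift
    local rule glob patch
    for rule in "$@"; do
        glob="${rule%%:*}"
        patch="${rule#*:}"
        case "$kernel" in
            $glob) echo "$patch" ;;
        esac
    done
}
```

For example, `select_patches 6.8.0-25-generic "6.8.*:kernel-6.8.patch" "6.7.*:kernel-6.7.patch"` would emit only `kernel-6.8.patch`.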
### Phase 3: AKMODS Source Package Creation
#### Step 1: Create AKMODS Source Package Structure
```bash
# AKMODS source package creation
akmods-apt-daemon create-akmods-source nvidia-driver-535 535.154.05
├── 1. Create package directory structure
│ ├── akmod-nvidia-535-source/
│ │ ├── debian/
│ │ │ ├── control
│ │ │ ├── rules
│ │ │ ├── akmods.conf
│ │ │ ├── postinst
│ │ │ ├── prerm
│ │ │ └── patches/
│ │ ├── src/
│ │ │ ├── nvidia-driver-535/
│ │ │ ├── kernel-modules/
│ │ │ └── user-libraries/
│ │ ├── scripts/
│ │ │ ├── extract-driver.sh
│ │ │ ├── apply-patches.sh
│ │ │ └── build-modules.sh
│ │ └── akmods.yaml
│ └── Generate package manifest
├── 2. Copy extracted components
│ ├── Copy kernel modules to src/kernel-modules/
│ ├── Copy user libraries to src/user-libraries/
│ ├── Copy utilities to src/bin/
│ └── Copy configuration files to src/etc/
└── 3. Create build configuration
├── Generate debian/control
├── Create debian/rules
├── Configure akmods.yaml
└── Set up build scripts
```
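Creating the package skeleton is plain directory plumbing. A minimal sketch: `create_akmods_skeleton` is an illustrative helper, and the generated `debian/control` stub is only a placeholder:

```shell
# Illustrative skeleton builder for the layout shown above.
create_akmods_skeleton() {
    local root="$1"   # e.g. akmod-nvidia-535-source
    mkdir -p "$root/debian/patches" \
             "$root/src/kernel-modules" \
             "$root/src/user-libraries" \
             "$root/scripts"
    : > "$root/akmods.yaml"
    # Placeholder control stub; a real generator would fill in all fields.
    printf 'Source: %s\n' "$(basename "$root")" > "$root/debian/control"
}
```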
#### Step 2: Generate Debian Package Configuration
```bash
# debian/control generation
Package: kmod-nvidia-535
Version: 535.154.05-1
Architecture: amd64
Depends:
 linux-headers-generic,
 build-essential,
 dkms,
 nvidia-settings,
 nvidia-prime
Description: NVIDIA proprietary drivers (535.154.05)
 Kernel modules for NVIDIA graphics drivers version 535.154.05.
 Supports RTX 40/30 series and compatible hardware.

# debian/rules generation
#!/usr/bin/make -f
%:
	dh $@

override_dh_auto_install:
	# Install kernel modules
	mkdir -p debian/kmod-nvidia-535/lib/modules/$(shell uname -r)/extra/
	cp src/kernel-modules/*.ko debian/kmod-nvidia-535/lib/modules/$(shell uname -r)/extra/
	# Install user libraries
	mkdir -p debian/kmod-nvidia-535/usr/lib/x86_64-linux-gnu/
	cp src/user-libraries/*.so* debian/kmod-nvidia-535/usr/lib/x86_64-linux-gnu/
	# Install utilities
	mkdir -p debian/kmod-nvidia-535/usr/bin/
	cp src/bin/* debian/kmod-nvidia-535/usr/bin/

override_dh_auto_clean:
	rm -rf src/kernel-modules/*.ko
	rm -rf src/user-libraries/*.so*
```
#### Step 3: Create AKMODS Configuration
```yaml
# akmods.yaml configuration
module: nvidia-driver-535
version: "535.154.05"
description: "NVIDIA proprietary drivers (RTX 40/30 series)"
source:
  type: "nvidia-run"
  url: "https://us.download.nvidia.com/XFree86/Linux-x86_64/535.154.05/NVIDIA-Linux-x86_64-535.154.05.run"
  checksum: "sha256:abc123def456..."
  extraction_method: "nvidia-installer"
build:
  dependencies:
    - "build-essential"
    - "linux-headers-generic"
    - "dkms"
    - "fakeroot"
    - "debhelper"
    - "kernel-package"
  patches:
    - "kernel-6.8.patch"
    - "nvidia-535-kernel-6.8.patch"
    - "drm-api-updates.patch"
modules:
  - name: "nvidia"
    description: "NVIDIA kernel module"
    dependencies: []
  - name: "nvidia-modeset"
    description: "NVIDIA mode setting module"
    dependencies: ["nvidia"]
  - name: "nvidia-drm"
    description: "NVIDIA DRM module"
    dependencies: ["nvidia", "nvidia-modeset"]
  - name: "nvidia-uvm"
    description: "NVIDIA Unified Virtual Memory"
    dependencies: ["nvidia"]
  - name: "nvidia-peermem"
    description: "NVIDIA peer memory module"
    dependencies: ["nvidia"]
post_install:
  - "update-initramfs -u"
  - "update-grub"
  - "modprobe nvidia"
supported_kernels:
  - "6.8.*"
  - "6.7.*"
  - "6.6.*"
  - "6.5.*"
  - "6.4.*"
hardware_support:
  - "RTX 50 Series"
  - "RTX 40 Series"
  - "RTX 30 Series"
  - "RTX 20 Series"
  - "GTX 16 Series"
```
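The `supported_kernels` globs can be checked with ordinary shell pattern matching. A sketch of a hypothetical compatibility gate:

```shell
# Hypothetical compatibility gate: accept a kernel only if it matches one
# of the supported_kernels globs from akmods.yaml.
kernel_supported() {
    local kernel="$1"; shift
    local glob
    for glob in "$@"; do
        case "$kernel" in
            $glob) return 0 ;;
        esac
    done
    return 1
}
```

A caller might use it as `kernel_supported "$(uname -r)" "6.8.*" "6.7.*" || echo "unsupported kernel"`.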
### Phase 4: Containerized Build Process
#### Step 1: Create Build Container
```dockerfile
# AKMODS NVIDIA build container
FROM ubuntu:24.04
# Install build dependencies
RUN apt-get update && apt-get install -y \
build-essential \
linux-headers-generic \
dkms \
git \
wget \
debhelper \
dh-dkms \
kernel-package \
fakeroot \
devscripts \
lintian \
pbuilder
# Install NVIDIA build tools
RUN apt-get install -y \
nvidia-kernel-source \
nvidia-kernel-common \
nvidia-modprobe
# Set up AKMODS environment
ENV AKMODS_AUTOINSTALL=yes
ENV AKMODS_BUILD_TIMEOUT=3600
ENV AKMODS_BUILD_JOBS=4
ENV AKMODS_KERNEL_VERSION=""
WORKDIR /build
# Copy AKMODS build scripts
COPY scripts/ /usr/local/bin/
RUN chmod +x /usr/local/bin/*.sh
```
#### Step 2: Build Process Execution
```bash
# AKMODS build process execution
akmods-apt-daemon build-nvidia-akmods nvidia-driver-535 6.8.0-25-generic
├── 1. Prepare build environment
│ ├── Create build container
│ ├── Mount source package
│ ├── Install kernel headers
│ └── Set up build environment
├── 2. Extract and patch modules
│ ├── Extract .run file contents
│ ├── Apply kernel compatibility patches
│ ├── Verify module compatibility
│ └── Prepare for compilation
├── 3. Compile kernel modules
│ ├── Compile nvidia.ko
│ ├── Compile nvidia-modeset.ko
│ ├── Compile nvidia-drm.ko
│ ├── Compile nvidia-uvm.ko
│ └── Compile nvidia-peermem.ko
├── 4. Create DEB package
│ ├── Package kernel modules
│ ├── Add post-installation scripts
│ ├── Generate package metadata
│ └── Sign package
└── 5. Upload to registry
├── Upload to OCI registry
├── Update package index
└── Notify completion
```
### Phase 5: Package Creation and Distribution
#### Step 1: DEB Package Assembly
```bash
# AKMODS DEB package creation
kmod-nvidia-535-6.8.0-25-generic_535.154.05-1_amd64.deb
├── control.tar.gz
│ ├── control # Package metadata
│ ├── postinst # Post-installation script
│ ├── prerm # Pre-removal script
│ └── triggers # Package triggers
├── data.tar.gz
│ └── lib/modules/6.8.0-25-generic/extra/
│ ├── nvidia.ko # Compiled kernel module
│ ├── nvidia-modeset.ko
│ ├── nvidia-drm.ko
│ ├── nvidia-uvm.ko
│ └── nvidia-peermem.ko
└── debian-binary
```
#### Step 2: Post-Installation Scripts
```bash
# debian/postinst script
#!/bin/bash
set -e
# Load kernel modules
modprobe nvidia || true
modprobe nvidia-modeset || true
modprobe nvidia-drm || true
# Update initramfs
update-initramfs -u -k $(uname -r) || true
# Update GRUB configuration
update-grub || true
# Create NVIDIA device nodes
nvidia-modprobe -u -c=0 || true
# Set up NVIDIA Prime if available
if command -v prime-select >/dev/null 2>&1; then
    prime-select nvidia || true
fi
# Configure X11 if needed
if [ -f /etc/X11/xorg.conf ]; then
    nvidia-xconfig --preserve-busid || true
fi
exit 0
```
#### Step 3: Package Distribution
```bash
# AKMODS package distribution workflow
akmods-apt-daemon distribute-nvidia-package kmod-nvidia-535-6.8.0-25-generic
├── 1. Package validation
│ ├── Verify package integrity
│ ├── Check module compatibility
│ ├── Validate package metadata
│ └── Test package installation
├── 2. Registry upload
│ ├── Upload to OCI registry
│ ├── Update package index
│ ├── Generate package manifest
│ └── Create distribution metadata
├── 3. System notification
│ ├── Notify available systems
│ ├── Schedule automatic updates
│ ├── Update package database
│ └── Log distribution event
└── 4. Cleanup
├── Remove temporary files
├── Update build cache
├── Archive build logs
└── Update statistics
```
## AKMODS .run File Processing Commands
### User Commands
```bash
# AKMODS .run file processing commands
akmods-apt-daemon process-nvidia-run 535.154.05
├── Download .run file
├── Extract and analyze
├── Create AKMODS source package
├── Build for current kernel
└── Upload to registry
akmods-apt-daemon build-nvidia-for-kernel 535.154.05 6.8.0-25-generic
├── Process .run file for specific kernel
├── Apply kernel compatibility patches
├── Build kernel modules
├── Create DEB package
└── Upload to registry
akmods-apt-daemon update-nvidia-from-run 535.154.05
├── Check for new .run file version
├── Process updated .run file
├── Build for all supported kernels
├── Update registry packages
└── Notify systems of updates
```
### System Integration
```bash
# System-level AKMODS integration
apt-layer --akmods-install nvidia-driver-535
├── Query registry for available packages
├── Download kmod-nvidia-535 package
├── Install via dpkg
├── Load kernel modules
└── Configure system
apt-layer --akmods-update-nvidia
├── Check for new NVIDIA drivers
├── Download and install updates
├── Rebuild for new kernels
└── Update system configuration
```
## Benefits of .run File Processing
### 1. **Latest Driver Availability**
- Access to NVIDIA's latest driver releases
- No dependency on Ubuntu packaging schedule
- Immediate availability of new features and fixes
### 2. **Automated Processing**
- No manual .run file installation required
- Automated kernel compatibility patching
- Consistent package format and distribution
### 3. **System Integration**
- Full integration with Particle-OS atomic operations
- Live system installation support
- Automatic rollback on failures
### 4. **Quality Assurance**
- Automated testing and validation
- Kernel compatibility verification
- Package integrity checking
### 5. **Distribution Efficiency**
- Pre-built packages in OCI registry
- Reduced build time on individual systems
- Centralized package management
## Conclusion
The AKMODS .run file processing pipeline provides a comprehensive solution for converting NVIDIA's official driver packages into proper kernel module packages for Particle-OS. This approach ensures that users have access to the latest NVIDIA drivers while maintaining system stability and providing seamless integration with Particle-OS's atomic architecture.
By automating the extraction, patching, building, and distribution processes, AKMODS eliminates the complexity and risks associated with manual .run file installation while providing the benefits of proper package management and system integration.

# AKMODS Security and Signing Key Management
## Overview
This document addresses the critical security aspects of AKMODS, including package signing, secure boot integration, and key management. Proper security implementation is essential for maintaining system integrity in an immutable OS environment.
## Secure Boot Integration
### MOK (Machine Owner Key) Management
#### Key Enrollment Process
```yaml
# MOK enrollment workflow
mok_enrollment:
  1. key_generation:
    - Generate AKMODS signing key pair
    - Create MOK certificate
    - Prepare enrollment package
  2. user_enrollment:
    - User receives enrollment instructions
    - User enrolls MOK during boot
    - System validates MOK enrollment
  3. key_activation:
    - AKMODS packages signed with enrolled key
    - Secure boot validates package signatures
    - System boots with signed modules
```
#### Implementation Details
```bash
# MOK enrollment implementation
/usr/local/bin/akmods-mok-enroll:
├── Generate AKMODS signing key
│ ├── openssl genrsa -out akmods-private.key 4096
│ ├── openssl req -new -x509 -key akmods-private.key -out akmods-cert.pem
│ └── mokutil --import akmods-cert.pem
├── Create enrollment package
│ ├── Package MOK certificate
│ ├── Create enrollment script
│ └── Generate user instructions
└── User enrollment process
├── Reboot system
├── Enter MOK enrollment mode
├── Import AKMODS certificate
└── Verify enrollment success
```
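The key-generation half of the enrollment flow is runnable today with stock OpenSSL; the `mokutil` import and reboot require a real Secure Boot machine, so that step is left commented out. Paths and the certificate subject are illustrative:

```shell
# Runnable sketch of AKMODS MOK key generation with OpenSSL.
generate_akmods_mok() {
    local dir="$1"
    mkdir -p "$dir"
    # 4096-bit RSA key, matching the workflow above.
    openssl genrsa -out "$dir/akmods-private.key" 4096 2>/dev/null
    # Self-signed certificate for MOK enrollment.
    openssl req -new -x509 -key "$dir/akmods-private.key" \
        -out "$dir/akmods-cert.pem" -days 3650 \
        -subj "/CN=AKMODS Module Signing"
    # mokutil --import "$dir/akmods-cert.pem"   # run on the target system, then reboot
}
```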
### Secure Boot Chain
```yaml
# Secure boot validation chain
secure_boot_chain:
  bootloader:
    - GRUB validates kernel signature
    - Kernel validates initramfs signature
    - Initramfs validates module signatures
  akmods_integration:
    - AKMODS modules signed with MOK key
    - Kernel validates module signatures
    - System boots with verified modules
```
## Package Signing Infrastructure
### Signing Key Management
#### Key Storage and Security
```yaml
# Signing key infrastructure
key_management:
  key_generation:
    - Hardware Security Module (HSM) for production
    - Air-gapped systems for key generation
    - Multi-party key generation process
  key_storage:
    - Encrypted key storage in build environment
    - Key rotation policies and procedures
    - Backup and recovery procedures
  key_distribution:
    - Secure key distribution to build containers
    - Key revocation and replacement procedures
    - Audit logging for key usage
```
#### Build Container Key Integration
```yaml
# Build container key management
container_key_integration:
  key_provisioning:
    - Mount signing keys as read-only volumes
    - Use key management service (KMS) integration
    - Implement key usage restrictions
  signing_process:
    - Sign modules during build process
    - Sign packages after successful build
    - Verify signatures before distribution
  security_measures:
    - Container isolation and security
    - Key usage monitoring and logging
    - Automatic key rotation
```
### Package Signing Process
```yaml
# Package signing workflow
signing_workflow:
  1. module_compilation:
    - Compile kernel modules
    - Sign individual modules with MOK key
    - Verify module signatures
  2. package_creation:
    - Create DEB package structure
    - Include signed modules
    - Sign package manifest
  3. distribution:
    - Upload signed package to registry
    - Verify package integrity
    - Update package index
```
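Steps 2 and 3 above hinge on signing and verifying a package manifest. A sketch using `openssl dgst`; the key and manifest paths are illustrative, not a fixed AKMODS layout:

```shell
# Illustrative manifest signing and verification with OpenSSL.
sign_manifest() {       # sign_manifest PRIVKEY MANIFEST SIGFILE
    openssl dgst -sha256 -sign "$1" -out "$3" "$2"
}
verify_manifest() {     # verify_manifest PUBKEY MANIFEST SIGFILE
    openssl dgst -sha256 -verify "$1" -signature "$3" "$2"
}
```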
## Security Policies and Procedures
### Key Rotation Policy
```yaml
# Key rotation schedule
key_rotation:
  frequency: "Annual rotation"
  triggers:
    - scheduled: "Yearly rotation"
    - security_incident: "Immediate rotation"
    - personnel_change: "Rotation on team changes"
  rotation_process:
    - generate_new_keys: "Create new key pair"
    - update_build_system: "Deploy new keys"
    - re_sign_packages: "Re-sign existing packages"
    - update_mok: "Enroll new MOK certificate"
    - revoke_old_keys: "Revoke old keys"
```
### Access Control
```yaml
# Access control matrix
access_control:
  key_management:
    - admin_users: "Full key management access"
    - build_operators: "Key usage for builds"
    - auditors: "Read-only access for auditing"
  build_environment:
    - build_containers: "Temporary key access"
    - registry_access: "Package upload permissions"
    - monitoring_systems: "Log access for monitoring"
```
## Security Monitoring and Auditing
### Audit Logging
```yaml
# Security audit logging
audit_logging:
  key_operations:
    - key_generation: "Log all key generation events"
    - key_usage: "Log all key usage for signing"
    - key_revocation: "Log key revocation events"
  build_operations:
    - package_signing: "Log all package signing"
    - module_signing: "Log all module signing"
    - distribution: "Log package distribution"
  access_control:
    - authentication: "Log all authentication events"
    - authorization: "Log authorization decisions"
    - privilege_escalation: "Log privilege changes"
```
### Security Monitoring
```yaml
# Security monitoring
security_monitoring:
  real_time_alerts:
    - unauthorized_key_usage: "Alert on unexpected key usage"
    - failed_signatures: "Alert on signature verification failures"
    - access_violations: "Alert on access control violations"
  periodic_reviews:
    - key_usage_review: "Monthly key usage review"
    - access_review: "Quarterly access control review"
    - security_audit: "Annual security audit"
```
## User Experience and Security
### One-Time Setup Process
```yaml
# User setup process
user_setup:
  1. initial_installation:
    - Install Particle-OS with AKMODS
    - Receive MOK enrollment instructions
    - Download MOK enrollment package
  2. mok_enrollment:
    - Reboot system
    - Enter MOK enrollment mode
    - Import AKMODS certificate
    - Verify enrollment
  3. verification:
    - Test AKMODS module installation
    - Verify secure boot functionality
    - Confirm system integrity
```
### Ongoing Security Management
```yaml
# Ongoing security management
ongoing_management:
  automatic_updates:
    - AKMODS packages automatically signed
    - Secure boot validation on updates
    - Automatic rollback on signature failures
  user_notifications:
    - Notify users of key rotations
    - Alert on security updates
    - Provide security status information
```
## Compliance and Standards
### Security Standards Compliance
```yaml
# Security standards
compliance:
  fips_140_2:
    - cryptographic_modules: "Use FIPS 140-2 validated modules"
    - key_generation: "FIPS-compliant key generation"
    - signature_algorithms: "FIPS-approved algorithms"
  common_criteria:
    - security_target: "Define security requirements"
    - evaluation: "Undergo Common Criteria evaluation"
    - certification: "Obtain security certification"
  iso_27001:
    - information_security: "Implement ISMS"
    - risk_management: "Risk assessment and treatment"
    - continuous_improvement: "Regular security reviews"
```
## Risk Assessment and Mitigation
### Security Risks
```yaml
# Security risk assessment
security_risks:
  key_compromise:
    risk: "Signing key compromise"
    impact: "High - could lead to malicious modules"
    mitigation: "HSM protection, key rotation, monitoring"
  unauthorized_access:
    risk: "Unauthorized access to build system"
    impact: "Medium - could lead to malicious packages"
    mitigation: "Access controls, monitoring, isolation"
  secure_boot_bypass:
    risk: "Secure boot bypass attempts"
    impact: "High - could compromise system integrity"
    mitigation: "Secure boot enforcement, monitoring"
```
### Incident Response
```yaml
# Security incident response
incident_response:
  detection:
    - automated_monitoring: "Real-time security monitoring"
    - user_reports: "User-reported security issues"
    - external_reports: "Security researcher reports"
  response:
    - immediate_containment: "Contain security incident"
    - investigation: "Investigate incident details"
    - remediation: "Implement security fixes"
    - communication: "Communicate with users"
  recovery:
    - system_restoration: "Restore system security"
    - key_rotation: "Rotate compromised keys"
    - monitoring_enhancement: "Enhance security monitoring"
```
## Implementation Timeline
### Phase 1: Basic Security (Weeks 1-4)
- [ ] Implement basic package signing
- [ ] Create MOK enrollment process
- [ ] Set up basic access controls
- [ ] Implement audit logging
### Phase 2: Enhanced Security (Weeks 5-8)
- [ ] Deploy HSM for key management
- [ ] Implement key rotation procedures
- [ ] Enhance monitoring and alerting
- [ ] Create incident response procedures
### Phase 3: Advanced Security (Weeks 9-12)
- [ ] Implement advanced access controls
- [ ] Deploy security compliance measures
- [ ] Create security training materials
- [ ] Establish security governance
## Conclusion
Security and signing key management are critical components of AKMODS that ensure system integrity and user trust. The comprehensive approach outlined in this document provides:
1. **Secure Boot Integration**: Proper MOK management and secure boot chain
2. **Robust Key Management**: Secure key generation, storage, and rotation
3. **Comprehensive Monitoring**: Real-time security monitoring and auditing
4. **User-Friendly Experience**: Simple setup process with strong security
5. **Compliance Ready**: Standards compliance and risk management
This security framework ensures that AKMODS maintains the highest standards of security while providing a user-friendly experience for kernel module management in Particle-OS.

# apt-layer Tool - Changelog
All notable changes to the apt-layer Tool modular system will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### [2025-07-14 UTC] - NAMING STANDARDIZATION: REMOVED ALL PARTICLE-OS/UBLUE REFERENCES FROM PATHS
- **Complete path naming standardization**: Removed all references to "particle-os", "particle", "ublue", and "ucore" from path names throughout the entire codebase.
- **Consistent apt-layer naming**: All persistent and runtime paths now use only "apt-layer" naming:
- `/var/lib/apt-layer/` (was `/var/lib/particle-os/`)
- `/var/log/apt-layer/` (was `/var/log/particle-os/`)
- `/var/cache/apt-layer/` (was `/var/cache/particle-os/`)
- `/usr/local/etc/apt-layer/` (was `/usr/local/etc/particle-os/`)
- **Environment variable updates**: All environment variables updated to use `APT_LAYER_` prefix:
- `APT_LAYER_WORKSPACE` (was `PARTICLE_WORKSPACE`)
- `APT_LAYER_LOG_DIR` (was `PARTICLE_LOG_DIR`)
- `APT_LAYER_CACHE_DIR` (was `PARTICLE_CACHE_DIR`)
- **Configuration system updates**: Updated `paths.json` configuration to use `apt_layer_paths` key and consistent naming.
- **Function naming updates**: All system functions updated to use apt-layer naming:
- `initialize_apt_layer_system()` (was `initialize_particle_os_system()`)
- `show_apt_layer_system_status()` (was `show_particle_system_status()`)
- `reset_apt_layer_system()` (was `reset_particle_os_system()`)
- **Compile script updates**: Updated compile.sh to use apt-layer naming in all output messages and examples.
- **Live overlay fixes**: Fixed live overlay scriptlet to use correct path loading function (`load_path_config`).
- **Testing validation**: Successfully tested in WSL environment with all directories created under `/var/lib/apt-layer/`.
- **Documentation consistency**: All help text, examples, and user-facing messages updated to use apt-layer naming.
- **Result**: Clean, consistent naming throughout the entire codebase that matches the tool name "apt-layer" without any legacy references.
### [2025-07-14 UTC] - SCOPE REDUCTION: FOCUS ON CORE RPM-OSTREE FEATURES ONLY ### [2025-07-14 UTC] - SCOPE REDUCTION: FOCUS ON CORE RPM-OSTREE FEATURES ONLY
- **Scope reduction completed**: Archived all advanced, enterprise, cloud, multi-tenant, admin, compliance, and security features. - **Scope reduction completed**: Archived all advanced, enterprise, cloud, multi-tenant, admin, compliance, and security features.
- **Now focused on core rpm-ostree-like features for apt/Debian systems only:** - **Now focused on core rpm-ostree-like features for apt/Debian systems only:**
@ -23,7 +45,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- **Result**: Codebase is now a true rpm-ostree equivalent for apt/Debian systems, with no extra enterprise/cloud/advanced features. - **Result**: Codebase is now a true rpm-ostree equivalent for apt/Debian systems, with no extra enterprise/cloud/advanced features.
### [2025-01-27 23:58 UTC] - DOCUMENTATION UPDATES AND WORKSPACE CLEANUP COMPLETED ### [2025-01-27 23:58 UTC] - DOCUMENTATION UPDATES AND WORKSPACE CLEANUP COMPLETED
- **Comprehensive documentation updates completed**: Updated all major documentation files to reflect current Particle-OS capabilities and recent improvements. - **Comprehensive documentation updates completed**: Updated all major documentation files to reflect current apt-layer capabilities and recent improvements.
- **Main documentation files updated**: Updated multiple documentation files to reflect new apt-layer atomic OSTree workflow and official ComposeFS tool integration: - **Main documentation files updated**: Updated multiple documentation files to reflect new apt-layer atomic OSTree workflow and official ComposeFS tool integration:
- `tools.md` - Updated to reflect atomic OSTree workflow, official ComposeFS integration, and overlay/dpkg improvements - `tools.md` - Updated to reflect atomic OSTree workflow, official ComposeFS integration, and overlay/dpkg improvements
- `TODO.md` - Updated completion status and added new priorities for compilation system enhancements - `TODO.md` - Updated completion status and added new priorities for compilation system enhancements
@ -59,7 +81,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- **Note**: Documentation is now fully up-to-date and workspace is clean and organized. All recent improvements including OSTree atomic workflow, official ComposeFS integration, and overlay/dpkg improvements are properly documented across all project files. - **Note**: Documentation is now fully up-to-date and workspace is clean and organized. All recent improvements including OSTree atomic workflow, official ComposeFS integration, and overlay/dpkg improvements are properly documented across all project files.
### [2025-01-27 23:55 UTC] - DKMS TESTING INFRASTRUCTURE COMPLETED ### [2025-01-27 23:55 UTC] - DKMS TESTING INFRASTRUCTURE COMPLETED
- **DKMS testing infrastructure implemented**: Created comprehensive DKMS testing system to validate all DKMS functionality in Particle-OS. - **DKMS testing infrastructure implemented**: Created comprehensive DKMS testing system to validate all DKMS functionality in apt-layer.
- **Comprehensive test suite created**: Created `test-dkms-functionality.sh` with 12 comprehensive test cases covering all DKMS functionality:
  - **Test 1**: DKMS status command validation
  - **Test 2**: DKMS list command validation
@@ -108,10 +130,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Proper error handling for missing dependencies
  - Clear instructions for test execution
  - Comprehensive validation of all DKMS functionality
- **Note**: DKMS testing infrastructure is now complete and ready for validation on target systems. The test suite provides comprehensive coverage of all DKMS functionality implemented in apt-layer, ensuring reliability and proper operation of DKMS and NVIDIA support features.

### [2025-01-27 23:50 UTC] - DOCUMENTATION COMPLETION AND COMPILATION ENHANCEMENTS PLANNED
- **Documentation work completed**: All documentation tasks have been successfully completed for apt-layer.
- **Main README comprehensive update**: Added complete DKMS and NVIDIA support documentation to main README.md:
  - Comprehensive DKMS features and capabilities overview
  - Complete NVIDIA driver support documentation with graphics-drivers PPA integration
@@ -242,11 +264,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Transaction logging and audit trails
  - User permission validation
  - Comprehensive error handling and recovery
- **Gaming variant support**: Prepared infrastructure for apt-layer gaming variants:
  - apt-layer Bazzite Gaming (NVIDIA) - Ubuntu 25.04 with gaming optimizations
  - apt-layer Corona Gaming (NVIDIA) - Ubuntu 24.04 LTS with KDE Plasma
  - Gaming performance tuning and Steam/Wine integration
- **Note**: DKMS and NVIDIA support functions are implemented in the advanced package management system but command-line interface integration is pending. This provides the foundation for full DKMS and NVIDIA support in apt-layer variants.
### [2025-01-27 22:00 UTC] - ROOT PRIVILEGE MANAGEMENT IMPLEMENTED
- **Root privilege management implemented**: Added a comprehensive privilege-checking system to enforce proper security practices.
@@ -774,22 +796,22 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Usage Examples
```bash
# Create traditional layer
sudo ./apt-layer.sh ubuntu-base/24.04 gaming/24.04 steam wine

# Create container-based layer (Apx-style) with ComposeFS base
sudo ./apt-layer.sh --container ubuntu-base/24.04 dev/24.04 vscode git

# Create container-based layer with OCI base image
sudo ./apt-layer.sh --container ubuntu:24.04 custom/24.04 my-package

# List images
sudo ./apt-layer.sh --list

# Show image information
sudo ./apt-layer.sh --info gaming/24.04

# Remove image
sudo ./apt-layer.sh --remove gaming/24.04
```

---

@@ -0,0 +1,155 @@
#!/bin/bash
# apt-layer Installer Compiler
# Compiles install-apt-layer.sh with embedded paths.json configuration

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

# Functions to print colored output
print_status() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

print_header() {
    echo -e "${BLUE}================================${NC}"
    echo -e "${BLUE}$1${NC}"
    echo -e "${BLUE}================================${NC}"
}

# Get script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$(dirname "$SCRIPT_DIR")")"
CONFIG_FILE="$SCRIPT_DIR/config/paths.json"
TEMPLATE_FILE="$SCRIPT_DIR/templates/install-apt-layer.template.sh"
OUTPUT_FILE="$PROJECT_ROOT/install-apt-layer.sh"

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -o|--output)
            OUTPUT_FILE="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: $0 [-o|--output OUTPUT_PATH]"
            echo "  -o, --output    Specify output file path (default: ../../install-apt-layer.sh)"
            echo "  -h, --help      Show this help message"
            exit 0
            ;;
        *)
            print_error "Unknown option: $1"
            echo "Use -h or --help for usage information"
            exit 1
            ;;
    esac
done

print_header "apt-layer Installer Compiler"

# Check if config file exists
if [[ ! -f "$CONFIG_FILE" ]]; then
    print_error "Configuration file not found: $CONFIG_FILE"
    exit 1
fi

# Check if template file exists
if [[ ! -f "$TEMPLATE_FILE" ]]; then
    print_error "Template file not found: $TEMPLATE_FILE"
    exit 1
fi

# Validate JSON configuration
print_status "Validating paths.json configuration..."
if ! jq empty "$CONFIG_FILE" 2>/dev/null; then
    print_error "Invalid JSON in configuration file: $CONFIG_FILE"
    exit 1
fi
print_status "✅ Configuration validation passed"

# Ensure output directory exists
OUTPUT_DIR="$(dirname "$OUTPUT_FILE")"
if [[ ! -d "$OUTPUT_DIR" ]]; then
    print_status "Creating output directory: $OUTPUT_DIR"
    mkdir -p "$OUTPUT_DIR"
fi

# Extract the JSON configuration (the template itself is read line by line below)
print_status "Extracting current paths.json configuration..."
current_config=$(cat "$CONFIG_FILE")

# Create the compiled installer
print_status "Compiling installer with embedded configuration..."

# Create temporary file for processing
temp_file=$(mktemp)

# Process the template and replace the placeholder
{
    # Read template line by line
    while IFS= read -r line; do
        # Check if this line starts the here document
        if [[ "$line" == *"sudo tee \"\$config_dir/paths.json\""* ]]; then
            echo "$line"
            # Skip the next line (PATHS_JSON_PLACEHOLDER)
            read -r placeholder_line
            # Output the current configuration
            echo "$current_config"
        else
            echo "$line"
        fi
    done < "$TEMPLATE_FILE"
} > "$temp_file"

# Move to final location
mv "$temp_file" "$OUTPUT_FILE"

# Make it executable
chmod +x "$OUTPUT_FILE"

# Validate the compiled script
print_status "Validating compiled installer..."
if bash -n "$OUTPUT_FILE"; then
    print_status "✅ Syntax validation passed"
else
    print_error "❌ Syntax validation failed"
    print_error "Removing invalid script: $OUTPUT_FILE"
    rm -f "$OUTPUT_FILE"
    exit 1
fi

print_header "Installer Compilation Complete!"
print_status "Output file: $OUTPUT_FILE"
print_status "File size: $(du -h "$OUTPUT_FILE" | cut -f1)"
print_status "Lines of code: $(wc -l < "$OUTPUT_FILE")"
print_status ""
print_status "The compiled install-apt-layer.sh now includes:"
print_status "- Latest paths.json configuration embedded"
print_status "- All installation and uninstallation functionality"
print_status "- Dependency checking and installation"
print_status "- System initialization"
print_status "- Proper error handling and validation"
print_status ""
print_status "Ready for distribution! 🚀"
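The compiler above splices the live `paths.json` into the installer by echoing every template line except the placeholder, which it swaps for the configuration. Here is a minimal, self-contained sketch of that technique; the file names (`template.sh`, `config.json`, `compiled.sh`) are illustrative stand-ins, not real repository paths, and the files are written to the current directory:

```bash
#!/bin/bash
# Minimal sketch of the placeholder-substitution technique used above.
set -euo pipefail

# A tiny stand-in template: a here document whose body is a placeholder line
cat > template.sh <<'TPL'
sudo tee "$config_dir/paths.json" > /dev/null << 'JSON_EOF'
PATHS_JSON_PLACEHOLDER
JSON_EOF
TPL

printf '{"workspace": "/var/lib/apt-layer"}\n' > config.json

# Replace the placeholder line with the real JSON, leaving other lines intact
while IFS= read -r line; do
    if [[ "$line" == *'sudo tee "$config_dir/paths.json"'* ]]; then
        echo "$line"
        read -r _placeholder      # consume the PATHS_JSON_PLACEHOLDER line
        cat config.json           # emit the embedded configuration
    else
        echo "$line"
    fi
done < template.sh > compiled.sh

grep -q '/var/lib/apt-layer' compiled.sh && echo "embedded"
```

Because the inner `read -r` consumes from the same redirected input as the loop, the placeholder line is skipped without any `sed`/`awk` post-processing.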

@@ -1,8 +1,8 @@
#!/bin/bash
# apt-layer Compiler
# Merges multiple scriptlets into a single self-contained apt-layer.sh
# Based on apt-layer installer compile.sh and ComposeFS compile.sh

set -euo pipefail
@@ -156,7 +156,7 @@ if [[ ! -d "$OUTPUT_DIR" ]]; then
    mkdir -p "$OUTPUT_DIR"
fi

print_header "apt-layer Compiler"

# Check dependencies first
check_dependencies
@@ -191,14 +191,14 @@ header="#!/bin/bash
# WARNING: This file is automatically generated                                #
# DO NOT modify this file directly as it will be overwritten                   #
#                                                                              #
# apt-layer Tool                                                               #
# Generated on: $(date '+%Y-%m-%d %H:%M:%S')                                   #
#                                                                              #
################################################################################

set -euo pipefail

# apt-layer Tool - Self-contained version
# This script contains all components merged into a single file
# Enhanced version with container support, multiple package managers, and LIVE SYSTEM LAYERING
# Inspired by Vanilla OS Apx approach, ParticleOS apt-layer, and rpm-ostree live layering
@@ -210,7 +210,7 @@ script_content+=("$header")

# Add version info
update_progress "Adding: Version" 10
version_info="# Version: $(date '+%y.%m.%d')
# apt-layer Tool
# Enhanced with Container Support and LIVE SYSTEM LAYERING
"
@@ -267,16 +267,16 @@ log_transaction() {
"
script_content+=("$fallback_logging")

# Add apt-layer configuration sourcing
update_progress "Adding: Configuration Sourcing" 12
config_sourcing="# Source apt-layer configuration (if available, skip for help commands)
# Skip configuration loading for help commands to avoid permission issues
if [[ \"\${1:-}\" != \"--help\" && \"\${1:-}\" != \"-h\" && \"\${1:-}\" != \"--help-full\" && \"\${1:-}\" != \"--examples\" ]]; then
    if [[ -f \"/usr/local/etc/apt-layer-config.sh\" ]]; then
        source \"/usr/local/etc/apt-layer-config.sh\"
        log_info \"Loaded apt-layer configuration\" \"apt-layer\"
    else
        log_warning \"apt-layer configuration not found, using defaults\" \"apt-layer\"
    fi
else
    log_info \"Skipping configuration loading for help command\" \"apt-layer\"
@@ -349,23 +349,26 @@ add_scriptlet "06-oci-integration.sh" "OCI Export/Import Integration"

update_progress "Adding: Bootloader Integration" 46
add_scriptlet "07-bootloader.sh" "Bootloader Integration (UEFI/GRUB/systemd-boot)"

update_progress "Adding: System Initialization" 48
add_scriptlet "08-system-init.sh" "System Initialization and Path Management"

update_progress "Adding: Atomic Deployment System" 52
add_scriptlet "09-atomic-deployment.sh" "Atomic Deployment System"

update_progress "Adding: rpm-ostree Compatibility" 57
add_scriptlet "10-rpm-ostree-compat.sh" "rpm-ostree Compatibility Layer"

update_progress "Adding: OSTree Atomic Package Management" 62
add_scriptlet "15-ostree-atomic.sh" "OSTree Atomic Package Management"

update_progress "Adding: Direct dpkg Installation" 67
add_scriptlet "24-dpkg-direct-install.sh" "Direct dpkg Installation (Performance Optimization)"

update_progress "Adding: Main Dispatch" 72
add_scriptlet "99-main.sh" "Main Dispatch and Help" "true"

# Add embedded configuration files if they exist
update_progress "Adding: Embedded Configuration" 99
if [[ -d "$SCRIPT_DIR/config" ]]; then
    script_content+=("# ============================================================================")
    script_content+=("# Embedded Configuration Files")
@@ -491,7 +494,7 @@ print_status "Lines of code: $(wc -l < "$OUTPUT_FILE")"
print_status ""
print_status "The compiled apt-layer.sh is now self-contained and includes:"
print_status "- apt-layer configuration integration"
print_status "- Transactional operations with automatic rollback"
print_status "- Traditional chroot-based layer creation"
print_status "- Container-based layer creation (Apx-style)"
@@ -506,12 +509,12 @@ print_status "- Comprehensive JSON configuration system"
print_status "- Direct dpkg installation (Performance optimization)"
print_status "- All dependencies merged into a single file"
print_status ""
print_status "apt-layer compilation complete with all core rpm-ostree-like features!"
print_status ""
print_status "Usage:"
print_status "  sudo ./apt-layer.sh ubuntu-base/24.04 gaming/24.04 steam wine"
print_status "  sudo ./apt-layer.sh --container ubuntu-base/24.04 dev/24.04 vscode git"
print_status "  sudo ./apt-layer.sh --live-overlay start"
print_status "  sudo ./apt-layer.sh --live-install steam wine"
print_status "  sudo ./apt-layer.sh --live-overlay commit \"Add gaming packages\""

@@ -0,0 +1,138 @@
{
    "apt_layer_paths": {
        "description": "apt-layer system path configuration",
        "version": "1.0",
        "main_directories": {
            "workspace": {
                "path": "/var/lib/apt-layer",
                "description": "Main apt-layer workspace directory",
                "permissions": "755",
                "owner": "root:root"
            },
            "logs": {
                "path": "/var/log/apt-layer",
                "description": "apt-layer log files directory",
                "permissions": "755",
                "owner": "root:root"
            },
            "cache": {
                "path": "/var/cache/apt-layer",
                "description": "apt-layer cache directory",
                "permissions": "755",
                "owner": "root:root"
            }
        },
        "workspace_subdirectories": {
            "build": {
                "path": "/var/lib/apt-layer/build",
                "description": "Build artifacts and temporary files",
                "permissions": "755",
                "owner": "root:root"
            },
            "live_overlay": {
                "path": "/var/lib/apt-layer/live-overlay",
                "description": "Live overlay system for temporary changes",
                "permissions": "755",
                "owner": "root:root"
            },
            "composefs": {
                "path": "/var/lib/apt-layer/composefs",
                "description": "ComposeFS layers and metadata",
                "permissions": "755",
                "owner": "root:root"
            },
            "ostree_commits": {
                "path": "/var/lib/apt-layer/ostree-commits",
                "description": "OSTree commit history and metadata",
                "permissions": "755",
                "owner": "root:root"
            },
            "deployments": {
                "path": "/var/lib/apt-layer/deployments",
                "description": "Deployment directories and state",
                "permissions": "755",
                "owner": "root:root"
            },
            "history": {
                "path": "/var/lib/apt-layer/history",
                "description": "Deployment history and rollback data",
                "permissions": "755",
                "owner": "root:root"
            },
            "bootloader": {
                "path": "/var/lib/apt-layer/bootloader",
                "description": "Bootloader state and configuration",
                "permissions": "755",
                "owner": "root:root"
            },
            "transaction_state": {
                "path": "/var/lib/apt-layer/transaction-state",
                "description": "Transaction state and temporary data",
                "permissions": "755",
                "owner": "root:root"
            }
        },
        "files": {
            "deployment_db": {
                "path": "/var/lib/apt-layer/deployments.json",
                "description": "Deployment database file",
                "permissions": "644",
                "owner": "root:root"
            },
            "current_deployment": {
                "path": "/var/lib/apt-layer/current-deployment",
                "description": "Current deployment identifier file",
                "permissions": "644",
                "owner": "root:root"
            },
            "pending_deployment": {
                "path": "/var/lib/apt-layer/pending-deployment",
                "description": "Pending deployment identifier file",
                "permissions": "644",
                "owner": "root:root"
            },
            "transaction_log": {
                "path": "/var/lib/apt-layer/transaction.log",
                "description": "Transaction log file",
                "permissions": "644",
                "owner": "root:root"
            }
        },
        "fallback_paths": {
            "description": "Fallback paths for different environments",
            "wsl": {
                "workspace": "/mnt/wsl/apt-layer",
                "logs": "/mnt/wsl/apt-layer/logs",
                "cache": "/mnt/wsl/apt-layer/cache"
            },
            "container": {
                "workspace": "/tmp/apt-layer",
                "logs": "/tmp/apt-layer/logs",
                "cache": "/tmp/apt-layer/cache"
            },
            "test": {
                "workspace": "/tmp/apt-layer-test",
                "logs": "/tmp/apt-layer-test/logs",
                "cache": "/tmp/apt-layer-test/cache"
            }
        },
        "environment_variables": {
            "description": "Environment variable mappings",
            "APT_LAYER_WORKSPACE": "workspace",
            "APT_LAYER_LOG_DIR": "logs",
            "APT_LAYER_CACHE_DIR": "cache",
            "BUILD_DIR": "workspace_subdirectories.build",
            "LIVE_OVERLAY_DIR": "workspace_subdirectories.live_overlay",
            "COMPOSEFS_DIR": "workspace_subdirectories.composefs",
            "OSTREE_COMMITS_DIR": "workspace_subdirectories.ostree_commits",
            "DEPLOYMENTS_DIR": "workspace_subdirectories.deployments",
            "HISTORY_DIR": "workspace_subdirectories.history",
            "BOOTLOADER_STATE_DIR": "workspace_subdirectories.bootloader",
            "TRANSACTION_STATE": "workspace_subdirectories.transaction_state",
            "DEPLOYMENT_DB": "files.deployment_db",
            "CURRENT_DEPLOYMENT_FILE": "files.current_deployment",
            "PENDING_DEPLOYMENT_FILE": "files.pending_deployment",
            "TRANSACTION_LOG": "files.transaction_log"
        }
    }
}
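The `environment_variables` section of this configuration maps shell variable names to dotted key paths inside the JSON. A minimal sketch of how such a lookup could be implemented with `jq` (the `resolve` helper is illustrative, not part of the repository, and the JSON here is a trimmed copy of the file above):

```bash
#!/bin/bash
# Sketch: resolve a dotted key path from the paths configuration with jq.
set -euo pipefail

config=$(mktemp)
cat > "$config" <<'JSON'
{
  "apt_layer_paths": {
    "main_directories": {
      "workspace": { "path": "/var/lib/apt-layer" }
    },
    "workspace_subdirectories": {
      "build": { "path": "/var/lib/apt-layer/build" }
    }
  }
}
JSON

resolve() {  # resolve <dotted-key>, e.g. workspace_subdirectories.build
    # getpath(split(".")) walks the dotted key, then .path extracts the value
    jq -r --arg key "$1" \
       '.apt_layer_paths | getpath($key | split(".")) | .path' "$config"
}

resolve main_directories.workspace        # prints /var/lib/apt-layer
resolve workspace_subdirectories.build    # prints /var/lib/apt-layer/build
```

The same `resolve` call could back the variable mappings, e.g. `export BUILD_DIR=$(resolve workspace_subdirectories.build)`.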

@@ -1,119 +1,127 @@
#!/bin/bash
# Transaction management for apt-layer Tool
# Provides atomic operations with automatic rollback and recovery

# System initialization functions
initialize_apt_layer_system() {
    log_info "Initializing apt-layer system..." "apt-layer"

    # Use the new system initialization function if available
    if command -v init_apt_layer_system >/dev/null 2>&1; then
        init_apt_layer_system
    else
        # Fallback to basic initialization
        log_info "Using fallback initialization..." "apt-layer"

        # Create configuration directory
        mkdir -p "/usr/local/etc/apt-layer"

        # Create workspace directory
        mkdir -p "/var/lib/apt-layer"

        # Create log directory
        mkdir -p "/var/log/apt-layer"

        # Create cache directory
        mkdir -p "/var/cache/apt-layer"

        # Create configuration file if it doesn't exist
        if [[ ! -f "/usr/local/etc/apt-layer-config.sh" ]]; then
            create_default_configuration
        fi

        # Set proper permissions
        chmod 755 "/var/lib/apt-layer"
        chmod 755 "/var/log/apt-layer"
        chmod 755 "/var/cache/apt-layer"
        chmod 644 "/usr/local/etc/apt-layer-config.sh"

        log_success "apt-layer system initialized successfully (fallback)" "apt-layer"
    fi
}

create_default_configuration() {
    log_info "Creating default configuration..." "apt-layer"

    cat > "/usr/local/etc/apt-layer-config.sh" << 'EOF'
#!/bin/bash
# apt-layer Configuration File
# Generated automatically on $(date)

# Core paths
export APT_LAYER_WORKSPACE="/var/lib/apt-layer"
export APT_LAYER_CONFIG_DIR="/usr/local/etc/apt-layer"
export APT_LAYER_LOG_DIR="/var/log/apt-layer"
export APT_LAYER_CACHE_DIR="/var/cache/apt-layer"

# Build and temporary directories
export APT_LAYER_BUILD_DIR="$APT_LAYER_WORKSPACE/build"
export APT_LAYER_TEMP_DIR="$APT_LAYER_WORKSPACE/temp"
export APT_LAYER_BACKUP_DIR="$APT_LAYER_WORKSPACE/backup"

# Layer management
export APT_LAYER_LAYERS_DIR="$APT_LAYER_WORKSPACE/layers"
export APT_LAYER_IMAGES_DIR="$APT_LAYER_WORKSPACE/images"
export APT_LAYER_MOUNTS_DIR="$APT_LAYER_WORKSPACE/mounts"

# ComposeFS integration
export APT_LAYER_COMPOSEFS_DIR="$APT_LAYER_WORKSPACE/composefs"
export APT_LAYER_COMPOSEFS_SCRIPT="/usr/local/bin/composefs-alternative.sh"

# Boot management
export APT_LAYER_BOOTC_SCRIPT="/usr/local/bin/bootc-alternative.sh"
export APT_LAYER_BOOTUPD_SCRIPT="/usr/local/bin/bootupd-alternative.sh"

# Transaction management
export APT_LAYER_TRANSACTION_LOG="$APT_LAYER_LOG_DIR/transactions.log"
export APT_LAYER_TRANSACTION_STATE="$APT_LAYER_CACHE_DIR/transaction.state"

# Logging configuration
export APT_LAYER_LOG_LEVEL="INFO"
export APT_LAYER_LOG_FILE="$APT_LAYER_LOG_DIR/apt-layer.log"

# Security settings
export APT_LAYER_SIGNING_ENABLED="false"
export APT_LAYER_VERIFY_SIGNATURES="false"

# Container settings
export APT_LAYER_CONTAINER_RUNTIME="podman"
export APT_LAYER_CHROOT_ENABLED="true"

# Default package sources
export APT_LAYER_DEFAULT_SOURCES="main restricted universe multiverse"

# Performance settings
export APT_LAYER_PARALLEL_JOBS="4"
export APT_LAYER_CACHE_ENABLED="true"

# Load configuration if it exists
if [[ -f "$APT_LAYER_CONFIG_DIR/apt-layer-config.sh" ]]; then
    source "$APT_LAYER_CONFIG_DIR/apt-layer-config.sh"
fi
EOF

    log_success "Default configuration created: /usr/local/etc/apt-layer-config.sh" "apt-layer"
}

reset_apt_layer_system() {
    log_warning "Resetting apt-layer system..." "apt-layer"

    # Backup existing configuration
    if [[ -f "/usr/local/etc/apt-layer-config.sh" ]]; then
        cp "/usr/local/etc/apt-layer-config.sh" "/usr/local/etc/apt-layer-config.sh.backup.$(date +%Y%m%d_%H%M%S)"
        log_info "Existing configuration backed up" "apt-layer"
    fi

    # Remove existing directories
    rm -rf "/var/lib/apt-layer"
    rm -rf "/var/log/apt-layer"
    rm -rf "/var/cache/apt-layer"

    # Reinitialize system
    initialize_apt_layer_system

    log_success "apt-layer system reset successfully" "apt-layer"
}

# Transaction management functions
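The `command -v` guard in `initialize_apt_layer_system` is a general shell feature-detection idiom: prefer the richer helper when it is defined, otherwise fall back to minimal setup. A standalone sketch of the pattern, with illustrative function names (`init_demo_system`, `basic_init`) rather than the real ones:

```bash
#!/bin/bash
# Sketch of the "use the new helper if available, else fall back" idiom.
set -euo pipefail

basic_init() { echo "fallback init"; }

run_init() {
    # command -v also detects shell functions, not just executables on PATH
    if command -v init_demo_system >/dev/null 2>&1; then
        init_demo_system          # full path-managed initialization
    else
        basic_init                # minimal directory setup
    fi
}

run_init                          # prints "fallback init" (helper not yet defined)

init_demo_system() { echo "full init"; }
run_init                          # prints "full init" (helper now defined)
```

This is what lets the scriptlet work both inside the compiled apt-layer.sh (where `08-system-init.sh` defines the helper) and standalone.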

@@ -8,28 +8,37 @@
# LIVE OVERLAY SYSTEM FUNCTIONS
# =============================================================================

# Live overlay state file (with fallbacks for when load_path_config() is not available)
# These will be overridden by load_path_config() when available
LIVE_OVERLAY_STATE_FILE="${LIVE_OVERLAY_STATE_FILE:-/var/lib/apt-layer/live-overlay.state}"
LIVE_OVERLAY_MOUNT_POINT="${LIVE_OVERLAY_MOUNT_POINT:-/var/lib/apt-layer/live-overlay/mount}"
LIVE_OVERLAY_PACKAGE_LOG="${LIVE_OVERLAY_PACKAGE_LOG:-/var/log/apt-layer/live-overlay-packages.log}"
LIVE_OVERLAY_DIR="${LIVE_OVERLAY_DIR:-/var/lib/apt-layer/live-overlay}"
LIVE_OVERLAY_UPPER_DIR="${LIVE_OVERLAY_UPPER_DIR:-/var/lib/apt-layer/live-overlay/upper}"
LIVE_OVERLAY_WORK_DIR="${LIVE_OVERLAY_WORK_DIR:-/var/lib/apt-layer/live-overlay/work}"

# Initialize live overlay system
init_live_overlay_system() {
    log_info "Initializing live overlay system" "apt-layer"

    # Load system paths if available
    if command -v load_path_config >/dev/null 2>&1; then
        load_path_config
    fi

    # Create live overlay directories
    mkdir -p "$LIVE_OVERLAY_DIR" "$LIVE_OVERLAY_UPPER_DIR" "$LIVE_OVERLAY_WORK_DIR"
    mkdir -p "$LIVE_OVERLAY_MOUNT_POINT"

    # Set proper permissions (use sudo if needed)
    if [[ $EUID -eq 0 ]]; then
        # Running as root, use chmod directly
        chmod 755 "$LIVE_OVERLAY_DIR" 2>/dev/null || true
        chmod 700 "$LIVE_OVERLAY_UPPER_DIR" "$LIVE_OVERLAY_WORK_DIR" 2>/dev/null || true
    else
        # Running as regular user, use sudo
        sudo chmod 755 "$LIVE_OVERLAY_DIR" 2>/dev/null || true
        sudo chmod 700 "$LIVE_OVERLAY_UPPER_DIR" "$LIVE_OVERLAY_WORK_DIR" 2>/dev/null || true
    fi

    # Initialize package log if it doesn't exist
@@ -143,7 +152,7 @@ start_live_overlay() {
    # Create overlay mount
    log_info "Creating overlay mount" "apt-layer"
    if mount -t overlay overlay -o "lowerdir=/,upperdir=$LIVE_OVERLAY_UPPER_DIR,workdir=$LIVE_OVERLAY_WORK_DIR" "$LIVE_OVERLAY_MOUNT_POINT"; then
        log_success "Overlay mount created successfully" "apt-layer"

        # Mark overlay as active
@@ -229,8 +238,8 @@ get_live_overlay_status() {
    log_info "Overlay mount point: $LIVE_OVERLAY_MOUNT_POINT" "apt-layer"

    # Show overlay usage
    if [[ -d "$LIVE_OVERLAY_UPPER_DIR" ]]; then
        local usage=$(du -sh "$LIVE_OVERLAY_UPPER_DIR" 2>/dev/null | cut -f1 || echo "unknown")
        log_info "Overlay usage: $usage" "apt-layer"
    fi
@@ -385,9 +394,9 @@ commit_live_overlay() {
# Check if overlay has changes
has_overlay_changes() {
    if [[ -d "$LIVE_OVERLAY_UPPER_DIR" ]]; then
        # Check if upper directory has any content
        if [[ -n "$(find "$LIVE_OVERLAY_UPPER_DIR" -mindepth 1 -maxdepth 1 2>/dev/null)" ]]; then
            return 0
        fi
    fi
@@ -401,12 +410,12 @@ create_layer_from_overlay() {
    local message="$2"

    # Create temporary directory for layer
    local temp_layer_dir="${TEMP_DIR:-/tmp/apt-layer}/live-layer-${layer_name}"
    mkdir -p "$temp_layer_dir"

    # Copy overlay changes to temporary directory
    log_info "Copying overlay changes to temporary layer" "apt-layer"
    if ! cp -a "$LIVE_OVERLAY_UPPER_DIR"/* "$temp_layer_dir/" 2>/dev/null; then
        log_error "Failed to copy overlay changes" "apt-layer"
        rm -rf "$temp_layer_dir"
        return 1
@@ -440,10 +449,10 @@ create_composefs_layer() {
    fi

    # Fallback: create simple squashfs layer
    local layer_file="${BUILD_DIR:-/var/lib/apt-layer/build}/${layer_name}.squashfs"
    mkdir -p "$(dirname "$layer_file")"
    if mksquashfs "$source_dir" "$layer_file" -comp "${SQUASHFS_COMPRESSION:-xz}" -b "${SQUASHFS_BLOCK_SIZE:-1M}"; then
        log_success "Created squashfs layer: $layer_file" "apt-layer"
        return 0
    else
@@ -499,7 +508,7 @@ clean_live_overlay() {
    fi

    # Clean up overlay directories
    rm -rf "$LIVE_OVERLAY_UPPER_DIR"/* "$LIVE_OVERLAY_WORK_DIR"/* 2>/dev/null
# Clean up package log # Clean up package log
rm -f "$LIVE_OVERLAY_PACKAGE_LOG" rm -f "$LIVE_OVERLAY_PACKAGE_LOG"
@ -517,7 +526,7 @@ clean_live_overlay() {
# Initialize live overlay system on script startup # Initialize live overlay system on script startup
init_live_overlay_on_startup() { init_live_overlay_on_startup() {
# Only initialize if not already done # Only initialize if not already done
if [[ ! -d "${UBLUE_LIVE_OVERLAY_DIR:-/var/lib/particle-os/live-overlay}" ]]; then if [[ ! -d "$LIVE_OVERLAY_DIR" ]]; then
init_live_overlay_system init_live_overlay_system
fi fi
} }
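The emptiness test in `has_overlay_changes` can be sketched standalone: `find -mindepth 1 -maxdepth 1` lists only direct children, so its output is empty exactly when the directory has no entries.

```shell
# Standalone sketch of the has_overlay_changes emptiness check.
dir=$(mktemp -d)
before=$(find "$dir" -mindepth 1 -maxdepth 1 2>/dev/null)
touch "$dir/marker"
after=$(find "$dir" -mindepth 1 -maxdepth 1 2>/dev/null)
echo "before: '${before}'"
echo "after:  '${after}'"
rm -rf "$dir"
```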


@@ -0,0 +1,341 @@
#!/bin/bash
# System Initialization and Path Management for apt-layer
# This scriptlet handles system initialization, path management, and directory creation
# Load path configuration
load_path_config() {
local config_file="/usr/local/etc/apt-layer/paths.json"
if [[ ! -f "$config_file" ]]; then
log_error "Path configuration file not found: $config_file" "apt-layer"
return 1
fi
# Load configuration using jq
if ! command -v jq >/dev/null 2>&1; then
log_error "jq is required for path configuration loading" "apt-layer"
return 1
fi
# Export main directory paths
export APT_LAYER_WORKSPACE=$(jq -r '.apt_layer_paths.main_directories.workspace.path' "$config_file")
export APT_LAYER_LOG_DIR=$(jq -r '.apt_layer_paths.main_directories.logs.path' "$config_file")
export APT_LAYER_CACHE_DIR=$(jq -r '.apt_layer_paths.main_directories.cache.path' "$config_file")
# Export workspace subdirectory paths
export BUILD_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.build.path' "$config_file")
export LIVE_OVERLAY_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.live_overlay.path' "$config_file")
export COMPOSEFS_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.composefs.path' "$config_file")
export OSTREE_COMMITS_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.ostree_commits.path' "$config_file")
export DEPLOYMENTS_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.deployments.path' "$config_file")
export HISTORY_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.history.path' "$config_file")
export BOOTLOADER_STATE_DIR=$(jq -r '.apt_layer_paths.workspace_subdirectories.bootloader.path' "$config_file")
export TRANSACTION_STATE=$(jq -r '.apt_layer_paths.workspace_subdirectories.transaction_state.path' "$config_file")
# Export file paths
export DEPLOYMENT_DB=$(jq -r '.apt_layer_paths.files.deployment_db.path' "$config_file")
export CURRENT_DEPLOYMENT_FILE=$(jq -r '.apt_layer_paths.files.current_deployment.path' "$config_file")
export PENDING_DEPLOYMENT_FILE=$(jq -r '.apt_layer_paths.files.pending_deployment.path' "$config_file")
export TRANSACTION_LOG=$(jq -r '.apt_layer_paths.files.transaction_log.path' "$config_file")
log_debug "Path configuration loaded from: $config_file" "apt-layer"
return 0
}
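For reference, a minimal sketch of the `paths.json` layout those `jq` queries assume; the keys mirror the queries above, but the concrete paths here are illustrative, not the shipped defaults:

```json
{
  "apt_layer_paths": {
    "main_directories": {
      "workspace": { "path": "/var/lib/apt-layer" },
      "logs":      { "path": "/var/log/apt-layer" },
      "cache":     { "path": "/var/cache/apt-layer" }
    },
    "workspace_subdirectories": {
      "build":        { "path": "/var/lib/apt-layer/build" },
      "live_overlay": { "path": "/var/lib/apt-layer/live-overlay" }
    },
    "files": {
      "deployment_db": { "path": "/var/lib/apt-layer/deployments.json" }
    }
  }
}
```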
# Initialize apt-layer system directories
initialize_apt_layer_system() {
log_info "Initializing apt-layer system directories..." "apt-layer"
# Load path configuration
if ! load_path_config; then
log_error "Failed to load path configuration" "apt-layer"
return 1
fi
# Create main directories
local main_dirs=("$APT_LAYER_WORKSPACE" "$APT_LAYER_LOG_DIR" "$APT_LAYER_CACHE_DIR")
for dir in "${main_dirs[@]}"; do
if [[ ! -d "$dir" ]]; then
mkdir -p "$dir"
chmod 755 "$dir"
chown root:root "$dir"
log_debug "Created directory: $dir" "apt-layer"
fi
done
# Create workspace subdirectories
local subdirs=(
"$BUILD_DIR"
"$LIVE_OVERLAY_DIR"
"$COMPOSEFS_DIR"
"$OSTREE_COMMITS_DIR"
"$DEPLOYMENTS_DIR"
"$HISTORY_DIR"
"$BOOTLOADER_STATE_DIR"
"$TRANSACTION_STATE"
)
for dir in "${subdirs[@]}"; do
if [[ ! -d "$dir" ]]; then
mkdir -p "$dir"
chmod 755 "$dir"
chown root:root "$dir"
log_debug "Created subdirectory: $dir" "apt-layer"
fi
done
# Create live overlay subdirectories
local overlay_dirs=(
"$LIVE_OVERLAY_DIR/upper"
"$LIVE_OVERLAY_DIR/work"
"$LIVE_OVERLAY_DIR/mount"
)
for dir in "${overlay_dirs[@]}"; do
if [[ ! -d "$dir" ]]; then
mkdir -p "$dir"
chmod 700 "$dir"
chown root:root "$dir"
log_debug "Created overlay directory: $dir" "apt-layer"
fi
done
# Initialize deployment database if it doesn't exist
if [[ ! -f "$DEPLOYMENT_DB" ]]; then
echo '{"deployments": {}, "current": null, "history": []}' > "$DEPLOYMENT_DB"
chmod 644 "$DEPLOYMENT_DB"
chown root:root "$DEPLOYMENT_DB"
log_debug "Initialized deployment database: $DEPLOYMENT_DB" "apt-layer"
fi
log_success "apt-layer system directories initialized" "apt-layer"
return 0
}
# Reinitialize apt-layer system (force recreation)
reinitialize_apt_layer_system() {
log_info "Reinitializing apt-layer system (force recreation)..." "apt-layer"
# Load path configuration
if ! load_path_config; then
log_error "Failed to load path configuration" "apt-layer"
return 1
fi
# Remove existing directories
local dirs_to_remove=(
"$APT_LAYER_WORKSPACE"
"$APT_LAYER_LOG_DIR"
"$APT_LAYER_CACHE_DIR"
)
for dir in "${dirs_to_remove[@]}"; do
if [[ -d "$dir" ]]; then
rm -rf "$dir"
log_debug "Removed directory: $dir" "apt-layer"
fi
done
# Reinitialize
if initialize_apt_layer_system; then
log_success "apt-layer system reinitialized successfully" "apt-layer"
return 0
else
log_error "Failed to reinitialize apt-layer system" "apt-layer"
return 1
fi
}
# Remove apt-layer system (cleanup)
remove_apt_layer_system() {
log_info "Removing apt-layer system (cleanup)..." "apt-layer"
# Load path configuration
if ! load_path_config; then
log_error "Failed to load path configuration" "apt-layer"
return 1
fi
# Stop any running live overlay
if is_live_overlay_active; then
log_warning "Live overlay is active, stopping it first..." "apt-layer"
stop_live_overlay
fi
# Remove all apt-layer directories
local dirs_to_remove=(
"$APT_LAYER_WORKSPACE"
"$APT_LAYER_LOG_DIR"
"$APT_LAYER_CACHE_DIR"
)
for dir in "${dirs_to_remove[@]}"; do
if [[ -d "$dir" ]]; then
rm -rf "$dir"
log_debug "Removed directory: $dir" "apt-layer"
fi
done
log_success "apt-layer system removed successfully" "apt-layer"
return 0
}
# Show apt-layer system status
show_apt_layer_system_status() {
log_info "apt-layer System Status:" "apt-layer"
# Load path configuration
if ! load_path_config; then
log_error "Failed to load path configuration" "apt-layer"
return 1
fi
echo "=== Main Directories ==="
local -A main_dirs=(  # -A: associative array, required for string keys
["Workspace"]="$APT_LAYER_WORKSPACE"
["Logs"]="$APT_LAYER_LOG_DIR"
["Cache"]="$APT_LAYER_CACHE_DIR"
)
for name in "${!main_dirs[@]}"; do
local dir="${main_dirs[$name]}"
if [[ -d "$dir" ]]; then
local size=$(du -sh "$dir" 2>/dev/null | cut -f1)
local perms=$(stat -c "%a" "$dir" 2>/dev/null)
echo "$name: $dir ($size, perms: $perms)"
else
echo "$name: $dir (not found)"
fi
done
echo ""
echo "=== Workspace Subdirectories ==="
local -A subdirs=(
["Build"]="$BUILD_DIR"
["Live Overlay"]="$LIVE_OVERLAY_DIR"
["ComposeFS"]="$COMPOSEFS_DIR"
["OSTree Commits"]="$OSTREE_COMMITS_DIR"
["Deployments"]="$DEPLOYMENTS_DIR"
["History"]="$HISTORY_DIR"
["Bootloader"]="$BOOTLOADER_STATE_DIR"
["Transaction State"]="$TRANSACTION_STATE"
)
for name in "${!subdirs[@]}"; do
local dir="${subdirs[$name]}"
if [[ -d "$dir" ]]; then
local count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
echo "$name: $dir ($count files)"
else
echo "$name: $dir (not found)"
fi
done
echo ""
echo "=== System Files ==="
local -A files=(
["Deployment DB"]="$DEPLOYMENT_DB"
["Current Deployment"]="$CURRENT_DEPLOYMENT_FILE"
["Pending Deployment"]="$PENDING_DEPLOYMENT_FILE"
["Transaction Log"]="$TRANSACTION_LOG"
)
for name in "${!files[@]}"; do
local file="${files[$name]}"
if [[ -f "$file" ]]; then
local size=$(stat -c "%s" "$file" 2>/dev/null)
echo "$name: $file ($size bytes)"
else
echo "$name: $file (not found)"
fi
done
echo ""
echo "=== Live Overlay Status ==="
if is_live_overlay_active; then
echo "🟡 Live overlay is ACTIVE"
echo " Mount point: $LIVE_OVERLAY_DIR/mount"
echo " Upper dir: $LIVE_OVERLAY_DIR/upper"
echo " Work dir: $LIVE_OVERLAY_DIR/work"
else
echo "🟢 Live overlay is INACTIVE"
fi
return 0
}
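The name-to-path tables in `show_apt_layer_system_status` rely on bash associative arrays, which must be declared with `declare -A` (or `local -A`) for string keys to work. A minimal standalone sketch of the lookup-and-report pattern, with arbitrary demo paths:

```shell
# Demo: associative array mapping display names to paths, then a
# per-entry existence report (iteration order is unspecified in bash).
declare -A dirs=(
    ["Workspace"]="/tmp"
    ["Missing"]="/nonexistent-apt-layer-demo"
)
for name in "${!dirs[@]}"; do
    dir="${dirs[$name]}"
    if [[ -d "$dir" ]]; then
        echo "$name: $dir (exists)"
    else
        echo "$name: $dir (not found)"
    fi
done
```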
# Validate path configuration
validate_path_config() {
log_debug "Validating path configuration..." "apt-layer"
local config_file="/usr/local/etc/apt-layer/paths.json"
if [[ ! -f "$config_file" ]]; then
log_error "Path configuration file not found: $config_file" "apt-layer"
return 1
fi
# Validate JSON syntax
if ! jq empty "$config_file" 2>/dev/null; then
log_error "Invalid JSON in path configuration file" "apt-layer"
return 1
fi
# Validate required paths exist in config
local required_paths=(
".apt_layer_paths.main_directories.workspace.path"
".apt_layer_paths.main_directories.logs.path"
".apt_layer_paths.main_directories.cache.path"
)
for path in "${required_paths[@]}"; do
if ! jq -e "$path" "$config_file" >/dev/null 2>&1; then
log_error "Required path not found in config: $path" "apt-layer"
return 1
fi
done
log_debug "Path configuration validation passed" "apt-layer"
return 0
}
# Handle system initialization commands
handle_system_init_commands() {
case "$1" in
"--init")
if initialize_apt_layer_system; then
log_success "apt-layer system initialized successfully" "apt-layer"
return 0
else
log_error "Failed to initialize apt-layer system" "apt-layer"
return 1
fi
;;
"--reinit")
if reinitialize_apt_layer_system; then
log_success "apt-layer system reinitialized successfully" "apt-layer"
return 0
else
log_error "Failed to reinitialize apt-layer system" "apt-layer"
return 1
fi
;;
"--rm-init")
if remove_apt_layer_system; then
log_success "apt-layer system removed successfully" "apt-layer"
return 0
else
log_error "Failed to remove apt-layer system" "apt-layer"
return 1
fi
;;
"--status")
show_apt_layer_system_status
return $?
;;
*)
return 1
;;
esac
}


@@ -54,8 +54,11 @@ Image Management:
     --oci-import        Import OCI image
 System Management:
-    --init              Initialize Particle-OS system
-    --reset             Reset Particle-OS system
+    --init              Initialize apt-layer system
+    --reinit            Reinitialize apt-layer system (force recreation)
+    --rm-init           Remove apt-layer system (cleanup)
+    --reset             Reset apt-layer system
+    --status            Show apt-layer system status
     --help-full         Show detailed help
     --examples          Show usage examples
@@ -78,7 +81,7 @@ EOF
 # Show full detailed usage information
 show_full_usage() {
     cat << 'EOF'
-Particle-OS apt-layer Tool - Enhanced with Container Support and LIVE SYSTEM LAYERING
+apt-layer Tool - Enhanced with Container Support and LIVE SYSTEM LAYERING
 Like rpm-ostree + Vanilla OS Apx for Ubuntu/Debian, now ComposeFS-based
 BASIC LAYER CREATION:
@@ -178,14 +181,14 @@ IMAGE MANAGEMENT:
 SYSTEM MANAGEMENT:
     apt-layer --init
-        # Initialize Particle-OS system
+        # Initialize apt-layer system
     apt-layer --reset
-        # Reset Particle-OS system
+        # Reset apt-layer system
 EXAMPLES:
-    apt-layer particle-os/base/24.04 particle-os/gaming/24.04 steam wine
-    apt-layer --container particle-os/base/24.04 particle-os/dev/24.04 vscode git
+    apt-layer ubuntu-base/24.04 gaming/24.04 steam wine
+    apt-layer --container ubuntu-base/24.04 dev/24.04 vscode git
     apt-layer --dpkg-install curl wget
     apt-layer --live-install firefox
     apt-layer install steam wine
@@ -221,8 +224,8 @@ BASIC LAYER CREATION:
         # Update packages with rollback capability and backup creation
 Examples:
-    apt-layer particle-os/base/24.04 particle-os/gaming/24.04 steam wine
-    apt-layer --container particle-os/base/24.04 particle-os/dev/24.04 vscode git
+    apt-layer ubuntu-base/24.04 gaming/24.04 steam wine
+    apt-layer --container ubuntu-base/24.04 dev/24.04 vscode git
     apt-layer --dpkg-install curl wget
     apt-layer --advanced-install firefox
 EOF
@@ -761,20 +764,50 @@ main() {
     check_incomplete_transactions
     # Check if system needs initialization (skip for help and initialization commands)
-    if [[ "${1:-}" != "--init" && "${1:-}" != "--reset" && "${1:-}" != "--help" && "${1:-}" != "-h" && "${1:-}" != "--help-full" && "${1:-}" != "--examples" && "${1:-}" != "--version" ]]; then
+    if [[ "${1:-}" != "--init" && "${1:-}" != "--reinit" && "${1:-}" != "--rm-init" && "${1:-}" != "--reset" && "${1:-}" != "--status" && "${1:-}" != "--help" && "${1:-}" != "-h" && "${1:-}" != "--help-full" && "${1:-}" != "--examples" && "${1:-}" != "--version" ]]; then
         check_initialization_needed
     fi
     # Parse command line arguments first (before dependency checks)
     case "${1:-}" in
         --init)
-            # Initialize Particle-OS system
-            initialize_particle_os_system
+            # Initialize apt-layer system
+            initialize_apt_layer_system
+            exit 0
+            ;;
+        --reinit)
+            # Reinitialize apt-layer system (force recreation)
+            if command -v reinitialize_apt_layer_system >/dev/null 2>&1; then
+                reinitialize_apt_layer_system
+            else
+                log_error "Reinit function not available" "apt-layer"
+                exit 1
+            fi
+            exit 0
+            ;;
+        --rm-init)
+            # Remove apt-layer system (cleanup)
+            if command -v remove_apt_layer_system >/dev/null 2>&1; then
+                remove_apt_layer_system
+            else
+                log_error "Remove init function not available" "apt-layer"
+                exit 1
+            fi
+            exit 0
+            ;;
+        --status)
+            # Show apt-layer system status
+            if command -v show_apt_layer_system_status >/dev/null 2>&1; then
+                show_apt_layer_system_status
+            else
+                log_error "Status function not available" "apt-layer"
+                exit 1
+            fi
             exit 0
             ;;
         --reset)
-            # Reset Particle-OS system
-            reset_particle_os_system
+            # Reset apt-layer system
+            reset_apt_layer_system
             exit 0
             ;;
         --help|-h)


@@ -0,0 +1,337 @@
#!/bin/bash
# apt-layer Installation Script
# This script installs the apt-layer tool, its dependencies, creates necessary directories and files,
# and sets up the system to use apt-layer. If already installed, it will update the tool.
#
# Usage:
# ./install-apt-layer.sh # Install or update apt-layer
# ./install-apt-layer.sh --uninstall # Remove apt-layer and all its files
# ./install-apt-layer.sh --reinstall # Remove and reinstall (reset to default state)
# ./install-apt-layer.sh --help # Show this help message
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_header() {
echo -e "${BLUE}================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}================================${NC}"
}
# Function to show help
show_help() {
cat << 'EOF'
apt-layer Installation Script
This script installs the apt-layer tool, its dependencies, creates necessary directories and files,
and sets up the system to use apt-layer.
Usage:
./install-apt-layer.sh # Install or update apt-layer
./install-apt-layer.sh --uninstall # Remove apt-layer and all its files
./install-apt-layer.sh --reinstall # Remove and reinstall (reset to default state)
./install-apt-layer.sh --help # Show this help message
What this script does:
- Downloads the latest apt-layer.sh from the repository
- Installs required dependencies (jq, dos2unix, etc.)
- Creates necessary directories (/var/lib/apt-layer, /var/log/apt-layer, etc.)
- Sets up configuration files
- Makes apt-layer executable and available system-wide
- Initializes the apt-layer system
Dependencies:
- curl or wget (for downloading)
- jq (for JSON processing)
- dos2unix (for Windows line ending conversion)
- sudo (for system installation)
EOF
}
# Function to check if running as root
check_root() {
if [[ $EUID -eq 0 ]]; then
print_error "This script should not be run as root. Use sudo for specific commands."
exit 1
fi
}
# Function to check dependencies
check_dependencies() {
print_status "Checking dependencies..."
local missing_deps=()
# Check for curl or wget
if ! command -v curl >/dev/null 2>&1 && ! command -v wget >/dev/null 2>&1; then
missing_deps+=("curl or wget")
fi
# Check for jq
if ! command -v jq >/dev/null 2>&1; then
missing_deps+=("jq")
fi
# Check for dos2unix
if ! command -v dos2unix >/dev/null 2>&1; then
missing_deps+=("dos2unix")
fi
if [[ ${#missing_deps[@]} -gt 0 ]]; then
print_error "Missing required dependencies: ${missing_deps[*]}"
print_status "Installing missing dependencies..."
# Try to install dependencies
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update
sudo apt-get install -y "${missing_deps[@]}"
elif command -v dnf >/dev/null 2>&1; then
sudo dnf install -y "${missing_deps[@]}"
elif command -v yum >/dev/null 2>&1; then
sudo yum install -y "${missing_deps[@]}"
else
print_error "Could not automatically install dependencies. Please install manually:"
print_error " ${missing_deps[*]}"
exit 1
fi
fi
print_success "All dependencies satisfied"
}
# Function to download apt-layer
download_apt_layer() {
print_status "Downloading apt-layer..."
local download_url="https://git.raines.xyz/robojerk/particle-os-tools/raw/branch/main/apt-layer.sh"
local temp_file="/tmp/apt-layer.sh"
# Download using curl or wget
if command -v curl >/dev/null 2>&1; then
if curl -L -o "$temp_file" "$download_url"; then
print_success "Downloaded apt-layer using curl"
else
print_error "Failed to download apt-layer using curl"
return 1
fi
elif command -v wget >/dev/null 2>&1; then
if wget -O "$temp_file" "$download_url"; then
print_success "Downloaded apt-layer using wget"
else
print_error "Failed to download apt-layer using wget"
return 1
fi
else
print_error "No download tool available (curl or wget)"
return 1
fi
# Verify the downloaded file
if [[ ! -f "$temp_file" ]] || [[ ! -s "$temp_file" ]]; then
print_error "Downloaded file is empty or missing"
return 1
fi
# Convert line endings if needed
if command -v dos2unix >/dev/null 2>&1; then
dos2unix "$temp_file" 2>/dev/null || true
fi
# Make executable
chmod +x "$temp_file"
print_success "apt-layer downloaded and prepared"
}
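The curl-or-wget selection above can be factored into a standalone helper; this sketch (`pick_downloader` is a hypothetical name, not part of the installer) just reports which tool would be used, without touching the network:

```shell
# Report which download tool is available, preferring curl.
pick_downloader() {
    if command -v curl >/dev/null 2>&1; then
        echo "curl"
    elif command -v wget >/dev/null 2>&1; then
        echo "wget"
    else
        echo "none"
        return 1
    fi
}
tool=$(pick_downloader || true)
echo "would download with: $tool"
```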
# Function to install apt-layer
install_apt_layer() {
print_status "Installing apt-layer..."
local temp_file="/tmp/apt-layer.sh"
local install_dir="/usr/local/bin"
local config_dir="/usr/local/etc/apt-layer"
# Create installation directory if it doesn't exist
sudo mkdir -p "$install_dir"
# Install apt-layer
sudo cp "$temp_file" "$install_dir/apt-layer"
sudo chmod +x "$install_dir/apt-layer"
# Create configuration directory
sudo mkdir -p "$config_dir"
# Create paths.json configuration
sudo tee "$config_dir/paths.json" >/dev/null << 'PATHS_JSON_PLACEHOLDER'
PATHS_JSON_PLACEHOLDER
# Set proper permissions
sudo chmod 644 "$config_dir/paths.json"
# Initialize apt-layer system
print_status "Initializing apt-layer system..."
if sudo apt-layer --init; then
print_success "apt-layer system initialized"
else
print_warning "apt-layer system initialization failed, but installation completed"
fi
# Clean up temporary file
rm -f "$temp_file"
print_success "apt-layer installed successfully"
print_status "You can now use 'apt-layer --help' to see available commands"
}
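The `PATHS_JSON_PLACEHOLDER` heredoc above is assumed to be filled in by the compile-installer step; the quoted delimiter (`<< 'EOF'`) is what keeps the embedded JSON from undergoing shell expansion. A standalone sketch of that pattern:

```shell
# Quoted heredoc: with 'EOF' quoted, $VARS and backticks pass through literally.
out=$(mktemp)
tee "$out" >/dev/null << 'EOF'
{"example": "quoted heredoc keeps $VARS literal"}
EOF
content=$(cat "$out")
echo "$content"
rm -f "$out"
```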
# Function to uninstall apt-layer
uninstall_apt_layer() {
print_status "Uninstalling apt-layer..."
local install_dir="/usr/local/bin"
local config_dir="/usr/local/etc/apt-layer"
# Remove apt-layer binary
if [[ -f "$install_dir/apt-layer" ]]; then
sudo rm -f "$install_dir/apt-layer"
print_status "Removed apt-layer binary"
fi
# Remove configuration directory
if [[ -d "$config_dir" ]]; then
sudo rm -rf "$config_dir"
print_status "Removed configuration directory"
fi
# Remove apt-layer directories (optional - ask user)
if [[ -d "/var/lib/apt-layer" ]] || [[ -d "/var/log/apt-layer" ]] || [[ -d "/var/cache/apt-layer" ]]; then
echo -e "${YELLOW}Do you want to remove all apt-layer data directories? (y/N)${NC}"
read -r response
if [[ "$response" =~ ^[Yy]$ ]]; then
sudo rm -rf "/var/lib/apt-layer" "/var/log/apt-layer" "/var/cache/apt-layer"
print_status "Removed all apt-layer data directories"
else
print_status "Keeping apt-layer data directories"
fi
fi
print_success "apt-layer uninstalled successfully"
}
# Function to reinstall apt-layer
reinstall_apt_layer() {
print_status "Reinstalling apt-layer..."
# Uninstall first
uninstall_apt_layer
# Install again
install_apt_layer
print_success "apt-layer reinstalled successfully"
}
# Function to check if apt-layer is installed
is_apt_layer_installed() {
command -v apt-layer >/dev/null 2>&1
}
# Function to check if apt-layer is up to date
check_for_updates() {
print_status "Checking for updates..."
if ! is_apt_layer_installed; then
return 0 # Not installed, so no update needed
fi
# For now, we'll always download the latest version
# In the future, this could check version numbers or timestamps
return 1 # Update needed
}
# Main installation function
main_install() {
print_header "apt-layer Installation"
# Check if running as root
check_root
# Check dependencies
check_dependencies
# Check if already installed
if is_apt_layer_installed; then
print_status "apt-layer is already installed"
if check_for_updates; then
print_status "apt-layer is up to date"
return 0
else
print_status "Updating apt-layer..."
fi
else
print_status "Installing apt-layer..."
fi
# Download and install
if download_apt_layer && install_apt_layer; then
print_success "apt-layer installation completed successfully"
print_status "You can now use 'apt-layer --help' to see available commands"
return 0
else
print_error "apt-layer installation failed"
return 1
fi
}
# Main function
main() {
case "${1:-}" in
--help|-h)
show_help
exit 0
;;
--uninstall)
print_header "apt-layer Uninstallation"
uninstall_apt_layer
exit 0
;;
--reinstall)
print_header "apt-layer Reinstallation"
reinstall_apt_layer
exit 0
;;
"")
main_install
exit $?
;;
*)
print_error "Unknown option: $1"
show_help
exit 1
;;
esac
}
# Run main function with all arguments
main "$@"

test-path-management.sh (new file, 183 lines)

@@ -0,0 +1,183 @@
#!/bin/bash
# Test script for Particle-OS path management and initialization
# Tests the new configuration-based path system and initialization commands
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Test configuration
APT_LAYER_SCRIPT="./apt-layer.sh"
TEST_DIR="/tmp/particle-os-test-$$"
# Function to cleanup test environment
cleanup() {
log_info "Cleaning up test environment..."
if [[ -d "$TEST_DIR" ]]; then
rm -rf "$TEST_DIR"
fi
# Remove any test directories that might have been created
sudo rm -rf "/var/lib/apt-layer-test" 2>/dev/null || true
sudo rm -rf "/var/log/apt-layer-test" 2>/dev/null || true
sudo rm -rf "/var/cache/apt-layer-test" 2>/dev/null || true
sudo rm -rf "/usr/local/etc/apt-layer-test" 2>/dev/null || true
}
# Function to run test
run_test() {
local test_name="$1"
local test_command="$2"
local expected_exit_code="${3:-0}"
log_info "Running test: $test_name"
log_info "Command: $test_command"
# Capture the command's exit status directly; re-reading $? after another
# test has run reports the wrong code.
local actual_rc=0
eval "$test_command" || actual_rc=$?
if [[ $actual_rc -eq $expected_exit_code ]]; then
log_success "Test passed: $test_name"
return 0
else
log_error "Test failed: $test_name (exit code: $actual_rc, expected: $expected_exit_code)"
return 1
fi
}
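Under `set -euo pipefail`, a failing command normally aborts the script, and `$?` is easily clobbered by the next test that runs; the `|| rc=$?` idiom records the status safely. A standalone sketch:

```shell
# Capture a command's exit status without tripping set -e.
set -euo pipefail
rc1=0
false || rc1=$?   # 'false' exits 1; '|| rc1=$?' records it and suppresses -e
rc2=0
true || rc2=$?    # succeeds, so rc2 stays 0
echo "false -> $rc1, true -> $rc2"
```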
# Main test execution
main() {
log_info "Starting Particle-OS path management tests"
# Check if apt-layer.sh exists
if [[ ! -f "$APT_LAYER_SCRIPT" ]]; then
log_error "apt-layer.sh not found. Please run from the tools directory."
exit 1
fi
# Make sure it's executable
chmod +x "$APT_LAYER_SCRIPT"
# Test 1: Check if new commands are available
log_info "=== Test 1: Check new initialization commands ==="
run_test "Help command shows new options" "$APT_LAYER_SCRIPT --help | grep -E '(--init|--reinit|--rm-init|--status)'" 0
# Test 2: Test system status (should work without initialization)
log_info "=== Test 2: Test system status command ==="
run_test "System status command" "$APT_LAYER_SCRIPT --status" 0
# Test 3: Test initialization
log_info "=== Test 3: Test system initialization ==="
run_test "System initialization" "sudo $APT_LAYER_SCRIPT --init" 0
# Test 4: Verify directories were created
log_info "=== Test 4: Verify directories were created ==="
local dirs=(
"/var/lib/apt-layer"
"/var/log/apt-layer"
"/var/cache/apt-layer"
"/usr/local/etc/apt-layer"
"/var/lib/apt-layer/live-overlay"
"/var/lib/apt-layer/deployments"
"/var/lib/apt-layer/backups"
"/var/lib/apt-layer/images"
"/var/lib/apt-layer/layers"
"/etc/apt-layer/bootloader"
"/etc/apt-layer/kargs"
"/var/lib/apt-layer/containers"
"/var/lib/apt-layer/oci-images"
"/var/lib/apt-layer/exports"
"/var/lib/apt-layer/composefs"
"/var/cache/apt-layer/composefs"
)
for dir in "${dirs[@]}"; do
if [[ -d "$dir" ]]; then
log_success "Directory exists: $dir"
else
log_error "Directory missing: $dir"
return 1
fi
done
# Test 5: Test system status after initialization
log_info "=== Test 5: Test system status after initialization ==="
run_test "System status after init" "$APT_LAYER_SCRIPT --status" 0
# Test 6: Test reinitialization
log_info "=== Test 6: Test system reinitialization ==="
run_test "System reinitialization" "sudo $APT_LAYER_SCRIPT --reinit" 0
# Test 7: Verify directories still exist after reinit
log_info "=== Test 7: Verify directories after reinit ==="
for dir in "${dirs[@]}"; do
if [[ -d "$dir" ]]; then
log_success "Directory exists after reinit: $dir"
else
log_error "Directory missing after reinit: $dir"
return 1
fi
done
# Test 8: Test live overlay start (should work now)
log_info "=== Test 8: Test live overlay start ==="
run_test "Live overlay start" "sudo $APT_LAYER_SCRIPT --live-overlay start" 0
# Test 9: Test live overlay status
log_info "=== Test 9: Test live overlay status ==="
run_test "Live overlay status" "$APT_LAYER_SCRIPT --live-overlay status" 0
# Test 10: Test live overlay stop
log_info "=== Test 10: Test live overlay stop ==="
run_test "Live overlay stop" "sudo $APT_LAYER_SCRIPT --live-overlay stop" 0
# Test 11: Test system removal (cleanup)
log_info "=== Test 11: Test system removal ==="
run_test "System removal" "sudo $APT_LAYER_SCRIPT --rm-init" 0
# Test 12: Verify directories were removed
log_info "=== Test 12: Verify directories were removed ==="
for dir in "${dirs[@]}"; do
if [[ ! -d "$dir" ]]; then
log_success "Directory removed: $dir"
else
log_warning "Directory still exists: $dir"
fi
done
log_success "All Particle-OS path management tests completed successfully!"
}
# Trap to ensure cleanup on exit
trap cleanup EXIT
# Run main test
main "$@"