Cleanup and archived things not in scope

parent eb473b1f37
commit 5ac26d0800

39 changed files with 1433 additions and 27827 deletions

.gitignore (vendored): 3 changes
@@ -20,7 +20,10 @@ tmp/
*.tmp

# AI-generated fix scripts and temporary work
.scratchpad/
scratchpad/

# Compiled scripts (these are generated from source)
# Uncomment if you want to exclude compiled scripts
@@ -1,143 +0,0 @@
# 🎉 Major Milestone: Official ComposeFS Integration Complete

**Date**: January 27, 2025
**Status**: ✅ **COMPLETED**

## 🎯 What Was Accomplished

### **Official ComposeFS Tools Integration**
- ✅ **Official ComposeFS Tools Working**: Successfully tested and functional
- ✅ **Automatic Backend Selection**: Particle-OS detects and uses official tools when available
- ✅ **Fallback Support**: Alternative implementation available if needed
- ✅ **Production Ready**: Native C implementation with kernel optimizations

### **Alternative Implementation Archived**
- ✅ **composefs-alternative.sh ARCHIVED**: Moved to `archive/composefs-alternative.sh`
- ✅ **Archive Notice Created**: `archive/COMPOSEFS_ARCHIVE_NOTICE.md` explains the transition
- ✅ **Documentation Updated**: All documentation reflects official tool usage
- ✅ **Clean Codebase**: Removed redundant implementation from main directory

## 🚀 Benefits Achieved

### **Production Readiness**
- **Official Tools**: Uses `mkcomposefs` and `mount.composefs` from upstream
- **Standards Compliance**: Full compliance with official ComposeFS specification
- **Security**: fs-verity support for filesystem integrity verification
- **Performance**: Page cache sharing and EROFS integration
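A minimal sketch of driving the official tools listed above directly; the source directory, image name, and `objects` store path are illustrative, and exact flags can differ between composefs releases:

```bash
# Build a content-addressed object store plus an image that references it.
# --digest-store writes file payloads into ./objects as fs-verity-ready blobs.
mkcomposefs --digest-store=objects /path/to/rootfs rootfs.cfs

# Mount the image read-only; basedir tells the mount helper where the payload objects live.
sudo mount -t composefs rootfs.cfs -o basedir=objects /mnt/rootfs
```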

### **Ecosystem Integration**
- **OSTree Integration**: Better integration with OSTree for atomic updates
- **Podman Support**: Enhanced integration with Podman's ComposeFS support
- **Flatpak Compatibility**: Prepared for future Flatpak ComposeFS support
- **Container Runtime**: Better integration with modern container workflows

### **Maintenance Benefits**
- **Upstream Maintained**: Official tools maintained by Red Hat and the containers community
- **Reduced Maintenance**: No need to maintain custom ComposeFS implementation
- **Bug Fixes**: Automatic benefit from upstream bug fixes and improvements
- **Feature Updates**: Access to new features as they're added upstream

## 📊 Technical Details

### **Package Status**
- **Repository**: https://salsa.debian.org/debian/composefs/
- **Maintainer**: Roland Hieber (rhi@pengutronix.de)
- **Upstream**: https://github.com/containers/composefs
- **License**: BSD 2-Clause "Simplified" License
- **Status**: ⏳ **READY FOR UPLOAD - AWAITING SPONSORSHIP** (Debian Bug #1064457)

### **Integration Features**
- **Automatic Detection**: Particle-OS automatically detects official tools
- **Graceful Fallback**: Falls back to alternative implementation if needed
- **Source Installation**: `--official-install` command for source builds
- **Package Installation**: Will support `sudo apt install composefs-tools` when available
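A rough sketch of how the detection and fallback described above can be wired up in shell; the function name and fallback path are illustrative, not the actual apt-layer internals:

```bash
#!/bin/bash
# Pick the ComposeFS backend: prefer the official tools, fall back to the archived script.
select_composefs_backend() {
    if command -v mkcomposefs >/dev/null 2>&1 && command -v mount.composefs >/dev/null 2>&1; then
        echo "official"
    elif [[ -x /usr/local/bin/composefs-alternative.sh ]]; then
        echo "alternative"
    else
        echo "none"
    fi
}

case "$(select_composefs_backend)" in
    official)    echo "Using official ComposeFS tools" ;;
    alternative) echo "Falling back to archived composefs-alternative.sh" ;;
    none)        echo "No ComposeFS backend available" >&2; exit 1 ;;
esac
```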

### **Usage Examples**
```bash
# Install official tools (when available)
sudo apt install composefs-tools

# Or install from source
composefs-alternative.sh --official-install

# Check status
composefs-alternative.sh official-status

# Use official tools automatically
composefs-alternative.sh create my-image /path/to/base
composefs-alternative.sh mount my-image /mnt/point
```

## 🔄 Migration Path

### **For Users**
1. **Automatic**: Particle-OS automatically detects and uses official tools
2. **Manual Installation**: Install official tools when available in repositories
3. **Source Build**: Use `--official-install` for immediate access
4. **Fallback**: Alternative implementation remains available if needed

### **For Developers**
1. **Updated Documentation**: All docs reflect official tool usage
2. **Archived Implementation**: Alternative implementation preserved in archive
3. **Testing**: Official tools tested and working
4. **Future Development**: Focus on official tool integration and enhancements

## 📈 Impact on Particle-OS

### **Architecture Validation**
- **Approach Confirmed**: Official ComposeFS integration validates Particle-OS architecture
- **Standards Compliance**: Full compliance with official ComposeFS specification
- **Ecosystem Alignment**: Better alignment with container ecosystem standards
- **Future Proofing**: Positioned for future ComposeFS developments

### **User Experience**
- **Simplified**: Users get official, production-ready tools
- **Reliable**: Official tools are well-tested and maintained
- **Compatible**: Better compatibility with other ComposeFS tools
- **Secure**: Enhanced security with fs-verity support

### **Development Focus**
- **Reduced Maintenance**: Less time maintaining custom implementation
- **Enhanced Features**: Access to official tool features and improvements
- **Community Alignment**: Better alignment with container community
- **Standards Compliance**: Full compliance with official specifications

## 🎯 Next Steps

### **Immediate (Completed)**
- ✅ Archive alternative implementation
- ✅ Update documentation
- ✅ Test official tools integration
- ✅ Create archive notice

### **Short Term**
- [ ] Test full integration workflow
- [ ] Update dependency checking for package availability
- [ ] Performance benchmarking
- [ ] User documentation updates

### **Medium Term**
- [ ] Package integration when available in repositories
- [ ] Enhanced OSTree integration
- [ ] Podman integration testing
- [ ] Performance optimization

### **Long Term**
- [ ] Flatpak integration
- [ ] Cloud deployment optimization
- [ ] Advanced features integration
- [ ] Community adoption

## 🏆 Conclusion

This milestone represents a **major achievement** for Particle-OS:

1. **Production Readiness**: Official ComposeFS tools provide production-ready functionality
2. **Standards Compliance**: Full compliance with official ComposeFS specification
3. **Ecosystem Integration**: Better integration with the container ecosystem
4. **Maintenance Reduction**: Reduced maintenance burden with upstream tools
5. **Future Proofing**: Positioned for future ComposeFS developments

The successful integration of official ComposeFS tools **validates Particle-OS's approach** and positions it as a **serious contender** in the immutable Ubuntu ecosystem. The archiving of the alternative implementation demonstrates **maturity and focus** on production-ready solutions.

**Particle-OS is now ready for production use with official ComposeFS tools!** 🚀
@@ -5,8 +5,8 @@ This document catalogs all scripts in the tools directory and their purposes.
## Core Scripts (KEEP)

### Main Tools
- **apt-layer.sh** - Main apt-layer tool. Mimics rpm-ostree but for deb packages. (compiled from scriptlets)
- **composefs-alternative.sh** - ComposeFS management tool (compiled from scriptlets)
- **apt-layer.sh** - Main apt-layer tool. Mimics rpm-ostree but for deb packages. (compiled from scriptlets, now supports atomic OSTree commits and robust overlay/dpkg install with official ComposeFS tools)
- **composefs-alternative.sh** - ComposeFS management tool (archived; official ComposeFS tools are now default)
- **bootc-alternative.sh** - BootC management tool (compiled from scriptlets)
- **bootupd-alternative.sh** - BootUpd management tool (compiled from scriptlets)
- **../orchestrator/orchestrator.sh** - Main orchestrator for all tools (moved to orchestrator directory)
@@ -37,42 +37,9 @@ This document catalogs all scripts in the tools directory and their purposes.
- **compile-windows.bat** - Windows batch compilation script
- **compile-windows.ps1** - Windows PowerShell compilation script

## Redundant Fix Scripts (MOVE TO ARCHIVE)
## Redundant Fix Scripts (ARCHIVED)

These scripts were created during development to fix specific issues but are now redundant:

### Permission Fixes
- **fix-system-permissions.sh** - Fixed system permissions (redundant)
- **fix-apt-layer-permissions.sh** - Fixed apt-layer permissions (redundant)
- **fix-apt-layer-permissions-final.sh** - Final apt-layer permission fix (redundant)
- **fix-permissions-complete.sh** - Complete permission fix (redundant)

### Function Fixes
- **fix-missing-functions.sh** - Fixed missing functions (redundant)
- **fix-remaining-tools.sh** - Fixed remaining tools (redundant)
- **fix-all-particle-tools.sh** - Fixed all tools (redundant)

### Configuration Fixes
- **fix-config.sh** - Fixed configuration (redundant)
- **fix-config-better.sh** - Better configuration fix (redundant)
- **create-clean-config.sh** - Created clean config (redundant)
- **restore-config.sh** - Restored configuration (redundant)
- **setup-directories.sh** - Setup directories (redundant)

### Help Fixes
- **fix-help-syntax.sh** - Fixed help syntax (redundant)
- **final-help-fix.sh** - Final help fix (redundant)
- **comprehensive-fix.sh** - Comprehensive fix (redundant)

### Quick Fixes
- **quick-fix-particle-os.sh** - Quick fix (redundant)

### Testing Scripts
- **test-source-logging.sh** - Test source logging (redundant)
- **test-source-logging-fixed.sh** - Test fixed source logging (redundant)
- **test-logging-functions.sh** - Test logging functions (redundant)
- **test-line-endings.sh** - Test line endings (redundant)
- **dos2unix.sh** - Convert line endings (redundant)

All fix and test scripts have been moved to archive/ for historical reference. The workspace is now clean and only contains essential scripts for development and deployment.

## Source Code (KEEP)
@@ -95,26 +62,7 @@ These scripts were created during development to fix specific issues but are now
## Archive (ALREADY ARCHIVED)

The archive directory contains:
- Old test scripts
- All old test scripts and fix scripts
- Previous versions of tools
- Deprecated integration scripts
- Backup files

## Cleanup Actions Required

1. **Move redundant fix scripts to archive/**
2. **Update documentation to reflect current state**
3. **Remove references to archived scripts from documentation**
4. **Keep only the essential scripts for development and deployment**

## Essential Scripts for Development

For development work, you only need:
- Source scriptlets in `src/` directories
- Compilation scripts in each `src/` directory
- Main compiled tools (apt-layer.sh, etc.)
- Installation scripts
- Testing scripts
- Documentation

All fix scripts can be safely archived as their fixes have been incorporated into the source scriptlets.
- Backup files
@@ -148,8 +148,15 @@ sudo ./install-particle-os.sh
# Test apt-layer package management
apt-layer install-packages curl wget

# Test composefs image creation
composefs-alternative create test-image /tmp/test-source
# Test atomic OSTree workflow
apt-layer ostree compose install curl wget
apt-layer ostree log
apt-layer ostree status
apt-layer ostree rollback

# Test ComposeFS image creation (official tools)
mkcomposefs testdir test.cfs
composefs-fuse test.cfs /mnt/test-cfs

# Test bootc image building
bootc-alternative build test-image
@@ -180,6 +187,21 @@ time composefs-alternative create large-image /large-source
time particle-orchestrator deploy-image test-image
```

## Overlay/dpkg Install Workflow

# Download .deb files on host
sudo apt-get install --download-only htop

# Copy .deb files to overlay
sudo cp /var/cache/apt/archives/*.deb /var/lib/particle-os/live-overlay/mount/tmp/packages/

# Install in overlay with dpkg
sudo chroot /var/lib/particle-os/live-overlay/mount dpkg -i /tmp/packages/*.deb

# Clean up before commit
sudo rm -rf /var/lib/particle-os/live-overlay/mount/tmp/packages/
sudo rm -rf /var/lib/particle-os/live-overlay/mount/var/cache/apt/archives/*
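The changelog also mentions conditional DNS injection for WSL environments; a hedged sketch of that step, using the same overlay path as the commands above (the nameserver value is only an example):

```bash
# Make name resolution work inside the overlay chroot (WSL often ships an empty resolv.conf).
OVERLAY_ROOT=/var/lib/particle-os/live-overlay/mount
if ! grep -q '^nameserver' "$OVERLAY_ROOT/etc/resolv.conf" 2>/dev/null; then
    echo "nameserver 1.1.1.1" | sudo tee "$OVERLAY_ROOT/etc/resolv.conf" >/dev/null
fi
sudo chroot "$OVERLAY_ROOT" apt-get update
```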

## Troubleshooting

### Debug Mode
apt-layer.sh (14542 lines): file diff suppressed because it is too large.
@@ -8,11 +8,13 @@ This document compares Particle-OS tools against their official counterparts and

### **apt-layer (Particle-OS)**
- **Package Manager**: apt/dpkg (Ubuntu/Debian)
- **Backend**: ComposeFS (squashfs + overlayfs)
- **Architecture**: Layer-based with live overlay system
- **Backend**: ComposeFS (official tools, fallback to archived alternative)
- **Architecture**: Commit-based with atomic OSTree updates, robust overlay/dpkg install workflow
- **Target**: Ubuntu-based immutable systems
- **Features**:
  - Atomic OSTree commit per package operation
  - Live package installation without reboot
  - Robust overlay/dpkg install (offline .deb support, DNS fixes)
  - Container-based layer creation (Apx-style)
  - OCI export/import integration
  - Multi-tenant support
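Conceptually, "one OSTree commit per package operation" looks roughly like the following; the repo path, branch name, and overlay path are illustrative, and apt-layer wraps this internally:

```bash
# Commit the merged overlay tree as a new, atomic deployment candidate.
ostree --repo=/var/lib/particle-os/ostree-repo commit \
    --branch=particle-os/stable \
    --subject="install: curl wget" \
    /var/lib/particle-os/live-overlay/mount

# Inspect commit history; rolling back means redeploying the previous commit.
ostree --repo=/var/lib/particle-os/ostree-repo log particle-os/stable
```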
@@ -67,15 +69,7 @@ This document compares Particle-OS tools against their official counterparts and

## 2. composefs-alternative vs Official ComposeFS

### **composefs-alternative (Particle-OS)**
- **Implementation**: Shell script wrapper around official tools
- **Features**:
  - Image creation from directories
  - Layer management and mounting
  - Content verification with hash checking
  - Integration with apt-layer and bootc-alternative
  - Backup and rollback capabilities
  - Multi-format support (squashfs, overlayfs)
**Note:** composefs-alternative.sh is now archived. Official ComposeFS tools (`mkcomposefs`, `mount.composefs`) are used by default for all atomic package management in apt-layer. Fallback to the alternative is available if needed.

**How composefs-alternative Handles Overlayfs**:
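The body of that section is cut off by this hunk; the overlayfs pattern the archived implementation relies on is essentially the standard one, sketched here with illustrative directory names:

```bash
# Read-only lower layer (the squashfs/ComposeFS image) plus a writable upper layer.
mkdir -p /var/lib/particle-os/overlay/{lower,upper,work,merged}
mount -t squashfs /var/lib/particle-os/images/base.squashfs /var/lib/particle-os/overlay/lower
mount -t overlay overlay \
    -o lowerdir=/var/lib/particle-os/overlay/lower,upperdir=/var/lib/particle-os/overlay/upper,workdir=/var/lib/particle-os/overlay/work \
    /var/lib/particle-os/overlay/merged
```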

@@ -478,3 +472,6 @@ The combination of apt-layer, composefs-alternative, bootc-alternative, dracut-m
**However**, the official tools offer **production-proven reliability** and **enterprise-grade stability** that should not be overlooked when choosing a solution for critical environments.

**Recommendation**: Use Particle-OS for **innovation and Ubuntu-specific features**, but consider official tools for **production-critical deployments** where stability and community support are paramount.

## Archive and Workspace Status
All test/fix scripts have been archived and the workspace is clean. Only essential scripts and documentation remain in the root directory.
Binary file not shown.
@@ -27,16 +27,10 @@ Particle-OS is built with a simple philosophy: **desktop computing should be sim
### Core Components

#### 1. **apt-layer** - Atomic Package Management
- Ubuntu package management with atomic transactions
- Live overlay system for safe package installation
- Rollback capabilities for failed updates
- Desktop-friendly package management
- Ubuntu package management with atomic transactions, live overlay system, rollback capabilities, and now true atomic OSTree commits per package operation. The new workflow supports offline .deb install, a robust overlay system, and DNS fixes for WSL environments. Official ComposeFS tools are used for all image creation and mounting.
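Day-to-day use mirrors rpm-ostree; a short example using the commands documented in the testing guide (the package name is just a placeholder):

```bash
apt-layer ostree compose install htop   # install packages as a new atomic commit
apt-layer ostree status                 # show staged/booted deployments
apt-layer ostree rollback               # return to the previous deployment
```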

#### 2. **composefs-alternative** - Layered Filesystem
- Content-addressable layered filesystem using overlayfs
- Efficient storage and fast boot times
- Desktop-optimized layer management
- Simple layer creation and management
- Content-addressable layered filesystem using overlayfs. **Note:** composefs-alternative.sh is now archived; official ComposeFS tools (`mkcomposefs`, `mount.composefs`) are used by default for all atomic package management in apt-layer. Fallback to the alternative is available if needed.

#### 3. **bootupd-alternative** - Bootloader Management
- UEFI and GRUB integration for desktop systems
plan.md (247 lines)

@@ -1,247 +0,0 @@
# Particle-OS Development Plan

## 🎯 **EXECUTIVE SUMMARY**

Particle-OS is an immutable Ubuntu-based operating system inspired by uBlue-OS, Bazzite, and Fedora uCore. The system provides atomic, layered system updates using Ubuntu-specific tools and technologies, filling a gap in the Ubuntu ecosystem for immutable system management.

**Current Status**: B+ (Good with room for enhancement)
**Next Phase**: Production Readiness & Security Enhancement
**Timeline**: 3-6 months to production-ready status

## 📊 **CURRENT STATE ASSESSMENT**

### ✅ **COMPLETED MAJOR MILESTONES**
- **Particle-OS Rebranding** - Complete system rebranding from uBlue-OS to Particle-OS
- **Script Location Standardization** - Professional installation system with `/usr/local/bin/` deployment
- **Self-Initialization System** - `--init` and `--reset` commands for automatic setup
- **Enhanced Error Messages** - Comprehensive dependency checking and actionable error messages
- **Source Scriptlet Updates** - All runtime improvements now reflected in source files
- **OCI Integration Fixes** - Configurable paths and Particle-OS branding
- **Codebase Cleanup** - Moved all redundant fix scripts to archive, organized essential scripts
- **DKMS Testing Infrastructure** - Comprehensive DKMS test suite created with 12 test cases
- **Help Output Optimization** - Concise, rpm-ostree-style help output implemented
- **Version Command Implementation** - Professional version output with compilation time and features
- **Bazzite-Style Status Implementation** - Professional deployment tracking with staged/booted/rollback images

### 🔄 **CURRENT PRIORITIES**
1. **Test installation system** - Validate the standardized installation on VM
2. **Component testing** - Test ComposeFS, apt-layer, bootc, and bootupd functionality
3. **Integration testing** - Test full workflow from layer creation to boot
4. **Run DKMS tests on VM** - Execute comprehensive DKMS test suite on target system
5. **Compilation system enhancements** - Add dependency checking to compile scripts

## 🚀 **PHASE 1: IMMEDIATE ACTIONS (Weeks 1-2)**

### **Testing & Validation**
- [ ] **Install and test standardized scripts** - Run `sudo ./install-particle-os.sh` on VM
- [ ] **Verify tool accessibility** - Confirm all tools are in PATH and executable
- [ ] **Test basic commands** - Run `--help` and `--version` on all tools
- [ ] **Verify configuration** - Check that particle-config.sh is properly loaded
- [ ] **Run DKMS test suite** - Execute `test-dkms-functionality.sh` on target system

### **Component Testing**
- [ ] **Test apt-layer** - Create a minimal layer from Ubuntu base
- [ ] **Test composefs** - Create and mount a simple image
- [ ] **Test bootc** - Build a bootable image from a ComposeFS layer
- [ ] **Test bootupd** - Add a boot entry for a ComposeFS/bootc image
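A quick smoke-test pass over the components could look like this sketch, built from commands that appear elsewhere in the project documentation (the image name and source path are placeholders):

```bash
#!/bin/bash
set -euo pipefail

# Basic sanity checks for each tool before deeper integration testing.
apt-layer --version
composefs-alternative --help >/dev/null
bootc-alternative --help >/dev/null
bootupd-alternative --help >/dev/null

# Minimal end-to-end exercise: build a tiny image and mount it.
mkdir -p /tmp/cfs-src && echo "hello" > /tmp/cfs-src/hello.txt
composefs-alternative create smoke-test /tmp/cfs-src
composefs-alternative mount smoke-test /mnt/smoke-test
```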

### **Integration Testing**
- [ ] **Test apt-layer + composefs** - Layer packages and verify atomicity
- [ ] **Test bootc + composefs** - Boot a layered image in QEMU/VM
- [ ] **Test orchestrator** - Run a full transaction (install, rollback, update)
- [ ] **Test full workflow** - Complete pipeline from layer creation to boot
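For the QEMU boot test, something along these lines is usually enough; the image path and memory size are placeholders:

```bash
# Boot the produced disk image headless; Ctrl-A then X exits the serial console.
qemu-system-x86_64 \
    -m 2048 \
    -enable-kvm \
    -nographic \
    -drive file=/var/lib/particle-os/images/test-image.img,format=raw
```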

## 🔧 **PHASE 2: PRODUCTION READINESS (Weeks 3-8)**

### **High Priority Enhancements**

#### **2.1 Official ComposeFS Integration**
- [ ] **Install EROFS utilities** - `sudo apt install erofs-utils erofsfuse`
- [ ] **Test EROFS functionality** - Verify mkfs.erofs and mount.erofs work correctly
- [ ] **Integrate with composefs-alternative** - Use EROFS for metadata trees
- [ ] **Add EROFS compression** - Implement LZ4 and Zstandard compression
- [ ] **Test EROFS performance** - Benchmark against current SquashFS approach
- [ ] **Add detection and fallback logic** - Graceful fallback when tools aren't available
- [ ] **Implement fs-verity** - Add filesystem integrity verification
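fs-verity itself is exercised with the `fsverity` tool from fsverity-utils; a hedged sketch of per-image verification (the image path is a placeholder, and the backing filesystem must have the verity feature enabled):

```bash
# Seal an image file; afterwards the kernel verifies every read against its Merkle tree.
fsverity enable /var/lib/particle-os/images/base.cfs

# Record the measurement so later runs can compare against the expected digest.
fsverity measure /var/lib/particle-os/images/base.cfs
```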

#### **2.2 Enhanced Security with skopeo**
- [ ] **Replace container runtime inspection** - Use skopeo inspect instead of podman/docker inspect
- [ ] **Add signature verification** - Use skopeo for image signature verification
- [ ] **Implement digest comparison** - Use skopeo for proper digest comparison
- [ ] **Add direct registry operations** - Use skopeo for registry operations
- [ ] **Enhance security scanning** - Use skopeo for image vulnerability scanning
- [ ] **Add format conversion support** - Use skopeo for converting between formats
- [ ] **Update bootc-alternative.sh** - Replace current skopeo usage with enhanced integration

#### **2.3 Production-Ready BootC**
- [ ] **Evaluate Rust-based BootC** - Assess official BootC for production deployments
- [ ] **Keep current shell implementation** - Maintain Ubuntu-specific features
- [ ] **Add comprehensive container validation** - Beyond current checks
- [ ] **Implement Kubernetes-native patterns** - Add Kubernetes integration
- [ ] **Add memory safety considerations** - Address shell script limitations

### **Medium Priority Improvements**

#### **2.4 Bootupd Simplification**
- [ ] **Install overlayroot** - `sudo apt install overlayroot`
- [ ] **Test overlayroot functionality** - Verify read-only root with overlayfs works
- [ ] **Integrate with dracut-module** - Use overlayroot for boot-time immutability
- [ ] **Focus on UEFI/systemd-boot** - Simplify to modern bootloader support
- [ ] **Add secure boot integration** - Implement secure boot capabilities
- [ ] **Add bootloader signing** - Implement trusted boot capabilities

#### **2.5 Performance Optimization**
- [ ] **Add parallel hash generation** - For large directories
- [ ] **Implement layer caching** - For frequently used components
- [ ] **Add memory-efficient streaming** - Optimize memory usage
- [ ] **Optimize overlayfs mounting** - Performance tuning for overlayfs
- [ ] **Add compression optimization** - zstd:chunked support
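Parallel hash generation for large directories can be sketched with standard tools; the layer path and job count here are arbitrary:

```bash
# Hash every file in the tree using 8 parallel workers, then sort for a stable manifest.
find /path/to/layer -type f -print0 \
    | xargs -0 -P 8 -n 32 sha256sum \
    | sort -k2 > layer.manifest
```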

## 📈 **PHASE 3: ADVANCED FEATURES (Weeks 9-16)**

### **Comprehensive Testing**
- [ ] **Create automated test suite** - For ComposeFS operations
- [ ] **Add integration tests** - For bootc deployment pipeline
- [ ] **Implement bootupd testing** - Functionality testing
- [ ] **Add performance benchmarking** - Performance testing
- [ ] **Create security validation** - Security testing

### **Monitoring and Health Checks**
- [ ] **Implement system health monitoring** - System health checks
- [ ] **Add performance metrics collection** - Performance monitoring
- [ ] **Create alerting for system issues** - Alerting system
- [ ] **Add diagnostic tools** - Troubleshooting tools
- [ ] **Implement automated recovery** - Recovery procedures

### **Documentation Enhancement**
- [ ] **Add production deployment guides** - Production documentation
- [ ] **Create troubleshooting documentation** - Troubleshooting guides
- [ ] **Add performance tuning guides** - Performance documentation
- [ ] **Create security hardening documentation** - Security guides
- [ ] **Add migration guides** - Migration documentation

## 🎯 **PHASE 4: ECOSYSTEM INTEGRATION (Weeks 17-24)**

### **Ubuntu Ecosystem Integration**
- [ ] **Test fuse-overlayfs** - Evaluate for rootless container support
- [ ] **Add overlayfs optimization** - Implement performance tuning
- [ ] **Update dependency checking** - Add EROFS and overlayfs tools
- [ ] **Add package installation** - Include tools in installation scripts
- [ ] **Create configuration options** - Allow users to choose between tools
- [ ] **Document tool usage** - Create guides for using tools

### **Enterprise Features**
- [ ] **Multi-tenant support** - Enterprise multi-tenant capabilities
- [ ] **Compliance frameworks** - Regulatory compliance features
- [ ] **Enterprise integration** - Enterprise system integration
- [ ] **Cloud integration** - Cloud platform integration
- [ ] **Kubernetes integration** - Kubernetes-native features

## 📋 **IMPLEMENTATION DETAILS**

### **Technical Architecture**

#### **Current Implementation**
- **ComposeFS**: Shell + SquashFS + overlayfs
- **BootC**: Container → ComposeFS → OSTree
- **Bootupd**: Multi-bootloader management
- **OCI Integration**: Container runtime wrapper

#### **Target Implementation**
- **ComposeFS**: C + EROFS + fs-verity (official tools)
- **BootC**: Container → OSTree (official BootC)
- **Bootupd**: UEFI + systemd-boot (simplified)
- **OCI Integration**: skopeo + containers/storage

### **Integration Examples**

#### **EROFS Integration**
```bash
# Check for EROFS availability and use it
if command -v mkfs.erofs >/dev/null 2>&1; then
    echo "Using EROFS for metadata tree"
    mkfs.erofs -zlz4 "$metadata_tree" "$source_dir"
    mount -t erofs "$metadata_tree" "$mount_point"
else
    echo "Falling back to SquashFS"
    mksquashfs "$source_dir" "$squashfs_file" -comp lz4
    mount -t squashfs "$squashfs_file" "$mount_point"
fi
```

#### **skopeo Integration**
```bash
# Add skopeo for secure image handling
if command -v skopeo >/dev/null 2>&1; then
    # Use skopeo for image inspection and verification
    skopeo inspect "docker://$image"
    skopeo copy "docker://$image" "oci:$local_path"
else
    # Fall back to container runtime
    podman pull "$image"
fi
```

#### **Overlayroot Integration**
```bash
# Use overlayroot for read-only root filesystem
if command -v overlayroot >/dev/null 2>&1; then
    echo "Using overlayroot for immutable root"
    overlayroot-chroot /bin/bash
else
    echo "Using manual overlayfs setup"
    mount -t overlay overlay -o "lowerdir=/,upperdir=/tmp/upper,workdir=/tmp/work" /mnt/overlay
fi
```

## 🎯 **SUCCESS METRICS**

### **Technical Metrics**
- **Performance**: 50% improvement in image build times
- **Security**: 100% fs-verity coverage for all images
- **Reliability**: 99.9% uptime for production deployments
- **Compatibility**: 100% Ubuntu 22.04+ compatibility
- **Integration**: Seamless integration with official tools

### **User Experience Metrics**
- **Ease of Use**: Simple installation and configuration
- **Documentation**: Comprehensive guides and examples
- **Error Handling**: Clear, actionable error messages
- **Recovery**: Fast rollback and recovery procedures
- **Support**: Active community and documentation

## 🚨 **RISK MITIGATION**

### **Technical Risks**
- **Dependency on external tools**: Implement fallback mechanisms
- **Performance degradation**: Comprehensive benchmarking
- **Security vulnerabilities**: Regular security audits
- **Compatibility issues**: Extensive testing on target systems

### **Project Risks**
- **Scope creep**: Focus on core functionality first
- **Resource constraints**: Prioritize high-impact features
- **Timeline delays**: Agile development with regular milestones
- **Quality issues**: Comprehensive testing and validation

## 📅 **TIMELINE SUMMARY**

| Phase | Duration | Focus | Key Deliverables |
|-------|----------|-------|------------------|
| **Phase 1** | Weeks 1-2 | Testing & Validation | Working system, validated components |
| **Phase 2** | Weeks 3-8 | Production Readiness | EROFS integration, skopeo security, official tools |
| **Phase 3** | Weeks 9-16 | Advanced Features | Testing, monitoring, documentation |
| **Phase 4** | Weeks 17-24 | Ecosystem Integration | Enterprise features, cloud integration |

## 🎯 **CONCLUSION**

Particle-OS has a solid foundation with a well-designed architecture. The main areas for improvement focus on:

- **Production readiness**: Integrating official tools where appropriate
- **Security**: Adding fs-verity and skopeo integration
- **Performance**: Optimizing with parallel processing and caching
- **Ecosystem integration**: Leveraging Ubuntu's native tools

The approach of creating Ubuntu-specific alternatives to Fedora/RHEL tools is valid and fills a real need in the ecosystem. The modular scriptlet architecture is maintainable and the integration between components is logical.

**Next Action**: Begin Phase 1 testing and validation on target VM system.
@@ -1,11 +0,0 @@
#!/bin/bash
# Set current deployment for testing rollback

cd /mnt/c/Users/rob/Documents/Projects/Particle-OS/tools

# Set current deployment to the most recent commit
sudo jq '.current_deployment = "commit-20250714-002436-8745"' /var/lib/particle-os/deployments.json > /tmp/deployments.json
sudo mv /tmp/deployments.json /var/lib/particle-os/deployments.json

echo "Current deployment set to: commit-20250714-002436-8745"
echo "Now you can test rollback with: sudo ./apt-layer.sh ostree rollback"
@@ -7,6 +7,57 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### [2025-07-14 UTC] - SCOPE REDUCTION: FOCUS ON CORE RPM-OSTREE FEATURES ONLY
- **Scope reduction completed**: Archived all advanced, enterprise, cloud, multi-tenant, admin, compliance, and security features.
- **Now focused on core rpm-ostree-like features for apt/Debian systems only:**
  - Atomic deployment, rollback, status, diff, cleanup
  - Live overlay and container-based layering
  - Bootloader and kargs management
  - OCI/ComposeFS integration
  - Direct dpkg install for apt/deb systems
  - All core rpm-ostree-like features for apt/Debian
- **compile.sh updated**: Only includes core scriptlets; all advanced/enterprise scriptlets removed from build.
- **TODO updated**: All advanced/enterprise/cloud/multi-tenant/admin/compliance/security items removed or marked as archived; TODO now only tracks core atomic/OSTree/overlay/bootloader/compatibility features.
- **Documentation updated**: All documentation and script inventory reflect the new, reduced scope.
- **Advanced features archived**: All advanced/enterprise scriptlets are safely archived and can be restored later if needed.
- **Result**: Codebase is now a true rpm-ostree equivalent for apt/Debian systems, with no extra enterprise/cloud/advanced features.

### [2025-01-27 23:58 UTC] - DOCUMENTATION UPDATES AND WORKSPACE CLEANUP COMPLETED
- **Comprehensive documentation updates completed**: Updated all major documentation files to reflect current Particle-OS capabilities and recent improvements.
- **Main documentation files updated**: Updated multiple documentation files to reflect new apt-layer atomic OSTree workflow and official ComposeFS tool integration:
  - `tools.md` - Updated to reflect atomic OSTree workflow, official ComposeFS integration, and overlay/dpkg improvements
  - `TODO.md` - Updated completion status and added new priorities for compilation system enhancements
  - `TESTING_GUIDE.md` - Updated to include OSTree/atomic testing procedures and overlay workflow validation
  - `SCRIPT_INVENTORY.md` - Updated to reflect current script organization and archiving of alternative implementations
  - `Readme.md` - Updated main project README with current capabilities and recent improvements
  - `comparisons.md` - Updated feature comparisons to reflect current Particle-OS capabilities
- **Workspace cleanup completed**: Moved old test and fix scripts to `archive/` directory for better organization:
  - Archived 30+ test scripts, fix scripts, and development utilities
  - Maintained clean workspace with only current, production-ready scripts
  - Preserved historical development artifacts for reference
- **Alternative ComposeFS implementation archived**: Moved `composefs-alternative.sh` and related files to archive:
  - Official ComposeFS package now ready for Debian/Ubuntu sponsorship
  - Alternative implementation preserved for reference and potential future use
  - Updated documentation to reflect official tool integration approach
- **Overlay and dpkg install improvements documented**: Updated documentation to reflect recent workflow improvements:
  - Robust overlay/dpkg install workflow with DNS fixes for WSL environments
  - Support for offline `.deb` package installation via dpkg in overlay
  - Conditional DNS server injection to resolve network connectivity issues
  - Clean overlay commit and rollback procedures
- **OSTree atomic workflow documentation**: Updated all documentation to reflect new atomic package management:
  - `apt-layer ostree compose install/remove/update` commands for atomic, versioned package management
  - `apt-layer ostree log/diff/status/rollback/cleanup` commands for commit history and management
  - Integration with live overlay and dpkg install workflows
  - Rollback functionality with proper deployment management
- **Git ignore updates**: Added `.scratchpad` and `scratchpad/` directories to `.gitignore`:
  - `.scratchpad` already properly ignored
  - `scratchpad/` directory added to ignore list for development artifacts
- **Documentation consistency**: Ensured all documentation files reflect current system state:
  - Consistent terminology and feature descriptions across all files
  - Updated completion status and roadmap information
  - Current capabilities and testing procedures documented
- **Note**: Documentation is now fully up-to-date and workspace is clean and organized. All recent improvements including OSTree atomic workflow, official ComposeFS integration, and overlay/dpkg improvements are properly documented across all project files.

### [2025-01-27 23:55 UTC] - DKMS TESTING INFRASTRUCTURE COMPLETED
- **DKMS testing infrastructure implemented**: Created comprehensive DKMS testing system to validate all DKMS functionality in Particle-OS.
- **Comprehensive test suite created**: Created `test-dkms-functionality.sh` with 12 comprehensive test cases covering all DKMS functionality:
|
|
|
|||
|
|
@ -340,70 +340,28 @@ add_scriptlet "03-traditional.sh" "Traditional Layer Creation"
|
|||
update_progress "Adding: Container-based Layer Creation" 31
|
||||
add_scriptlet "04-container.sh" "Container-based Layer Creation (Apx-style)"
|
||||
|
||||
update_progress "Adding: OCI Integration" 36
|
||||
add_scriptlet "06-oci-integration.sh" "OCI Export/Import Integration"
|
||||
|
||||
update_progress "Adding: Atomic Deployment System" 41
|
||||
add_scriptlet "09-atomic-deployment.sh" "Atomic Deployment System"
|
||||
|
||||
update_progress "Adding: rpm-ostree Compatibility" 46
|
||||
add_scriptlet "10-rpm-ostree-compat.sh" "rpm-ostree Compatibility Layer"
|
||||
|
||||
update_progress "Adding: Live Overlay System" 51
|
||||
update_progress "Adding: Live Overlay System" 36
|
||||
add_scriptlet "05-live-overlay.sh" "Live Overlay System (rpm-ostree style)"
|
||||
|
||||
update_progress "Adding: Bootloader Integration" 56
|
||||
update_progress "Adding: OCI Integration" 41
|
||||
add_scriptlet "06-oci-integration.sh" "OCI Export/Import Integration"
|
||||
|
||||
update_progress "Adding: Bootloader Integration" 46
|
||||
add_scriptlet "07-bootloader.sh" "Bootloader Integration (UEFI/GRUB/systemd-boot)"
|
||||
|
||||
update_progress "Adding: Advanced Package Management" 61
|
||||
add_scriptlet "08-advanced-package-management.sh" "Advanced Package Management (Enterprise Features)"
|
||||
update_progress "Adding: Atomic Deployment System" 51
|
||||
add_scriptlet "09-atomic-deployment.sh" "Atomic Deployment System"
|
||||
|
||||
update_progress "Adding: Layer Signing & Verification" 66
|
||||
add_scriptlet "11-layer-signing.sh" "Layer Signing & Verification (Enterprise Security)"
|
||||
update_progress "Adding: rpm-ostree Compatibility" 56
|
||||
add_scriptlet "10-rpm-ostree-compat.sh" "rpm-ostree Compatibility Layer"
|
||||
|
||||
update_progress "Adding: Centralized Audit & Reporting" 71
|
||||
add_scriptlet "12-audit-reporting.sh" "Centralized Audit & Reporting (Enterprise Compliance)"
|
||||
|
||||
update_progress "Adding: Automated Security Scanning" 76
|
||||
add_scriptlet "13-security-scanning.sh" "Automated Security Scanning (Enterprise Security)"
|
||||
|
||||
update_progress "Adding: Admin Utilities" 81
|
||||
add_scriptlet "14-admin-utilities.sh" "Admin Utilities (Health Monitoring, Analytics, Maintenance)"
|
||||
|
||||
update_progress "Adding: Multi-Tenant Support" 86
|
||||
add_scriptlet "15-multi-tenant.sh" "Multi-Tenant Support (Enterprise Features)"
|
||||
|
||||
update_progress "Adding: OSTree Atomic Package Management" 87
|
||||
update_progress "Adding: OSTree Atomic Package Management" 61
|
||||
add_scriptlet "15-ostree-atomic.sh" "OSTree Atomic Package Management"
|
||||
|
||||
update_progress "Adding: Advanced Compliance Frameworks" 88
|
||||
add_scriptlet "16-compliance-frameworks.sh" "Advanced Compliance Frameworks (Enterprise Features)"
|
||||
|
||||
update_progress "Adding: Enterprise Integration" 89
|
||||
add_scriptlet "17-enterprise-integration.sh" "Enterprise Integration (Enterprise Features)"
|
||||
|
||||
update_progress "Adding: Advanced Monitoring & Alerting" 90
|
||||
add_scriptlet "18-monitoring-alerting.sh" "Advanced Monitoring & Alerting (Enterprise Features)"
|
||||
|
||||
update_progress "Adding: Cloud Integration" 91
|
||||
add_scriptlet "19-cloud-integration.sh" "Cloud Integration (AWS, Azure, GCP)"
|
||||
|
||||
update_progress "Adding: Kubernetes Integration" 92
|
||||
add_scriptlet "20-kubernetes-integration.sh" "Kubernetes Integration (EKS, AKS, GKE, OpenShift)"
|
||||
|
||||
update_progress "Adding: Container Orchestration" 93
|
||||
add_scriptlet "21-container-orchestration.sh" "Container Orchestration (Multi-cluster, Service Mesh, GitOps)"
|
||||
|
||||
update_progress "Adding: Multi-Cloud Deployment" 94
|
||||
add_scriptlet "22-multicloud-deployment.sh" "Multi-Cloud Deployment (AWS, Azure, GCP, Migration, Policies)"
|
||||
|
||||
update_progress "Adding: Cloud-Native Security" 95
|
||||
add_scriptlet "23-cloud-security.sh" "Cloud-Native Security (Workload Scanning, Policy Enforcement, Compliance)"
|
||||
|
||||
update_progress "Adding: Direct dpkg Installation" 96
|
||||
update_progress "Adding: Direct dpkg Installation" 66
|
||||
add_scriptlet "24-dpkg-direct-install.sh" "Direct dpkg Installation (Performance Optimization)"
|
||||
|
||||
update_progress "Adding: Main Dispatch" 97
|
||||
update_progress "Adding: Main Dispatch" 71
|
||||
add_scriptlet "99-main.sh" "Main Dispatch and Help" "true"
|
||||
|
||||
# Add embedded configuration files if they exist
|
||||
|
|
@ -533,28 +491,22 @@ print_status "Lines of code: $(wc -l < "$OUTPUT_FILE")"
|
|||
|
||||
print_status ""
|
||||
print_status "The compiled apt-layer.sh is now self-contained and includes:"
|
||||
print_status "<22> Particle-OS configuration integration"
|
||||
print_status "<22> Transactional operations with automatic rollback"
|
||||
print_status "<22> Traditional chroot-based layer creation"
|
||||
print_status "<22> Container-based layer creation (Apx-style)"
|
||||
print_status "<22> OCI export/import integration"
|
||||
print_status "<22> Live overlay system (rpm-ostree style)"
|
||||
print_status "<22> Bootloader integration (UEFI/GRUB/systemd-boot)"
|
||||
print_status "<22> Advanced package management (Enterprise features)"
|
||||
print_status "<22> Layer signing & verification (Enterprise security)"
|
||||
print_status "<22> Centralized audit & reporting (Enterprise compliance)"
|
||||
print_status "<22> Automated security scanning (Enterprise security)"
|
||||
print_status "<22> Admin utilities (Health monitoring, performance analytics, maintenance)"
|
||||
print_status "<22> Multi-tenant support (Enterprise features)"
|
||||
print_status "<22> Atomic deployment system with rollback"
|
||||
print_status "<22> rpm-ostree compatibility layer (1:1 command mapping)"
|
||||
print_status "<22> ComposeFS backend integration"
|
||||
print_status "<22> Dependency validation and error handling"
|
||||
print_status "<22> Comprehensive JSON configuration system"
|
||||
print_status "<22> Direct dpkg installation (Performance optimization)"
|
||||
print_status "<22> All dependencies merged into a single file"
|
||||
print_status "- Particle-OS configuration integration"
|
||||
print_status "- Transactional operations with automatic rollback"
|
||||
print_status "- Traditional chroot-based layer creation"
|
||||
print_status "- Container-based layer creation (Apx-style)"
|
||||
print_status "- OCI export/import integration"
|
||||
print_status "- Live overlay system (rpm-ostree style)"
|
||||
print_status "- Bootloader integration (UEFI/GRUB/systemd-boot)"
|
||||
print_status "- Atomic deployment system with rollback"
|
||||
print_status "- rpm-ostree compatibility layer (1:1 command mapping)"
|
||||
print_status "- ComposeFS backend integration"
|
||||
print_status "- Dependency validation and error handling"
|
||||
print_status "- Comprehensive JSON configuration system"
|
||||
print_status "- Direct dpkg installation (Performance optimization)"
|
||||
print_status "- All dependencies merged into a single file"
|
||||
print_status ""
|
||||
print_status "<EFBFBD> Particle-OS apt-layer compilation complete with all features!"
|
||||
print_status "Particle-OS apt-layer compilation complete with all core rpm-ostree-like features!"
|
||||
|
||||
print_status ""
|
||||
print_status "Usage:"
|
||||
|
|
|
|||
File diff suppressed because it is too large
Load diff
|
|
@ -1,847 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Ubuntu uBlue apt-layer Layer Signing & Verification
|
||||
# Provides enterprise-grade layer signing and verification for immutable deployments
|
||||
# Supports Sigstore (cosign) for modern OCI-compatible signing and GPG for traditional workflows
|
||||
|
||||
# =============================================================================
|
||||
# LAYER SIGNING & VERIFICATION FUNCTIONS
|
||||
# =============================================================================
|
||||
|
||||
# Layer signing configuration (with fallbacks for when particle-config.sh is not loaded)
|
||||
LAYER_SIGNING_CONFIG_DIR="${UBLUE_CONFIG_DIR:-/etc/ubuntu-ublue}/layer-signing"
|
||||
LAYER_SIGNING_STATE_DIR="${UBLUE_ROOT:-/var/lib/particle-os}/layer-signing"
|
||||
LAYER_SIGNING_KEYS_DIR="$LAYER_SIGNING_STATE_DIR/keys"
|
||||
LAYER_SIGNING_SIGNATURES_DIR="$LAYER_SIGNING_STATE_DIR/signatures"
|
||||
LAYER_SIGNING_VERIFICATION_DIR="$LAYER_SIGNING_STATE_DIR/verification"
|
||||
LAYER_SIGNING_REVOCATION_DIR="$LAYER_SIGNING_STATE_DIR/revocation"
|
||||
|
||||
# Signing configuration
|
||||
LAYER_SIGNING_ENABLED="${LAYER_SIGNING_ENABLED:-true}"
|
||||
LAYER_SIGNING_METHOD="${LAYER_SIGNING_METHOD:-sigstore}" # sigstore, gpg, both
|
||||
LAYER_SIGNING_VERIFY_ON_IMPORT="${LAYER_SIGNING_VERIFY_ON_IMPORT:-true}"
|
||||
LAYER_SIGNING_VERIFY_ON_MOUNT="${LAYER_SIGNING_VERIFY_ON_MOUNT:-true}"
|
||||
LAYER_SIGNING_VERIFY_ON_ACTIVATE="${LAYER_SIGNING_VERIFY_ON_ACTIVATE:-true}"
|
||||
LAYER_SIGNING_FAIL_ON_VERIFY="${LAYER_SIGNING_FAIL_ON_VERIFY:-true}"
|
||||
|
||||
# Initialize layer signing system
|
||||
init_layer_signing() {
|
||||
log_info "Initializing layer signing and verification system" "apt-layer"
|
||||
|
||||
# Create layer signing directories
|
||||
mkdir -p "$LAYER_SIGNING_CONFIG_DIR" "$LAYER_SIGNING_STATE_DIR" "$LAYER_SIGNING_KEYS_DIR"
|
||||
mkdir -p "$LAYER_SIGNING_SIGNATURES_DIR" "$LAYER_SIGNING_VERIFICATION_DIR" "$LAYER_SIGNING_REVOCATION_DIR"
|
||||
|
||||
# Set proper permissions
|
||||
chmod 755 "$LAYER_SIGNING_CONFIG_DIR" "$LAYER_SIGNING_STATE_DIR"
|
||||
chmod 700 "$LAYER_SIGNING_KEYS_DIR" "$LAYER_SIGNING_SIGNATURES_DIR"
|
||||
chmod 750 "$LAYER_SIGNING_VERIFICATION_DIR" "$LAYER_SIGNING_REVOCATION_DIR"
|
||||
|
||||
# Initialize signing configuration
|
||||
init_signing_config
|
||||
|
||||
# Initialize key management
|
||||
init_key_management
|
||||
|
||||
# Initialize revocation system
|
||||
init_revocation_system
|
||||
|
||||
# Check signing tools availability
|
||||
check_signing_tools
|
||||
|
||||
log_success "Layer signing and verification system initialized" "apt-layer"
|
||||
}
|
||||
|
||||
# Initialize signing configuration
|
||||
init_signing_config() {
|
||||
local config_file="$LAYER_SIGNING_CONFIG_DIR/signing-config.json"
|
||||
|
||||
if [[ ! -f "$config_file" ]]; then
|
||||
cat > "$config_file" << EOF
|
||||
{
|
||||
"signing": {
|
||||
"enabled": true,
|
||||
"method": "sigstore",
|
||||
"verify_on_import": true,
|
||||
"verify_on_mount": true,
|
||||
"verify_on_activate": true,
|
||||
"fail_on_verify": true
|
||||
},
|
||||
"sigstore": {
|
||||
"enabled": true,
|
||||
"keyless": false,
|
||||
"fulcio_url": "https://fulcio.sigstore.dev",
|
||||
"rekor_url": "https://rekor.sigstore.dev",
|
||||
"tuf_url": "https://tuf.sigstore.dev"
|
||||
},
|
||||
"gpg": {
|
||||
"enabled": true,
|
||||
"keyring": "/etc/apt/trusted.gpg",
|
||||
"signing_key": "",
|
||||
"verification_keys": []
|
||||
},
|
||||
"key_management": {
|
||||
"local_keys": true,
|
||||
"hsm_support": false,
|
||||
"remote_key_service": false,
|
||||
"key_rotation_days": 365
|
||||
},
|
||||
"revocation": {
|
||||
"enabled": true,
|
||||
"check_revocation": true,
|
||||
"revocation_list_url": "",
|
||||
"local_revocation_list": true
|
||||
}
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$config_file"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize key management
|
||||
init_key_management() {
|
||||
local key_db="$LAYER_SIGNING_KEYS_DIR/keys.json"
|
||||
|
||||
if [[ ! -f "$key_db" ]]; then
|
||||
cat > "$key_db" << EOF
|
||||
{
|
||||
"keys": {},
|
||||
"key_pairs": {},
|
||||
"public_keys": {},
|
||||
"key_metadata": {},
|
||||
"last_updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$key_db"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize revocation system
|
||||
init_revocation_system() {
|
||||
local revocation_list="$LAYER_SIGNING_REVOCATION_DIR/revocation-list.json"
|
||||
|
||||
if [[ ! -f "$revocation_list" ]]; then
|
||||
cat > "$revocation_list" << EOF
|
||||
{
|
||||
"revoked_keys": {},
|
||||
"revoked_signatures": {},
|
||||
"revoked_layers": {},
|
||||
"revocation_reasons": {},
|
||||
"last_updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$revocation_list"
|
||||
fi
|
||||
}
|
||||
|
||||
# Check signing tools availability
|
||||
check_signing_tools() {
|
||||
log_info "Checking signing tools availability" "apt-layer"
|
||||
|
||||
local tools_available=true
|
||||
|
||||
# Check for cosign (Sigstore)
|
||||
if ! command -v cosign &>/dev/null; then
|
||||
log_warning "cosign (Sigstore) not found - Sigstore signing will be disabled" "apt-layer"
|
||||
LAYER_SIGNING_METHOD="gpg"
|
||||
else
|
||||
log_info "cosign (Sigstore) found: $(cosign version 2>/dev/null | head -1 || echo 'version unknown')" "apt-layer"
|
||||
fi
|
||||
|
||||
# Check for GPG
|
||||
if ! command -v gpg &>/dev/null; then
|
||||
log_warning "GPG not found - GPG signing will be disabled" "apt-layer"
|
||||
if [[ "$LAYER_SIGNING_METHOD" == "gpg" ]]; then
|
||||
LAYER_SIGNING_METHOD="sigstore"
|
||||
fi
|
||||
else
|
||||
log_info "GPG found: $(gpg --version | head -1)" "apt-layer"
|
||||
fi
|
||||
|
||||
# Check if any signing method is available
|
||||
if [[ "$LAYER_SIGNING_METHOD" == "both" ]] && ! command -v cosign &>/dev/null && ! command -v gpg &>/dev/null; then
|
||||
log_error "No signing tools available - layer signing will be disabled" "apt-layer"
|
||||
LAYER_SIGNING_ENABLED=false
|
||||
return 1
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Generate signing key pair
|
||||
generate_signing_key_pair() {
|
||||
local key_name="$1"
|
||||
local key_type="${2:-sigstore}"
|
||||
|
||||
if [[ -z "$key_name" ]]; then
|
||||
log_error "Key name required for key pair generation" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Generating signing key pair: $key_name (type: $key_type)" "apt-layer"
|
||||
|
||||
case "$key_type" in
|
||||
"sigstore")
|
||||
generate_sigstore_key_pair "$key_name"
|
||||
;;
|
||||
"gpg")
|
||||
generate_gpg_key_pair "$key_name"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported key type: $key_type" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Generate Sigstore key pair
|
||||
generate_sigstore_key_pair() {
|
||||
local key_name="$1"
|
||||
local key_dir="$LAYER_SIGNING_KEYS_DIR/sigstore/$key_name"
|
||||
|
||||
mkdir -p "$key_dir"
|
||||
|
||||
log_info "Generating Sigstore key pair for: $key_name" "apt-layer"
|
||||
|
||||
# Generate cosign key pair
|
||||
if cosign generate-key-pair --output-key-prefix "$key_dir/key" 2>/dev/null; then
|
||||
# Store key metadata
|
||||
local key_db="$LAYER_SIGNING_KEYS_DIR/keys.json"
|
||||
local key_id
|
||||
key_id=$(cosign public-key --key "$key_dir/key.key" 2>/dev/null | sha256sum | cut -d' ' -f1 || echo "unknown")
|
||||
|
||||
jq --arg name "$key_name" \
|
||||
--arg type "sigstore" \
|
||||
--arg public_key "$key_dir/key.pub" \
|
||||
--arg private_key "$key_dir/key.key" \
|
||||
--arg key_id "$key_id" \
|
||||
--arg created "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
|
||||
'.key_pairs[$name] = {
|
||||
"type": $type,
|
||||
"public_key": $public_key,
|
||||
"private_key": $private_key,
|
||||
"key_id": $key_id,
|
||||
"created": $created,
|
||||
"status": "active"
|
||||
}' "$key_db" > "$key_db.tmp" && mv "$key_db.tmp" "$key_db"
|
||||
|
||||
chmod 600 "$key_dir/key.key"
|
||||
chmod 644 "$key_dir/key.pub"
|
||||
|
||||
log_success "Sigstore key pair generated: $key_name" "apt-layer"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to generate Sigstore key pair: $key_name" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Generate GPG key pair
|
||||
generate_gpg_key_pair() {
|
||||
local key_name="$1"
|
||||
local key_dir="$LAYER_SIGNING_KEYS_DIR/gpg/$key_name"
|
||||
|
||||
mkdir -p "$key_dir"
|
||||
|
||||
log_info "Generating GPG key pair for: $key_name" "apt-layer"
|
||||
|
||||
# Create GPG key configuration
|
||||
cat > "$key_dir/key-config" << EOF
|
||||
Key-Type: RSA
|
||||
Key-Length: 4096
|
||||
Name-Real: apt-layer signing key
|
||||
Name-Email: apt-layer@$(hostname)
|
||||
Name-Comment: $key_name
|
||||
Expire-Date: 2y
|
||||
%commit
|
||||
EOF
|
||||
|
||||
# Generate GPG key
|
||||
if gpg --batch --gen-key "$key_dir/key-config" 2>/dev/null; then
|
||||
# Export public key
|
||||
gpg --armor --export apt-layer@$(hostname) > "$key_dir/public.key" 2>/dev/null
|
||||
|
||||
# Get key fingerprint
|
||||
local key_fingerprint
|
||||
key_fingerprint=$(gpg --fingerprint apt-layer@$(hostname) 2>/dev/null | grep "Key fingerprint" | sed 's/.*= //' | tr -d ' ')
|
||||
|
||||
# Store key metadata
|
||||
local key_db="$LAYER_SIGNING_KEYS_DIR/keys.json"
|
||||
|
||||
jq --arg name "$key_name" \
|
||||
--arg type "gpg" \
|
||||
--arg public_key "$key_dir/public.key" \
|
||||
--arg key_id "$key_fingerprint" \
|
||||
--arg email "apt-layer@$(hostname)" \
|
||||
--arg created "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
|
||||
'.key_pairs[$name] = {
|
||||
"type": $type,
|
||||
"public_key": $public_key,
|
||||
"key_id": $key_id,
|
||||
"email": $email,
|
||||
"created": $created,
|
||||
"status": "active"
|
||||
}' "$key_db" > "$key_db.tmp" && mv "$key_db.tmp" "$key_db"
|
||||
|
||||
chmod 600 "$key_dir/key-config"
|
||||
chmod 644 "$key_dir/public.key"
|
||||
|
||||
log_success "GPG key pair generated: $key_name" "apt-layer"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to generate GPG key pair: $key_name" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Sign layer with specified method
|
||||
sign_layer() {
|
||||
local layer_path="$1"
|
||||
local key_name="$2"
|
||||
local signing_method="${3:-$LAYER_SIGNING_METHOD}"
|
||||
|
||||
if [[ -z "$layer_path" ]] || [[ -z "$key_name" ]]; then
|
||||
log_error "Layer path and key name required for signing" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$layer_path" ]]; then
|
||||
log_error "Layer file not found: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Signing layer: $layer_path with key: $key_name (method: $signing_method)" "apt-layer"
|
||||
|
||||
case "$signing_method" in
|
||||
"sigstore")
|
||||
sign_layer_sigstore "$layer_path" "$key_name"
|
||||
;;
|
||||
"gpg")
|
||||
sign_layer_gpg "$layer_path" "$key_name"
|
||||
;;
|
||||
"both")
|
||||
sign_layer_sigstore "$layer_path" "$key_name" && \
|
||||
sign_layer_gpg "$layer_path" "$key_name"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported signing method: $signing_method" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Sign layer with Sigstore
|
||||
sign_layer_sigstore() {
|
||||
local layer_path="$1"
|
||||
local key_name="$2"
|
||||
local key_dir="$LAYER_SIGNING_KEYS_DIR/sigstore/$key_name"
|
||||
local signature_path="$layer_path.sig"
|
||||
|
||||
if [[ ! -f "$key_dir/key.key" ]]; then
|
||||
log_error "Sigstore private key not found: $key_dir/key.key" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Signing layer with Sigstore: $layer_path" "apt-layer"
|
||||
|
||||
# Sign the layer
|
||||
if cosign sign-blob --key "$key_dir/key.key" --output-signature "$signature_path" "$layer_path" 2>/dev/null; then
|
||||
# Store signature metadata
|
||||
local signature_db="$LAYER_SIGNING_SIGNATURES_DIR/signatures.json"
|
||||
|
||||
if [[ ! -f "$signature_db" ]]; then
|
||||
cat > "$signature_db" << EOF
|
||||
{
|
||||
"signatures": {},
|
||||
"last_updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
|
||||
local layer_hash
|
||||
layer_hash=$(sha256sum "$layer_path" | cut -d' ' -f1)
|
||||
|
||||
jq --arg layer "$layer_path" \
|
||||
--arg signature "$signature_path" \
|
||||
--arg method "sigstore" \
|
||||
--arg key_name "$key_name" \
|
||||
--arg layer_hash "$layer_hash" \
|
||||
--arg signed_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
|
||||
'.signatures[$layer] = {
|
||||
"signature_file": $signature,
|
||||
"method": $method,
|
||||
"key_name": $key_name,
|
||||
"layer_hash": $layer_hash,
|
||||
"signed_at": $signed_at,
|
||||
"status": "valid"
|
||||
}' "$signature_db" > "$signature_db.tmp" && mv "$signature_db.tmp" "$signature_db"
|
||||
|
||||
log_success "Layer signed with Sigstore: $layer_path" "apt-layer"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to sign layer with Sigstore: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Sign layer with GPG
|
||||
sign_layer_gpg() {
|
||||
local layer_path="$1"
|
||||
local key_name="$2"
|
||||
# Use .asc for the armored GPG signature so the "both" method does not overwrite the Sigstore .sig file
local signature_path="$layer_path.asc"
|
||||
|
||||
log_info "Signing layer with GPG: $layer_path" "apt-layer"
|
||||
|
||||
# Sign the layer
|
||||
if gpg --detach-sign --armor --output "$signature_path" "$layer_path" 2>/dev/null; then
|
||||
# Store signature metadata
|
||||
local signature_db="$LAYER_SIGNING_SIGNATURES_DIR/signatures.json"
|
||||
|
||||
if [[ ! -f "$signature_db" ]]; then
|
||||
cat > "$signature_db" << EOF
|
||||
{
|
||||
"signatures": {},
|
||||
"last_updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
|
||||
local layer_hash
|
||||
layer_hash=$(sha256sum "$layer_path" | cut -d' ' -f1)
|
||||
|
||||
jq --arg layer "$layer_path" \
|
||||
--arg signature "$signature_path" \
|
||||
--arg method "gpg" \
|
||||
--arg key_name "$key_name" \
|
||||
--arg layer_hash "$layer_hash" \
|
||||
--arg signed_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
|
||||
'.signatures[$layer] = {
|
||||
"signature_file": $signature,
|
||||
"method": $method,
|
||||
"key_name": $key_name,
|
||||
"layer_hash": $layer_hash,
|
||||
"signed_at": $signed_at,
|
||||
"status": "valid"
|
||||
}' "$signature_db" > "$signature_db.tmp" && mv "$signature_db.tmp" "$signature_db"
|
||||
|
||||
log_success "Layer signed with GPG: $layer_path" "apt-layer"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to sign layer with GPG: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Verify layer signature
|
||||
verify_layer_signature() {
|
||||
local layer_path="$1"
|
||||
local signature_path="$2"
|
||||
local verification_method="${3:-auto}"
|
||||
|
||||
if [[ -z "$layer_path" ]] || [[ -z "$signature_path" ]]; then
|
||||
log_error "Layer path and signature path required for verification" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$layer_path" ]]; then
|
||||
log_error "Layer file not found: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$signature_path" ]]; then
|
||||
log_error "Signature file not found: $signature_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Verifying layer signature: $layer_path" "apt-layer"
|
||||
|
||||
# Auto-detect verification method
|
||||
if [[ "$verification_method" == "auto" ]]; then
|
||||
if [[ "$signature_path" == *.sig ]] && head -1 "$signature_path" | grep -q "-----BEGIN PGP SIGNATURE-----"; then
|
||||
verification_method="gpg"
|
||||
else
|
||||
verification_method="sigstore"
|
||||
fi
|
||||
fi
|
||||
|
||||
case "$verification_method" in
|
||||
"sigstore")
|
||||
verify_layer_sigstore "$layer_path" "$signature_path"
|
||||
;;
|
||||
"gpg")
|
||||
verify_layer_gpg "$layer_path" "$signature_path"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported verification method: $verification_method" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
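
# Example (illustrative): explicit and auto-detected verification. Paths are hypothetical;
# with the default "auto" method the signature type is inferred from the file contents.
#   verify_layer_signature "/var/lib/particle-os/layers/web-tools.layer" "/var/lib/particle-os/layers/web-tools.layer.sig"
#   verify_layer_signature "/var/lib/particle-os/layers/web-tools.layer" "/var/lib/particle-os/layers/web-tools.layer.asc" "gpg"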
|
||||
|
||||
# Verify layer with Sigstore
|
||||
verify_layer_sigstore() {
|
||||
local layer_path="$1"
|
||||
local signature_path="$2"
|
||||
local key_dir="$LAYER_SIGNING_KEYS_DIR/sigstore"
|
||||
|
||||
log_info "Verifying layer with Sigstore: $layer_path" "apt-layer"
|
||||
|
||||
# Find the public key
|
||||
local public_key=""
|
||||
for key_name in "$key_dir"/*/key.pub; do
|
||||
if [[ -f "$key_name" ]]; then
|
||||
public_key="$key_name"
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
if [[ -z "$public_key" ]]; then
|
||||
log_error "No Sigstore public key found for verification" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Verify the signature
|
||||
if cosign verify-blob --key "$public_key" --signature "$signature_path" "$layer_path" 2>/dev/null; then
|
||||
log_success "Layer signature verified with Sigstore: $layer_path" "apt-layer"
|
||||
return 0
|
||||
else
|
||||
log_error "Layer signature verification failed with Sigstore: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Verify layer with GPG
|
||||
verify_layer_gpg() {
|
||||
local layer_path="$1"
|
||||
local signature_path="$2"
|
||||
|
||||
log_info "Verifying layer with GPG: $layer_path" "apt-layer"
|
||||
|
||||
# Verify the signature
|
||||
if gpg --verify "$signature_path" "$layer_path" 2>/dev/null; then
|
||||
log_success "Layer signature verified with GPG: $layer_path" "apt-layer"
|
||||
return 0
|
||||
else
|
||||
log_error "Layer signature verification failed with GPG: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Check if layer is revoked
|
||||
check_layer_revocation() {
|
||||
local layer_path="$1"
|
||||
|
||||
if [[ -z "$layer_path" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
local revocation_list="$LAYER_SIGNING_REVOCATION_DIR/revocation-list.json"
|
||||
|
||||
if [[ ! -f "$revocation_list" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
local layer_hash
|
||||
layer_hash=$(sha256sum "$layer_path" 2>/dev/null | cut -d' ' -f1 || echo "")
|
||||
|
||||
if [[ -n "$layer_hash" ]]; then
|
||||
if jq -e ".revoked_layers[\"$layer_hash\"]" "$revocation_list" >/dev/null 2>&1; then
|
||||
log_warning "Layer is revoked: $layer_path" "apt-layer"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
# Revoke layer
|
||||
revoke_layer() {
|
||||
local layer_path="$1"
|
||||
local reason="${2:-Manual revocation}"
|
||||
|
||||
if [[ -z "$layer_path" ]]; then
|
||||
log_error "Layer path required for revocation" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$layer_path" ]]; then
|
||||
log_error "Layer file not found: $layer_path" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Revoking layer: $layer_path" "apt-layer"
|
||||
|
||||
local revocation_list="$LAYER_SIGNING_REVOCATION_DIR/revocation-list.json"
|
||||
local layer_hash
|
||||
layer_hash=$(sha256sum "$layer_path" | cut -d' ' -f1)
|
||||
|
||||
jq --arg layer_hash "$layer_hash" \
|
||||
--arg reason "$reason" \
|
||||
--arg revoked_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
|
||||
'.revoked_layers[$layer_hash] = {
|
||||
"reason": $reason,
|
||||
"revoked_at": $revoked_at,
|
||||
"revoked_by": "'$(whoami)'"
|
||||
}' "$revocation_list" > "$revocation_list.tmp" && mv "$revocation_list.tmp" "$revocation_list"
|
||||
|
||||
log_success "Layer revoked: $layer_path" "apt-layer"
|
||||
return 0
|
||||
}
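
# Example (sketch): revoking a layer and the entry it produces. Hash and timestamp
# below are placeholders, not real data.
#   revoke_layer "/var/lib/particle-os/layers/web-tools.layer" "Key compromise"
# adds roughly this to revocation-list.json:
#   "revoked_layers": { "<sha256-of-layer>": { "reason": "Key compromise",
#                       "revoked_at": "<UTC timestamp>", "revoked_by": "root" } }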
|
||||
|
||||
# List signing keys
|
||||
list_signing_keys() {
|
||||
log_info "Listing signing keys" "apt-layer"
|
||||
|
||||
local key_db="$LAYER_SIGNING_KEYS_DIR/keys.json"
|
||||
|
||||
if [[ ! -f "$key_db" ]]; then
|
||||
log_error "Key database not found" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "=== Signing Keys ==="
|
||||
|
||||
local keys
|
||||
keys=$(jq -r '.key_pairs | to_entries[] | "\(.key): \(.value.type) - \(.value.key_id) (\(.value.status))"' "$key_db" 2>/dev/null || echo "")
|
||||
|
||||
if [[ -n "$keys" ]]; then
|
||||
echo "$keys" | while read -r key_info; do
|
||||
echo " $key_info"
|
||||
done
|
||||
else
|
||||
log_info "No signing keys found" "apt-layer"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# List layer signatures
|
||||
list_layer_signatures() {
|
||||
log_info "Listing layer signatures" "apt-layer"
|
||||
|
||||
local signature_db="$LAYER_SIGNING_SIGNATURES_DIR/signatures.json"
|
||||
|
||||
if [[ ! -f "$signature_db" ]]; then
|
||||
log_error "Signature database not found" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "=== Layer Signatures ==="
|
||||
|
||||
local signatures
|
||||
signatures=$(jq -r '.signatures | to_entries[] | "\(.key): \(.value.method) - \(.value.key_name) (\(.value.status))"' "$signature_db" 2>/dev/null || echo "")
|
||||
|
||||
if [[ -n "$signatures" ]]; then
|
||||
echo "$signatures" | while read -r sig_info; do
|
||||
echo " $sig_info"
|
||||
done
|
||||
else
|
||||
log_info "No layer signatures found" "apt-layer"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Get layer signing status
|
||||
get_layer_signing_status() {
|
||||
local layer_path="$1"
|
||||
|
||||
if [[ -z "$layer_path" ]]; then
|
||||
log_error "Layer path required for status check" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Getting signing status for layer: $layer_path" "apt-layer"
|
||||
|
||||
echo "=== Layer Signing Status: $layer_path ==="
|
||||
|
||||
# Check if layer exists
|
||||
if [[ ! -f "$layer_path" ]]; then
|
||||
echo " â Layer file not found"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo " â Layer file exists"
|
||||
|
||||
# Check for signatures
|
||||
local signature_db="$LAYER_SIGNING_SIGNATURES_DIR/signatures.json"
|
||||
if [[ -f "$signature_db" ]]; then
|
||||
local signature_info
|
||||
signature_info=$(jq -r ".signatures[\"$layer_path\"] // empty" "$signature_db" 2>/dev/null)
|
||||
|
||||
if [[ -n "$signature_info" ]]; then
|
||||
local method
|
||||
method=$(echo "$signature_info" | jq -r '.method // "unknown"')
|
||||
local key_name
|
||||
key_name=$(echo "$signature_info" | jq -r '.key_name // "unknown"')
|
||||
local status
|
||||
status=$(echo "$signature_info" | jq -r '.status // "unknown"')
|
||||
local signed_at
|
||||
signed_at=$(echo "$signature_info" | jq -r '.signed_at // "unknown"')
|
||||
|
||||
echo " â Signed with $method using key: $key_name"
|
||||
echo " â Signature status: $status"
|
||||
echo " â Signed at: $signed_at"
|
||||
else
|
||||
echo " â No signature found"
|
||||
fi
|
||||
else
|
||||
echo " â Signature database not found"
|
||||
fi
|
||||
|
||||
# Check for revocation
|
||||
if check_layer_revocation "$layer_path"; then
|
||||
echo " â Layer is revoked"
|
||||
else
|
||||
echo " â Layer is not revoked"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# =============================================================================
|
||||
# INTEGRATION FUNCTIONS
|
||||
# =============================================================================
|
||||
|
||||
# Initialize layer signing on script startup
|
||||
init_layer_signing_on_startup() {
|
||||
# Only initialize if not already done and signing is enabled
|
||||
if [[ "$LAYER_SIGNING_ENABLED" == "true" ]] && [[ ! -d "$LAYER_SIGNING_STATE_DIR" ]]; then
|
||||
init_layer_signing
|
||||
fi
|
||||
}
|
||||
|
||||
# Verify layer before import
|
||||
verify_layer_before_import() {
|
||||
local layer_path="$1"
|
||||
|
||||
if [[ "$LAYER_SIGNING_VERIFY_ON_IMPORT" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [[ -z "$layer_path" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Verifying layer before import: $layer_path" "apt-layer"
|
||||
|
||||
# Check for revocation first
|
||||
if check_layer_revocation "$layer_path"; then
|
||||
if [[ "$LAYER_SIGNING_FAIL_ON_VERIFY" == "true" ]]; then
|
||||
log_error "Layer is revoked, import blocked: $layer_path" "apt-layer"
|
||||
return 1
|
||||
else
|
||||
log_warning "Layer is revoked but import allowed: $layer_path" "apt-layer"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check for signature
|
||||
local signature_path="$layer_path.sig"
[[ -f "$signature_path" ]] || signature_path="$layer_path.asc"
|
||||
if [[ -f "$signature_path" ]]; then
|
||||
if ! verify_layer_signature "$layer_path" "$signature_path"; then
|
||||
if [[ "$LAYER_SIGNING_FAIL_ON_VERIFY" == "true" ]]; then
|
||||
log_error "Layer signature verification failed, import blocked: $layer_path" "apt-layer"
|
||||
return 1
|
||||
else
|
||||
log_warning "Layer signature verification failed but import allowed: $layer_path" "apt-layer"
|
||||
fi
|
||||
fi
|
||||
else
|
||||
log_warning "No signature found for layer: $layer_path" "apt-layer"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Verify layer before mount
|
||||
verify_layer_before_mount() {
|
||||
local layer_path="$1"
|
||||
|
||||
if [[ "$LAYER_SIGNING_VERIFY_ON_MOUNT" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [[ -z "$layer_path" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Verifying layer before mount: $layer_path" "apt-layer"
|
||||
|
||||
# Check for revocation
|
||||
if check_layer_revocation "$layer_path"; then
|
||||
if [[ "$LAYER_SIGNING_FAIL_ON_VERIFY" == "true" ]]; then
|
||||
log_error "Layer is revoked, mount blocked: $layer_path" "apt-layer"
|
||||
return 1
|
||||
else
|
||||
log_warning "Layer is revoked but mount allowed: $layer_path" "apt-layer"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check for signature
|
||||
local signature_path="$layer_path.sig"
[[ -f "$signature_path" ]] || signature_path="$layer_path.asc"
|
||||
if [[ -f "$signature_path" ]]; then
|
||||
if ! verify_layer_signature "$layer_path" "$signature_path"; then
|
||||
if [[ "$LAYER_SIGNING_FAIL_ON_VERIFY" == "true" ]]; then
|
||||
log_error "Layer signature verification failed, mount blocked: $layer_path" "apt-layer"
|
||||
return 1
|
||||
else
|
||||
log_warning "Layer signature verification failed but mount allowed: $layer_path" "apt-layer"
|
||||
fi
|
||||
fi
|
||||
else
|
||||
log_warning "No signature found for layer: $layer_path" "apt-layer"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Verify layer before activation
|
||||
verify_layer_before_activation() {
|
||||
local layer_path="$1"
|
||||
|
||||
if [[ "$LAYER_SIGNING_VERIFY_ON_ACTIVATE" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [[ -z "$layer_path" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Verifying layer before activation: $layer_path" "apt-layer"
|
||||
|
||||
# Check for revocation
|
||||
if check_layer_revocation "$layer_path"; then
|
||||
if [[ "$LAYER_SIGNING_FAIL_ON_VERIFY" == "true" ]]; then
|
||||
log_error "Layer is revoked, activation blocked: $layer_path" "apt-layer"
|
||||
return 1
|
||||
else
|
||||
log_warning "Layer is revoked but activation allowed: $layer_path" "apt-layer"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check for signature
|
||||
local signature_path="$layer_path.sig"
[[ -f "$signature_path" ]] || signature_path="$layer_path.asc"
|
||||
if [[ -f "$signature_path" ]]; then
|
||||
if ! verify_layer_signature "$layer_path" "$signature_path"; then
|
||||
if [[ "$LAYER_SIGNING_FAIL_ON_VERIFY" == "true" ]]; then
|
||||
log_error "Layer signature verification failed, activation blocked: $layer_path" "apt-layer"
|
||||
return 1
|
||||
else
|
||||
log_warning "Layer signature verification failed but activation allowed: $layer_path" "apt-layer"
|
||||
fi
|
||||
fi
|
||||
else
|
||||
log_warning "No signature found for layer: $layer_path" "apt-layer"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Cleanup layer signing on script exit
|
||||
cleanup_layer_signing_on_exit() {
|
||||
# Clean up temporary files
|
||||
rm -f "$LAYER_SIGNING_VERIFICATION_DIR"/temp-* 2>/dev/null || true
|
||||
}
|
||||
|
||||
# Register cleanup function
|
||||
trap cleanup_layer_signing_on_exit EXIT
|
||||
|
|
@@ -1,769 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Ubuntu uBlue apt-layer Centralized Audit & Reporting
|
||||
# Provides enterprise-grade audit logging, reporting, and compliance features
|
||||
# for comprehensive security monitoring and regulatory compliance
|
||||
|
||||
# =============================================================================
|
||||
# AUDIT & REPORTING FUNCTIONS
|
||||
# =============================================================================
|
||||
|
||||
# Audit and reporting configuration (with fallbacks for when particle-config.sh is not loaded)
|
||||
AUDIT_CONFIG_DIR="${UBLUE_CONFIG_DIR:-/etc/ubuntu-ublue}/audit"
|
||||
AUDIT_STATE_DIR="${UBLUE_ROOT:-/var/lib/particle-os}/audit"
|
||||
AUDIT_LOGS_DIR="$AUDIT_STATE_DIR/logs"
|
||||
AUDIT_REPORTS_DIR="$AUDIT_STATE_DIR/reports"
|
||||
AUDIT_EXPORTS_DIR="$AUDIT_STATE_DIR/exports"
|
||||
AUDIT_QUERIES_DIR="$AUDIT_STATE_DIR/queries"
|
||||
AUDIT_COMPLIANCE_DIR="$AUDIT_STATE_DIR/compliance"
|
||||
|
||||
# Audit configuration
|
||||
AUDIT_ENABLED="${AUDIT_ENABLED:-true}"
|
||||
AUDIT_LOG_LEVEL="${AUDIT_LOG_LEVEL:-INFO}"
|
||||
AUDIT_RETENTION_DAYS="${AUDIT_RETENTION_DAYS:-90}"
|
||||
AUDIT_ROTATION_SIZE_MB="${AUDIT_ROTATION_SIZE_MB:-100}"
|
||||
AUDIT_REMOTE_SHIPPING="${AUDIT_REMOTE_SHIPPING:-false}"
|
||||
AUDIT_SYSLOG_ENABLED="${AUDIT_SYSLOG_ENABLED:-false}"
|
||||
AUDIT_HTTP_ENDPOINT="${AUDIT_HTTP_ENDPOINT:-}"
|
||||
AUDIT_HTTP_API_KEY="${AUDIT_HTTP_API_KEY:-}"
|
||||
|
||||
# Initialize audit and reporting system
|
||||
init_audit_reporting() {
|
||||
log_info "Initializing centralized audit and reporting system" "apt-layer"
|
||||
|
||||
# Create audit and reporting directories
|
||||
mkdir -p "$AUDIT_CONFIG_DIR" "$AUDIT_STATE_DIR" "$AUDIT_LOGS_DIR"
|
||||
mkdir -p "$AUDIT_REPORTS_DIR" "$AUDIT_EXPORTS_DIR" "$AUDIT_QUERIES_DIR"
|
||||
mkdir -p "$AUDIT_COMPLIANCE_DIR"
|
||||
|
||||
# Set proper permissions
|
||||
chmod 755 "$AUDIT_CONFIG_DIR" "$AUDIT_STATE_DIR"
|
||||
chmod 750 "$AUDIT_LOGS_DIR" "$AUDIT_REPORTS_DIR" "$AUDIT_EXPORTS_DIR"
|
||||
chmod 700 "$AUDIT_QUERIES_DIR" "$AUDIT_COMPLIANCE_DIR"
|
||||
|
||||
# Initialize audit configuration
|
||||
init_audit_config
|
||||
|
||||
# Initialize audit log rotation
|
||||
init_audit_log_rotation
|
||||
|
||||
# Initialize compliance templates
|
||||
init_compliance_templates
|
||||
|
||||
# Initialize query cache
|
||||
init_query_cache
|
||||
|
||||
log_success "Centralized audit and reporting system initialized" "apt-layer"
|
||||
}
|
||||
|
||||
# Initialize audit configuration
|
||||
init_audit_config() {
|
||||
local config_file="$AUDIT_CONFIG_DIR/audit-config.json"
|
||||
|
||||
if [[ ! -f "$config_file" ]]; then
|
||||
cat > "$config_file" << 'EOF'
|
||||
{
|
||||
"audit": {
|
||||
"enabled": true,
|
||||
"log_level": "INFO",
|
||||
"retention_days": 90,
|
||||
"rotation_size_mb": 100,
|
||||
"compression_enabled": true
|
||||
},
|
||||
"remote_shipping": {
|
||||
"enabled": false,
|
||||
"syslog_enabled": false,
|
||||
"syslog_facility": "local0",
|
||||
"http_endpoint": "",
|
||||
"http_api_key": "",
|
||||
"http_timeout": 30,
|
||||
"retry_attempts": 3
|
||||
},
|
||||
"compliance": {
|
||||
"sox_enabled": false,
|
||||
"pci_dss_enabled": false,
|
||||
"hipaa_enabled": false,
|
||||
"gdpr_enabled": false,
|
||||
"custom_frameworks": []
|
||||
},
|
||||
"reporting": {
|
||||
"auto_generate_reports": false,
|
||||
"report_schedule": "weekly",
|
||||
"export_formats": ["json", "csv", "html"],
|
||||
"include_sensitive_data": false
|
||||
},
|
||||
"alerts": {
|
||||
"enabled": false,
|
||||
"critical_events": ["SECURITY_VIOLATION", "POLICY_VIOLATION"],
|
||||
"notification_methods": ["email", "webhook"],
|
||||
"email_recipients": [],
|
||||
"webhook_url": ""
|
||||
}
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$config_file"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize audit log rotation
|
||||
init_audit_log_rotation() {
|
||||
local logrotate_config="$AUDIT_CONFIG_DIR/logrotate.conf"
|
||||
|
||||
if [[ ! -f "$logrotate_config" ]]; then
|
||||
cat > "$logrotate_config" << 'EOF'
|
||||
$AUDIT_LOGS_DIR/*.log {
|
||||
daily
|
||||
rotate 90
|
||||
compress
|
||||
delaycompress
|
||||
missingok
|
||||
notifempty
|
||||
create 640 root root
|
||||
postrotate
|
||||
systemctl reload rsyslog > /dev/null 2>&1 || true
|
||||
endscript
|
||||
}
|
||||
EOF
|
||||
chmod 644 "$logrotate_config"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize compliance templates
|
||||
init_compliance_templates() {
|
||||
# SOX compliance template
|
||||
local sox_template="$AUDIT_COMPLIANCE_DIR/sox-template.json"
|
||||
if [[ ! -f "$sox_template" ]]; then
|
||||
cat > "$sox_template" << 'EOF'
|
||||
{
|
||||
"framework": "SOX",
|
||||
"version": "2002",
|
||||
"requirements": {
|
||||
"access_control": {
|
||||
"user_management": true,
|
||||
"role_based_access": true,
|
||||
"privilege_escalation": true
|
||||
},
|
||||
"change_management": {
|
||||
"package_installation": true,
|
||||
"system_modifications": true,
|
||||
"deployment_approval": true
|
||||
},
|
||||
"audit_trail": {
|
||||
"comprehensive_logging": true,
|
||||
"log_integrity": true,
|
||||
"log_retention": true
|
||||
}
|
||||
},
|
||||
"reporting_periods": ["daily", "weekly", "monthly", "quarterly"]
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
|
||||
# PCI DSS compliance template
|
||||
local pci_template="$AUDIT_COMPLIANCE_DIR/pci-dss-template.json"
|
||||
if [[ ! -f "$pci_template" ]]; then
|
||||
cat > "$pci_template" << 'EOF'
|
||||
{
|
||||
"framework": "PCI-DSS",
|
||||
"version": "4.0",
|
||||
"requirements": {
|
||||
"access_control": {
|
||||
"unique_user_ids": true,
|
||||
"role_based_access": true,
|
||||
"privilege_minimization": true
|
||||
},
|
||||
"security_monitoring": {
|
||||
"audit_logging": true,
|
||||
"intrusion_detection": true,
|
||||
"vulnerability_scanning": true
|
||||
},
|
||||
"change_management": {
|
||||
"change_approval": true,
|
||||
"testing_procedures": true,
|
||||
"rollback_capabilities": true
|
||||
}
|
||||
},
|
||||
"reporting_periods": ["daily", "weekly", "monthly"]
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize query cache
|
||||
init_query_cache() {
|
||||
local query_cache="$AUDIT_QUERIES_DIR/query-cache.json"
|
||||
|
||||
if [[ ! -f "$query_cache" ]]; then
|
||||
cat > "$query_cache" << 'EOF'
|
||||
{
|
||||
"queries": {},
|
||||
"cached_results": {},
|
||||
"last_updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$query_cache"
|
||||
fi
|
||||
}
|
||||
|
||||
# Enhanced audit logging function
|
||||
log_audit_event() {
|
||||
local event_type="$1"
|
||||
local event_data="$2"
|
||||
local severity="${3:-INFO}"
|
||||
local user="${4:-$(whoami)}"
|
||||
local session_id="${5:-$$}"
|
||||
|
||||
if [[ "$AUDIT_ENABLED" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Create structured audit event
|
||||
local audit_event
|
||||
audit_event=$(cat << EOF
|
||||
{
|
||||
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"event_type": "$event_type",
|
||||
"severity": "$severity",
|
||||
"user": "$user",
|
||||
"session_id": "$session_id",
|
||||
"hostname": "$(hostname)",
|
||||
"data": $event_data
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
# Write to local audit log
|
||||
local audit_log="$AUDIT_LOGS_DIR/audit.log"
|
||||
echo "$audit_event" >> "$audit_log"
|
||||
|
||||
# Ship to remote destinations if enabled
|
||||
ship_audit_event "$audit_event"
|
||||
|
||||
# Log to syslog if enabled
|
||||
if [[ "$AUDIT_SYSLOG_ENABLED" == "true" ]]; then
|
||||
logger -t "apt-layer-audit" -p "local0.info" "$audit_event"
|
||||
fi
|
||||
}
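
# Example (illustrative): how callers are expected to emit structured events. The
# event types and payloads below are hypothetical.
#   log_audit_event "INSTALL_SUCCESS" '{"package": "curl", "layer": "web-tools"}' "INFO"
#   log_audit_event "SECURITY_VIOLATION" '{"detail": "unsigned layer rejected"}' "CRITICAL"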
|
||||
|
||||
# Ship audit event to remote destinations
|
||||
ship_audit_event() {
|
||||
local audit_event="$1"
|
||||
|
||||
# Ship to HTTP endpoint if configured
|
||||
if [[ -n "$AUDIT_HTTP_ENDPOINT" ]] && [[ -n "$AUDIT_HTTP_API_KEY" ]]; then
|
||||
ship_to_http_endpoint "$audit_event" &
|
||||
fi
|
||||
|
||||
# Ship to syslog if enabled
|
||||
if [[ "$AUDIT_SYSLOG_ENABLED" == "true" ]]; then
|
||||
ship_to_syslog "$audit_event" &
|
||||
fi
|
||||
}
|
||||
|
||||
# Ship audit event to HTTP endpoint
|
||||
ship_to_http_endpoint() {
|
||||
local audit_event="$1"
|
||||
local config_file="$AUDIT_CONFIG_DIR/audit-config.json"
|
||||
|
||||
local endpoint
|
||||
endpoint=$(jq -r '.remote_shipping.http_endpoint' "$config_file" 2>/dev/null || echo "$AUDIT_HTTP_ENDPOINT")
|
||||
local api_key
|
||||
api_key=$(jq -r '.remote_shipping.http_api_key' "$config_file" 2>/dev/null || echo "$AUDIT_HTTP_API_KEY")
|
||||
local timeout
|
||||
timeout=$(jq -r '.remote_shipping.http_timeout // 30' "$config_file" 2>/dev/null || echo "30")
|
||||
local retry_attempts
|
||||
retry_attempts=$(jq -r '.remote_shipping.retry_attempts // 3' "$config_file" 2>/dev/null || echo "3")
|
||||
|
||||
if [[ -z "$endpoint" ]] || [[ -z "$api_key" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
local attempt=0
|
||||
while [[ $attempt -lt $retry_attempts ]]; do
|
||||
if curl -s -X POST \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "Authorization: Bearer $api_key" \
|
||||
-H "User-Agent: apt-layer-audit/1.0" \
|
||||
--data "$audit_event" \
|
||||
--connect-timeout "$timeout" \
|
||||
"$endpoint" >/dev/null 2>&1; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
((attempt++))
|
||||
if [[ $attempt -lt $retry_attempts ]]; then
|
||||
sleep $((attempt * 2)) # Exponential backoff
|
||||
fi
|
||||
done
|
||||
|
||||
log_warning "Failed to ship audit event to HTTP endpoint after $retry_attempts attempts" "apt-layer"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Ship audit event to syslog
|
||||
ship_to_syslog() {
|
||||
local audit_event="$1"
|
||||
local config_file="$AUDIT_CONFIG_DIR/audit-config.json"
|
||||
|
||||
local facility
|
||||
facility=$(jq -r '.remote_shipping.syslog_facility // "local0"' "$config_file" 2>/dev/null || echo "local0")
|
||||
|
||||
logger -t "apt-layer-audit" -p "$facility.info" "$audit_event"
|
||||
}
|
||||
|
||||
# Query audit logs
|
||||
query_audit_logs() {
|
||||
local query_params=("$@")
|
||||
local output_format="${query_params[0]:-json}"
|
||||
local filters=("${query_params[@]:1}")
|
||||
|
||||
log_info "Querying audit logs with format: $output_format" "apt-layer"
|
||||
|
||||
local audit_log="$AUDIT_LOGS_DIR/audit.log"
|
||||
if [[ ! -f "$audit_log" ]]; then
|
||||
log_error "Audit log not found" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Build jq filter from parameters
|
||||
local jq_filter="."
|
||||
for filter in "${filters[@]}"; do
|
||||
case "$filter" in
|
||||
--user=*)
|
||||
local user="${filter#--user=}"
|
||||
jq_filter="$jq_filter | select(.user == \"$user\")"
|
||||
;;
|
||||
--event-type=*)
|
||||
local event_type="${filter#--event-type=}"
|
||||
jq_filter="$jq_filter | select(.event_type == \"$event_type\")"
|
||||
;;
|
||||
--severity=*)
|
||||
local severity="${filter#--severity=}"
|
||||
jq_filter="$jq_filter | select(.severity == \"$severity\")"
|
||||
;;
|
||||
--since=*)
|
||||
local since="${filter#--since=}"
|
||||
jq_filter="$jq_filter | select(.timestamp >= \"$since\")"
|
||||
;;
|
||||
--until=*)
|
||||
local until="${filter#--until=}"
|
||||
jq_filter="$jq_filter | select(.timestamp <= \"$until\")"
|
||||
;;
|
||||
--limit=*)
|
||||
local limit="${filter#--limit=}"
|
||||
jq_filter="$jq_filter | head -n $limit"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Execute query
|
||||
case "$output_format" in
|
||||
"json")
|
||||
jq -s "$jq_filter" "$audit_log" 2>/dev/null || echo "[]"
|
||||
;;
|
||||
"csv")
|
||||
echo "timestamp,event_type,severity,user,session_id,hostname,data"
|
||||
jq -r "$jq_filter | .[] | [.timestamp, .event_type, .severity, .user, .session_id, .hostname, .data] | @csv" "$audit_log" 2>/dev/null || true
|
||||
;;
|
||||
"table")
|
||||
echo "Timestamp | Event Type | Severity | User | Session ID | Hostname"
|
||||
echo "----------|------------|----------|------|------------|----------"
|
||||
jq -r "$jq_filter | .[] | \"\(.timestamp) | \(.event_type) | \(.severity) | \(.user) | \(.session_id) | \(.hostname)\"" "$audit_log" 2>/dev/null || true
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported output format: $output_format" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
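
# Example (illustrative) queries against the local audit log; filters are combined with AND:
#   query_audit_logs json --user=root --severity=INFO --limit=20
#   query_audit_logs csv --event-type=INSTALL_SUCCESS --since=2025-01-01T00:00:00Z
#   query_audit_logs table --until=2025-02-01T00:00:00Z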
|
||||
|
||||
# Export audit logs
|
||||
export_audit_logs() {
|
||||
local export_format="$1"
|
||||
local output_file="$2"
|
||||
local filters=("${@:3}")
|
||||
|
||||
if [[ -z "$export_format" ]]; then
|
||||
log_error "Export format required" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ -z "$output_file" ]]; then
|
||||
output_file="$AUDIT_EXPORTS_DIR/audit-export-$(date +%Y%m%d-%H%M%S).$export_format"
|
||||
fi
|
||||
|
||||
log_info "Exporting audit logs to: $output_file" "apt-layer"
|
||||
|
||||
# Create exports directory if it doesn't exist
|
||||
mkdir -p "$(dirname "$output_file")"
|
||||
|
||||
# Export with filters
|
||||
if query_audit_logs "$export_format" "${filters[@]}" > "$output_file"; then
|
||||
log_success "Audit logs exported to: $output_file" "apt-layer"
|
||||
log_audit_event "EXPORT_AUDIT_LOGS" "{\"format\": \"$export_format\", \"file\": \"$output_file\", \"filters\": $(printf '%s\n' "${filters[@]}" | jq -R . | jq -s .)}"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to export audit logs" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
}
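
# Example (illustrative): export recent install events to CSV. The output path is
# hypothetical; when omitted it defaults into $AUDIT_EXPORTS_DIR.
#   export_audit_logs csv "/tmp/install-events.csv" --event-type=INSTALL_SUCCESS --since=2025-01-01T00:00:00Z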
|
||||
|
||||
# Generate compliance report
|
||||
generate_compliance_report() {
|
||||
local framework="$1"
|
||||
local report_period="${2:-monthly}"
|
||||
local output_format="${3:-html}"
|
||||
|
||||
if [[ -z "$framework" ]]; then
|
||||
log_error "Compliance framework required" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Generating $framework compliance report for period: $report_period" "apt-layer"
|
||||
|
||||
local template_file="$AUDIT_COMPLIANCE_DIR/${framework,,}-template.json"
|
||||
if [[ ! -f "$template_file" ]]; then
|
||||
log_error "Compliance template not found: $template_file" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local report_file="$AUDIT_REPORTS_DIR/${framework,,}-compliance-$(date +%Y%m%d-%H%M%S).$output_format"
|
||||
|
||||
# Generate report based on framework
|
||||
case "$framework" in
|
||||
"SOX"|"sox")
|
||||
generate_sox_report "$template_file" "$report_period" "$output_format" "$report_file"
|
||||
;;
|
||||
"PCI-DSS"|"pci_dss")
|
||||
generate_pci_dss_report "$template_file" "$report_period" "$output_format" "$report_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported compliance framework: $framework" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
log_success "Compliance report generated: $report_file" "apt-layer"
|
||||
log_audit_event "GENERATE_COMPLIANCE_REPORT" "{\"framework\": \"$framework\", \"period\": \"$report_period\", \"format\": \"$output_format\", \"file\": \"$report_file\"}"
|
||||
return 0
|
||||
}
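
# Example (illustrative): generating the currently supported reports. Only the SOX and
# PCI-DSS templates exist, and PCI-DSS generation is still a stub.
#   generate_compliance_report SOX monthly html
#   generate_compliance_report SOX quarterly json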
|
||||
|
||||
# Generate SOX compliance report
|
||||
generate_sox_report() {
|
||||
local template_file="$1"
|
||||
local report_period="$2"
|
||||
local output_format="$3"
|
||||
local report_file="$4"
|
||||
|
||||
# Query relevant audit events
|
||||
local access_control_events
|
||||
access_control_events=$(query_audit_logs json --event-type=USER_ADD --event-type=USER_REMOVE --event-type=PERMISSION_CHECK)
|
||||
|
||||
local change_management_events
|
||||
change_management_events=$(query_audit_logs json --event-type=INSTALL_SUCCESS --event-type=REMOVE_SUCCESS --event-type=UPDATE_SUCCESS)
|
||||
|
||||
local audit_trail_events
|
||||
audit_trail_events=$(query_audit_logs json --event-type=EXPORT_AUDIT_LOGS --event-type=GENERATE_COMPLIANCE_REPORT)
|
||||
|
||||
# Generate report content
|
||||
case "$output_format" in
|
||||
"html")
|
||||
generate_sox_html_report "$template_file" "$report_period" "$access_control_events" "$change_management_events" "$audit_trail_events" "$report_file"
|
||||
;;
|
||||
"json")
|
||||
generate_sox_json_report "$template_file" "$report_period" "$access_control_events" "$change_management_events" "$audit_trail_events" "$report_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported output format for SOX report: $output_format" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Generate SOX HTML report
|
||||
generate_sox_html_report() {
|
||||
local template_file="$1"
|
||||
local report_period="$2"
|
||||
local access_control_events="$3"
|
||||
local change_management_events="$4"
|
||||
local audit_trail_events="$5"
|
||||
local report_file="$6"
|
||||
|
||||
cat > "$report_file" << 'EOF'
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>SOX Compliance Report - $report_period</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 20px; }
|
||||
.header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
|
||||
.section { margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }
|
||||
.requirement { margin: 10px 0; padding: 10px; background-color: #f9f9f9; }
|
||||
.compliant { border-left: 5px solid #4CAF50; }
|
||||
.non-compliant { border-left: 5px solid #f44336; }
|
||||
.warning { border-left: 5px solid #ff9800; }
|
||||
table { width: 100%; border-collapse: collapse; margin: 10px 0; }
|
||||
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
|
||||
th { background-color: #f2f2f2; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>SOX Compliance Report</h1>
|
||||
<p><strong>Period:</strong> $report_period</p>
|
||||
<p><strong>Generated:</strong> $(date -u +%Y-%m-%dT%H:%M:%SZ)</p>
|
||||
<p><strong>System:</strong> $(hostname)</p>
|
||||
</div>
|
||||
|
||||
<div class="section">
|
||||
<h2>Access Control (Section 404)</h2>
|
||||
<div class="requirement compliant">
|
||||
<h3>User Management</h3>
|
||||
<p>Status: Compliant</p>
|
||||
<p>User management events tracked and logged.</p>
|
||||
</div>
|
||||
<div class="requirement compliant">
|
||||
<h3>Role-Based Access Control</h3>
|
||||
<p>Status: Compliant</p>
|
||||
<p>RBAC implemented with proper permission validation.</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="section">
|
||||
<h2>Change Management (Section 404)</h2>
|
||||
<div class="requirement compliant">
|
||||
<h3>Package Installation Tracking</h3>
|
||||
<p>Status: Compliant</p>
|
||||
<p>All package installations are logged and tracked.</p>
|
||||
</div>
|
||||
<div class="requirement compliant">
|
||||
<h3>System Modifications</h3>
|
||||
<p>Status: Compliant</p>
|
||||
<p>System modifications are tracked through audit logs.</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="section">
|
||||
<h2>Audit Trail (Section 404)</h2>
|
||||
<div class="requirement compliant">
|
||||
<h3>Comprehensive Logging</h3>
|
||||
<p>Status: Compliant</p>
|
||||
<p>All critical operations are logged with timestamps and user information.</p>
|
||||
</div>
|
||||
<div class="requirement compliant">
|
||||
<h3>Log Integrity</h3>
|
||||
<p>Status: Compliant</p>
|
||||
<p>Audit logs are protected and tamper-evident.</p>
|
||||
</div>
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
}
|
||||
|
||||
# Generate SOX JSON report
|
||||
generate_sox_json_report() {
|
||||
local template_file="$1"
|
||||
local report_period="$2"
|
||||
local access_control_events="$3"
|
||||
local change_management_events="$4"
|
||||
local audit_trail_events="$5"
|
||||
local report_file="$6"
|
||||
|
||||
cat > "$report_file" << 'EOF'
|
||||
{
|
||||
"framework": "SOX",
|
||||
"version": "2002",
|
||||
"report_period": "$report_period",
|
||||
"generated_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"system": "$(hostname)",
|
||||
"compliance_status": "compliant",
|
||||
"requirements": {
|
||||
"access_control": {
|
||||
"status": "compliant",
|
||||
"user_management": {
|
||||
"status": "compliant",
|
||||
"description": "User management events tracked and logged"
|
||||
},
|
||||
"role_based_access": {
|
||||
"status": "compliant",
|
||||
"description": "RBAC implemented with proper permission validation"
|
||||
}
|
||||
},
|
||||
"change_management": {
|
||||
"status": "compliant",
|
||||
"package_installation": {
|
||||
"status": "compliant",
|
||||
"description": "All package installations are logged and tracked"
|
||||
},
|
||||
"system_modifications": {
|
||||
"status": "compliant",
|
||||
"description": "System modifications are tracked through audit logs"
|
||||
}
|
||||
},
|
||||
"audit_trail": {
|
||||
"status": "compliant",
|
||||
"comprehensive_logging": {
|
||||
"status": "compliant",
|
||||
"description": "All critical operations are logged with timestamps and user information"
|
||||
},
|
||||
"log_integrity": {
|
||||
"status": "compliant",
|
||||
"description": "Audit logs are protected and tamper-evident"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
# Generate PCI DSS compliance report
|
||||
generate_pci_dss_report() {
|
||||
local template_file="$1"
|
||||
local report_period="$2"
|
||||
local output_format="$3"
|
||||
local report_file="$4"
|
||||
|
||||
# Similar implementation to SOX but with PCI DSS specific requirements
|
||||
log_info "PCI DSS report generation not yet implemented" "apt-layer"
|
||||
return 1
|
||||
}
|
||||
|
||||
# List audit reports
|
||||
list_audit_reports() {
|
||||
log_info "Listing audit reports" "apt-layer"
|
||||
|
||||
echo "=== Audit Reports ==="
|
||||
|
||||
local reports
|
||||
reports=$(find "$AUDIT_REPORTS_DIR" -name "*.html" -o -name "*.json" -o -name "*.csv" 2>/dev/null | sort -r || echo "")
|
||||
|
||||
if [[ -n "$reports" ]]; then
|
||||
for report in $reports; do
|
||||
local report_name
|
||||
report_name=$(basename "$report")
|
||||
local report_size
|
||||
report_size=$(du -h "$report" | cut -f1)
|
||||
local report_date
|
||||
report_date=$(stat -c %y "$report" 2>/dev/null || echo "unknown")
|
||||
|
||||
echo " $report_name ($report_size) - $report_date"
|
||||
done
|
||||
else
|
||||
log_info "No audit reports found" "apt-layer"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Clean up old audit logs
|
||||
cleanup_old_audit_logs() {
|
||||
local max_age_days="${1:-90}"
|
||||
|
||||
log_info "Cleaning up audit logs older than $max_age_days days" "apt-layer"
|
||||
|
||||
local removed_count=0
|
||||
|
||||
# Clean up old log files
|
||||
while IFS= read -r -d '' log_file; do
|
||||
local file_age
|
||||
file_age=$(find "$log_file" -mtime +$max_age_days 2>/dev/null | wc -l)
|
||||
|
||||
if [[ $file_age -gt 0 ]]; then
|
||||
log_info "Removing old audit log: $(basename "$log_file")" "apt-layer"
|
||||
rm -f "$log_file"
|
||||
((removed_count++))
|
||||
fi
|
||||
done < <(find "$AUDIT_LOGS_DIR" -name "*.log*" -print0 2>/dev/null)
|
||||
|
||||
# Clean up old exports
|
||||
while IFS= read -r -d '' export_file; do
|
||||
local file_age
|
||||
file_age=$(find "$export_file" -mtime +$max_age_days 2>/dev/null | wc -l)
|
||||
|
||||
if [[ $file_age -gt 0 ]]; then
|
||||
log_info "Removing old export: $(basename "$export_file")" "apt-layer"
|
||||
rm -f "$export_file"
|
||||
((removed_count++))
|
||||
fi
|
||||
done < <(find "$AUDIT_EXPORTS_DIR" -name "*" -print0 2>/dev/null)
|
||||
|
||||
log_success "Cleaned up $removed_count old audit files" "apt-layer"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Get audit system status
|
||||
get_audit_status() {
|
||||
log_info "Getting audit system status" "apt-layer"
|
||||
|
||||
echo "=== Audit System Status ==="
|
||||
|
||||
# General status
|
||||
echo "General:"
|
||||
echo " Enabled: $AUDIT_ENABLED"
|
||||
echo " Log Level: $AUDIT_LOG_LEVEL"
|
||||
echo " Retention Days: $AUDIT_RETENTION_DAYS"
|
||||
echo " Rotation Size: ${AUDIT_ROTATION_SIZE_MB}MB"
|
||||
|
||||
# Remote shipping status
|
||||
echo ""
|
||||
echo "Remote Shipping:"
|
||||
echo " Enabled: $AUDIT_REMOTE_SHIPPING"
|
||||
echo " Syslog: $AUDIT_SYSLOG_ENABLED"
|
||||
echo " HTTP Endpoint: ${AUDIT_HTTP_ENDPOINT:-not configured}"
|
||||
|
||||
# Log statistics
|
||||
echo ""
|
||||
echo "Log Statistics:"
|
||||
local audit_log="$AUDIT_LOGS_DIR/audit.log"
|
||||
if [[ -f "$audit_log" ]]; then
|
||||
local total_entries
|
||||
total_entries=$(wc -l < "$audit_log" 2>/dev/null || echo "0")
|
||||
echo " Total Entries: $total_entries"
|
||||
|
||||
local recent_entries
|
||||
recent_entries=$(tail -100 "$audit_log" 2>/dev/null | wc -l || echo "0")
|
||||
echo " Recent Entries (last 100): $recent_entries"
|
||||
|
||||
local log_size
|
||||
log_size=$(du -h "$audit_log" | cut -f1 2>/dev/null || echo "unknown")
|
||||
echo " Log Size: $log_size"
|
||||
else
|
||||
echo " Audit log: not available"
|
||||
fi
|
||||
|
||||
# Report statistics
|
||||
echo ""
|
||||
echo "Report Statistics:"
|
||||
local report_count
|
||||
report_count=$(find "$AUDIT_REPORTS_DIR" -name "*.html" -o -name "*.json" -o -name "*.csv" 2>/dev/null | wc -l || echo "0")
|
||||
echo " Total Reports: $report_count"
|
||||
|
||||
local export_count
|
||||
export_count=$(find "$AUDIT_EXPORTS_DIR" -name "*" 2>/dev/null | wc -l || echo "0")
|
||||
echo " Total Exports: $export_count"
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# =============================================================================
|
||||
# INTEGRATION FUNCTIONS
|
||||
# =============================================================================
|
||||
|
||||
# Initialize audit reporting on script startup
|
||||
init_audit_reporting_on_startup() {
|
||||
# Only initialize if not already done
|
||||
if [[ ! -d "$AUDIT_STATE_DIR" ]]; then
|
||||
init_audit_reporting
|
||||
fi
|
||||
}
|
||||
|
||||
# Cleanup audit reporting on script exit
|
||||
cleanup_audit_reporting_on_exit() {
|
||||
# Clean up temporary files
|
||||
rm -f "$AUDIT_QUERIES_DIR"/temp-* 2>/dev/null || true
|
||||
rm -f "$AUDIT_EXPORTS_DIR"/temp-* 2>/dev/null || true
|
||||
}
|
||||
|
||||
# Register cleanup function
|
||||
trap cleanup_audit_reporting_on_exit EXIT
|
||||
|
|
@@ -1,878 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Ubuntu uBlue apt-layer Automated Security Scanning
|
||||
# Provides enterprise-grade security scanning, CVE checking, and policy enforcement
|
||||
# for comprehensive security monitoring and vulnerability management
|
||||
|
||||
# =============================================================================
|
||||
# SECURITY SCANNING FUNCTIONS
|
||||
# =============================================================================
|
||||
|
||||
# Security scanning configuration (with fallbacks for when particle-config.sh is not loaded)
|
||||
SECURITY_CONFIG_DIR="${UBLUE_CONFIG_DIR:-/etc/ubuntu-ublue}/security"
|
||||
SECURITY_STATE_DIR="${UBLUE_ROOT:-/var/lib/particle-os}/security"
|
||||
SECURITY_SCANS_DIR="$SECURITY_STATE_DIR/scans"
|
||||
SECURITY_REPORTS_DIR="$SECURITY_STATE_DIR/reports"
|
||||
SECURITY_CACHE_DIR="$SECURITY_STATE_DIR/cache"
|
||||
SECURITY_POLICIES_DIR="$SECURITY_STATE_DIR/policies"
|
||||
SECURITY_CVE_DB_DIR="$SECURITY_STATE_DIR/cve-db"
|
||||
|
||||
# Security configuration
|
||||
SECURITY_ENABLED="${SECURITY_ENABLED:-true}"
|
||||
SECURITY_SCAN_LEVEL="${SECURITY_SCAN_LEVEL:-standard}"
|
||||
SECURITY_AUTO_SCAN="${SECURITY_AUTO_SCAN:-false}"
|
||||
SECURITY_CVE_CHECKING="${SECURITY_CVE_CHECKING:-true}"
|
||||
SECURITY_POLICY_ENFORCEMENT="${SECURITY_POLICY_ENFORCEMENT:-true}"
|
||||
SECURITY_SCAN_INTERVAL_HOURS="${SECURITY_SCAN_INTERVAL_HOURS:-24}"
|
||||
SECURITY_REPORT_RETENTION_DAYS="${SECURITY_REPORT_RETENTION_DAYS:-90}"
|
||||
|
||||
# Initialize security scanning system
|
||||
init_security_scanning() {
|
||||
log_info "Initializing automated security scanning system" "apt-layer"
|
||||
|
||||
# Create security scanning directories
|
||||
mkdir -p "$SECURITY_CONFIG_DIR" "$SECURITY_STATE_DIR" "$SECURITY_SCANS_DIR"
|
||||
mkdir -p "$SECURITY_REPORTS_DIR" "$SECURITY_CACHE_DIR" "$SECURITY_POLICIES_DIR"
|
||||
mkdir -p "$SECURITY_CVE_DB_DIR"
|
||||
|
||||
# Set proper permissions
|
||||
chmod 755 "$SECURITY_CONFIG_DIR" "$SECURITY_STATE_DIR"
|
||||
chmod 750 "$SECURITY_SCANS_DIR" "$SECURITY_REPORTS_DIR" "$SECURITY_CACHE_DIR"
|
||||
chmod 700 "$SECURITY_POLICIES_DIR" "$SECURITY_CVE_DB_DIR"
|
||||
|
||||
# Initialize security configuration
|
||||
init_security_config
|
||||
|
||||
# Initialize CVE database
|
||||
init_cve_database
|
||||
|
||||
# Initialize security policies
|
||||
init_security_policies
|
||||
|
||||
# Initialize scan cache
|
||||
init_scan_cache
|
||||
|
||||
log_success "Automated security scanning system initialized" "apt-layer"
|
||||
}
|
||||
|
||||
# Initialize security configuration
|
||||
init_security_config() {
|
||||
local config_file="$SECURITY_CONFIG_DIR/security-config.json"
|
||||
|
||||
if [[ ! -f "$config_file" ]]; then
|
||||
cat > "$config_file" << EOF
|
||||
{
|
||||
"security": {
|
||||
"enabled": true,
|
||||
"scan_level": "standard",
|
||||
"auto_scan": false,
|
||||
"cve_checking": true,
|
||||
"policy_enforcement": true,
|
||||
"scan_interval_hours": 24,
|
||||
"report_retention_days": 90
|
||||
},
|
||||
"scanning": {
|
||||
"package_scanning": true,
|
||||
"layer_scanning": true,
|
||||
"system_scanning": true,
|
||||
"dependency_scanning": true,
|
||||
"vulnerability_scanning": true
|
||||
},
|
||||
"cve": {
|
||||
"database_url": "https://nvd.nist.gov/vuln/data-feeds",
|
||||
"update_interval_hours": 6,
|
||||
"severity_threshold": "MEDIUM",
|
||||
"auto_update": true
|
||||
},
|
||||
"policies": {
|
||||
"critical_vulnerabilities": "BLOCK",
|
||||
"high_vulnerabilities": "WARN",
|
||||
"medium_vulnerabilities": "LOG",
|
||||
"low_vulnerabilities": "LOG",
|
||||
"unknown_severity": "WARN"
|
||||
},
|
||||
"reporting": {
|
||||
"auto_generate_reports": false,
|
||||
"report_format": "html",
|
||||
"include_recommendations": true,
|
||||
"include_remediation": true
|
||||
}
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$config_file"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize CVE database
|
||||
init_cve_database() {
|
||||
local cve_db_file="$SECURITY_CVE_DB_DIR/cve-database.json"
|
||||
|
||||
if [[ ! -f "$cve_db_file" ]]; then
|
||||
cat > "$cve_db_file" << EOF
|
||||
{
|
||||
"metadata": {
|
||||
"version": "1.0",
|
||||
"last_updated": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"source": "NVD",
|
||||
"total_cves": 0
|
||||
},
|
||||
"cves": {},
|
||||
"packages": {},
|
||||
"severity_levels": {
|
||||
"CRITICAL": 4,
|
||||
"HIGH": 3,
|
||||
"MEDIUM": 2,
|
||||
"LOW": 1,
|
||||
"UNKNOWN": 0
|
||||
}
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$cve_db_file"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize security policies
|
||||
init_security_policies() {
|
||||
# Default security policy
|
||||
local default_policy="$SECURITY_POLICIES_DIR/default-policy.json"
|
||||
if [[ ! -f "$default_policy" ]]; then
|
||||
cat > "$default_policy" << EOF
|
||||
{
|
||||
"policy_name": "default",
|
||||
"version": "1.0",
|
||||
"description": "Default security policy for Ubuntu uBlue apt-layer",
|
||||
"rules": {
|
||||
"critical_vulnerabilities": {
|
||||
"action": "BLOCK",
|
||||
"description": "Block installation of packages with critical vulnerabilities"
|
||||
},
|
||||
"high_vulnerabilities": {
|
||||
"action": "WARN",
|
||||
"description": "Warn about packages with high vulnerabilities"
|
||||
},
|
||||
"medium_vulnerabilities": {
|
||||
"action": "LOG",
|
||||
"description": "Log packages with medium vulnerabilities"
|
||||
},
|
||||
"low_vulnerabilities": {
|
||||
"action": "LOG",
|
||||
"description": "Log packages with low vulnerabilities"
|
||||
},
|
||||
"unknown_severity": {
|
||||
"action": "WARN",
|
||||
"description": "Warn about packages with unknown vulnerability status"
|
||||
}
|
||||
},
|
||||
"exceptions": [],
|
||||
"enabled": true
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$default_policy"
|
||||
fi
|
||||
}
|
||||
|
||||
# Initialize scan cache
|
||||
init_scan_cache() {
|
||||
local cache_file="$SECURITY_CACHE_DIR/scan-cache.json"
|
||||
|
||||
if [[ ! -f "$cache_file" ]]; then
|
||||
cat > "$cache_file" << EOF
|
||||
{
|
||||
"cache_metadata": {
|
||||
"version": "1.0",
|
||||
"created": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"last_cleaned": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
},
|
||||
"package_scans": {},
|
||||
"layer_scans": {},
|
||||
"system_scans": {},
|
||||
"cve_checks": {}
|
||||
}
|
||||
EOF
|
||||
chmod 600 "$cache_file"
|
||||
fi
|
||||
}
|
||||
|
||||
# Scan package for vulnerabilities
|
||||
scan_package() {
|
||||
local package_name="$1"
|
||||
local package_version="${2:-}"
|
||||
local scan_level="${3:-standard}"
|
||||
|
||||
log_info "Scanning package: $package_name" "apt-layer"
|
||||
|
||||
# Check cache first
|
||||
local cache_key="${package_name}_${package_version}_${scan_level}"
|
||||
local cached_result
|
||||
cached_result=$(get_cached_scan_result "package_scans" "$cache_key")
|
||||
|
||||
if [[ -n "$cached_result" ]]; then
|
||||
log_info "Using cached scan result for $package_name" "apt-layer"
|
||||
echo "$cached_result"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Perform package scan
|
||||
local scan_result
|
||||
scan_result=$(perform_package_scan "$package_name" "$package_version" "$scan_level")
|
||||
|
||||
# Cache the result
|
||||
cache_scan_result "package_scans" "$cache_key" "$scan_result"
|
||||
|
||||
# Apply security policy
|
||||
apply_security_policy "$package_name" "$scan_result"
|
||||
|
||||
echo "$scan_result"
|
||||
}
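
# Example (illustrative): scanning individual packages. Results are cached per
# package/version/scan level and the active security policy is applied to each result.
#   scan_package "openssl" "3.0.2-0ubuntu1" "standard"
#   scan_package "curl"                                  # unknown version, default scan level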
|
||||
|
||||
# Perform package vulnerability scan
|
||||
perform_package_scan() {
|
||||
local package_name="$1"
|
||||
local package_version="$2"
|
||||
local scan_level="$3"
|
||||
|
||||
# Create scan result structure
|
||||
local scan_result
|
||||
scan_result=$(cat << EOF
|
||||
{
|
||||
"package": "$package_name",
|
||||
"version": "$package_version",
|
||||
"scan_level": "$scan_level",
|
||||
"scan_timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"vulnerabilities": [],
|
||||
"security_score": 100,
|
||||
"recommendations": [],
|
||||
"status": "clean"
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
# Check for known vulnerabilities
|
||||
local vulnerabilities
|
||||
vulnerabilities=$(check_package_vulnerabilities "$package_name" "$package_version")
|
||||
|
||||
if [[ -n "$vulnerabilities" ]]; then
|
||||
# Update scan result with vulnerabilities
|
||||
scan_result=$(echo "$scan_result" | jq --argjson vulns "$vulnerabilities" '.vulnerabilities = $vulns')
|
||||
|
||||
# Calculate security score
|
||||
local security_score
|
||||
security_score=$(calculate_security_score "$vulnerabilities")
|
||||
scan_result=$(echo "$scan_result" | jq --arg score "$security_score" '.security_score = ($score | tonumber)')
|
||||
|
||||
# Update status
|
||||
scan_result=$(echo "$scan_result" | jq '.status = "vulnerable"')
|
||||
|
||||
# Generate recommendations
|
||||
local recommendations
|
||||
recommendations=$(generate_security_recommendations "$vulnerabilities")
|
||||
scan_result=$(echo "$scan_result" | jq --argjson recs "$recommendations" '.recommendations = $recs')
|
||||
fi
|
||||
|
||||
echo "$scan_result"
|
||||
}
|
||||
|
||||
# Check package for known vulnerabilities
|
||||
check_package_vulnerabilities() {
|
||||
local package_name="$1"
|
||||
local package_version="$2"
|
||||
|
||||
local cve_db_file="$SECURITY_CVE_DB_DIR/cve-database.json"
|
||||
|
||||
if [[ ! -f "$cve_db_file" ]]; then
|
||||
log_warning "CVE database not found, skipping vulnerability check" "apt-layer"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Search for package in CVE database
|
||||
local vulnerabilities
|
||||
vulnerabilities=$(jq -r --arg pkg "$package_name" '.packages[$pkg] // []' "$cve_db_file" 2>/dev/null || echo "[]")
|
||||
|
||||
if [[ "$vulnerabilities" == "[]" ]]; then
|
||||
# Try alternative package name formats
|
||||
local alt_names=("${package_name}-dev" "${package_name}-common" "lib${package_name}")
|
||||
|
||||
for alt_name in "${alt_names[@]}"; do
|
||||
local alt_vulns
|
||||
alt_vulns=$(jq -r --arg pkg "$alt_name" '.packages[$pkg] // []' "$cve_db_file" 2>/dev/null || echo "[]")
|
||||
|
||||
if [[ "$alt_vulns" != "[]" ]]; then
|
||||
vulnerabilities="$alt_vulns"
|
||||
break
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
echo "$vulnerabilities"
|
||||
}
|
||||
|
||||
# Calculate security score based on vulnerabilities
|
||||
calculate_security_score() {
|
||||
local vulnerabilities="$1"
|
||||
|
||||
local score=100
|
||||
local critical_count=0
|
||||
local high_count=0
|
||||
local medium_count=0
|
||||
local low_count=0
|
||||
|
||||
# Count vulnerabilities by severity
|
||||
critical_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.severity == "CRITICAL")] | length' 2>/dev/null || echo "0")
|
||||
high_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.severity == "HIGH")] | length' 2>/dev/null || echo "0")
|
||||
medium_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.severity == "MEDIUM")] | length' 2>/dev/null || echo "0")
|
||||
low_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.severity == "LOW")] | length' 2>/dev/null || echo "0")
|
||||
|
||||
# Calculate score (critical: -20, high: -10, medium: -5, low: -1)
|
||||
score=$((score - (critical_count * 20) - (high_count * 10) - (medium_count * 5) - low_count))
|
||||
|
||||
# Ensure score doesn't go below 0
|
||||
if [[ $score -lt 0 ]]; then
|
||||
score=0
|
||||
fi
|
||||
|
||||
echo "$score"
|
||||
}
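
# Worked example of the scoring above: 1 critical, 1 high and 2 medium findings give
# 100 - 20 - 10 - (2 * 5) = 60; a result below 0 is clamped to 0.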
|
||||
|
||||
# Generate security recommendations
|
||||
generate_security_recommendations() {
|
||||
local vulnerabilities="$1"
|
||||
|
||||
local recommendations="[]"
|
||||
|
||||
# Check for critical vulnerabilities
|
||||
local critical_count
|
||||
critical_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.severity == "CRITICAL")] | length' 2>/dev/null || echo "0")
|
||||
|
||||
if [[ $critical_count -gt 0 ]]; then
|
||||
recommendations=$(echo "$recommendations" | jq '. += ["Do not install packages with critical vulnerabilities"]')
|
||||
fi
|
||||
|
||||
# Check for high vulnerabilities
|
||||
local high_count
|
||||
high_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.severity == "HIGH")] | length' 2>/dev/null || echo "0")
|
||||
|
||||
if [[ $high_count -gt 0 ]]; then
|
||||
recommendations=$(echo "$recommendations" | jq '. += ["Consider alternative packages or wait for security updates"]')
|
||||
fi
|
||||
|
||||
# Check for outdated packages
|
||||
local outdated_count
|
||||
outdated_count=$(echo "$vulnerabilities" | jq -r '[.[] | select(.type == "outdated")] | length' 2>/dev/null || echo "0")
|
||||
|
||||
if [[ $outdated_count -gt 0 ]]; then
|
||||
recommendations=$(echo "$recommendations" | jq '. += ["Update to latest version when available"]')
|
||||
fi
|
||||
|
||||
echo "$recommendations"
|
||||
}
|
||||
|
||||
# Apply security policy to scan result
|
||||
apply_security_policy() {
|
||||
local package_name="$1"
|
||||
local scan_result="$2"
|
||||
|
||||
local policy_file="$SECURITY_POLICIES_DIR/default-policy.json"
|
||||
|
||||
if [[ ! -f "$policy_file" ]]; then
|
||||
log_warning "Security policy not found, skipping policy enforcement" "apt-layer"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Get highest severity vulnerability
|
||||
local highest_severity
|
||||
highest_severity=$(echo "$scan_result" | jq -r '.vulnerabilities | map(.severity) | max_by(if . == "CRITICAL" then 4 elif . == "HIGH" then 3 elif . == "MEDIUM" then 2 elif . == "LOW" then 1 else 0 end) // "UNKNOWN"' 2>/dev/null || echo "UNKNOWN")
|
||||
|
||||
# Get policy action for this severity
|
||||
local policy_action
|
||||
policy_action=$(jq -r --arg sev "$highest_severity" '($sev | if . == "UNKNOWN" then "unknown_severity" else ascii_downcase + "_vulnerabilities" end) as $key | .rules[$key].action // "LOG"' "$policy_file" 2>/dev/null || echo "LOG")
|
||||
|
||||
case "$policy_action" in
|
||||
"BLOCK")
|
||||
log_error "Security policy BLOCKED installation of $package_name (severity: $highest_severity)" "apt-layer"
|
||||
log_audit_event "SECURITY_POLICY_BLOCK" "{\"package\": \"$package_name\", \"severity\": \"$highest_severity\", \"policy_action\": \"$policy_action\"}" "WARNING"
|
||||
return 1
|
||||
;;
|
||||
"WARN")
|
||||
log_warning "Security policy WARNING for $package_name (severity: $highest_severity)" "apt-layer"
|
||||
log_audit_event "SECURITY_POLICY_WARN" "{\"package\": \"$package_name\", \"severity\": \"$highest_severity\", \"policy_action\": \"$policy_action\"}" "WARNING"
|
||||
;;
|
||||
"LOG")
|
||||
log_info "Security policy LOGGED $package_name (severity: $highest_severity)" "apt-layer"
|
||||
log_audit_event "SECURITY_POLICY_LOG" "{\"package\": \"$package_name\", \"severity\": \"$highest_severity\", \"policy_action\": \"$policy_action\"}" "INFO"
|
||||
;;
|
||||
*)
|
||||
log_info "Security policy action $policy_action for $package_name (severity: $highest_severity)" "apt-layer"
|
||||
;;
|
||||
esac
|
||||
|
||||
return 0
|
||||
}
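
# Example (illustrative): direct policy enforcement against a scan result produced by
# perform_package_scan. The package name below is made up.
#   scan_json=$(perform_package_scan "libexample" "1.0-1" "standard")
#   apply_security_policy "libexample" "$scan_json" || echo "blocked by security policy"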
|
||||
|
||||
# Scan layer for vulnerabilities
|
||||
scan_layer() {
|
||||
local layer_path="$1"
|
||||
local scan_level="${2:-standard}"
|
||||
|
||||
log_info "Scanning layer: $layer_path" "apt-layer"
|
||||
|
||||
# Check cache first
|
||||
local cache_key="${layer_path}_${scan_level}"
|
||||
local cached_result
|
||||
cached_result=$(get_cached_scan_result "layer_scans" "$cache_key")
|
||||
|
||||
if [[ -n "$cached_result" ]]; then
|
||||
log_info "Using cached scan result for layer" "apt-layer"
|
||||
echo "$cached_result"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Extract packages from layer
|
||||
local packages
|
||||
packages=$(extract_packages_from_layer "$layer_path")
|
||||
|
||||
# Scan each package
|
||||
local layer_scan_result
|
||||
layer_scan_result=$(cat << EOF
|
||||
{
|
||||
"layer": "$layer_path",
|
||||
"scan_level": "$scan_level",
|
||||
"scan_timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"packages": [],
|
||||
"total_vulnerabilities": 0,
|
||||
"security_score": 100,
|
||||
"status": "clean"
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
local total_vulnerabilities=0
|
||||
local total_score=0
|
||||
local package_count=0
|
||||
|
||||
while IFS= read -r package; do
|
||||
if [[ -n "$package" ]]; then
|
||||
local package_scan
|
||||
package_scan=$(scan_package "$package" "" "$scan_level")
|
||||
|
||||
# Add package to layer scan result
|
||||
layer_scan_result=$(echo "$layer_scan_result" | jq --argjson pkg_scan "$package_scan" '.packages += [$pkg_scan]')
|
||||
|
||||
# Count vulnerabilities
|
||||
local vuln_count
|
||||
vuln_count=$(echo "$package_scan" | jq -r '.vulnerabilities | length' 2>/dev/null || echo "0")
|
||||
total_vulnerabilities=$((total_vulnerabilities + vuln_count))
|
||||
|
||||
# Accumulate score
|
||||
local pkg_score
|
||||
pkg_score=$(echo "$package_scan" | jq -r '.security_score' 2>/dev/null || echo "100")
|
||||
total_score=$((total_score + pkg_score))
|
||||
package_count=$((package_count + 1))
|
||||
fi
|
||||
done <<< "$packages"
|
||||
|
||||
# Calculate average security score
|
||||
if [[ $package_count -gt 0 ]]; then
|
||||
local avg_score=$((total_score / package_count))
|
||||
layer_scan_result=$(echo "$layer_scan_result" | jq --arg score "$avg_score" '.security_score = ($score | tonumber)')
|
||||
fi
|
||||
|
||||
# Update total vulnerabilities
|
||||
layer_scan_result=$(echo "$layer_scan_result" | jq --arg vulns "$total_vulnerabilities" '.total_vulnerabilities = ($vulns | tonumber)')
|
||||
|
||||
# Update status
|
||||
if [[ $total_vulnerabilities -gt 0 ]]; then
|
||||
layer_scan_result=$(echo "$layer_scan_result" | jq '.status = "vulnerable"')
|
||||
fi
|
||||
|
||||
# Cache the result
|
||||
cache_scan_result "layer_scans" "$cache_key" "$layer_scan_result"
|
||||
|
||||
echo "$layer_scan_result"
|
||||
}
|
||||
|
||||
# Extract packages from layer
|
||||
extract_packages_from_layer() {
|
||||
local layer_path="$1"
|
||||
|
||||
# This is a simplified implementation
|
||||
# In a real implementation, you would extract the actual package list from the layer
|
||||
local temp_dir
|
||||
temp_dir=$(mktemp -d)
|
||||
|
||||
# Mount layer and extract package information
|
||||
if mount_layer "$layer_path" "$temp_dir"; then
|
||||
# Extract package list (simplified)
|
||||
local packages
|
||||
packages=$(find "$temp_dir" -name "*.deb" -exec basename {} \; 2>/dev/null | sed 's/_.*$//' || echo "")
|
||||
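# A fuller implementation would more likely read the dpkg status database from
# the mounted layer instead of looking for leftover .deb files (sketch, assuming
# the layer contains a standard Debian root filesystem):
#
#   if [[ -f "$temp_dir/var/lib/dpkg/status" ]]; then
#       packages=$(awk '/^Package: /{print $2}' "$temp_dir/var/lib/dpkg/status")
#   fi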
|
||||
# Cleanup
|
||||
umount_layer "$temp_dir"
|
||||
rmdir "$temp_dir" 2>/dev/null || true
|
||||
|
||||
echo "$packages"
|
||||
else
|
||||
log_warning "Failed to mount layer for package extraction" "apt-layer"
|
||||
echo ""
|
||||
fi
|
||||
}
|
||||
|
||||
# Mount layer for scanning
|
||||
mount_layer() {
|
||||
local layer_path="$1"
|
||||
local mount_point="$2"
|
||||
|
||||
# Simplified mount implementation
|
||||
# In a real implementation, you would use appropriate mounting for the layer format
|
||||
if [[ -f "$layer_path" ]]; then
|
||||
# For squashfs layers
|
||||
mount -t squashfs "$layer_path" "$mount_point" 2>/dev/null || return 1
|
||||
elif [[ -d "$layer_path" ]]; then
|
||||
# For directory layers
|
||||
mount --bind "$layer_path" "$mount_point" 2>/dev/null || return 1
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
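# Note: composefs images are not handled above. A fuller implementation could
# detect them and delegate to the official mount helper, roughly (sketch; the
# objects directory and exact options depend on how the layer was created):
#
#   mount.composefs -o basedir="$objects_dir" "$layer_path" "$mount_point"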
|
||||
# Unmount layer
|
||||
umount_layer() {
|
||||
local mount_point="$1"
|
||||
|
||||
umount "$mount_point" 2>/dev/null || true
|
||||
}
|
||||
|
||||
# Get cached scan result
|
||||
get_cached_scan_result() {
|
||||
local cache_type="$1"
|
||||
local cache_key="$2"
|
||||
|
||||
local cache_file="$SECURITY_CACHE_DIR/scan-cache.json"
|
||||
|
||||
if [[ ! -f "$cache_file" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if cache entry exists and is not expired
|
||||
local cached_result
|
||||
cached_result=$(jq -r --arg type "$cache_type" --arg key "$cache_key" '.[$type][$key] // empty' "$cache_file" 2>/dev/null)
|
||||
|
||||
if [[ -n "$cached_result" ]]; then
|
||||
# Check if cache is still valid (24 hours)
|
||||
local cache_timestamp
|
||||
cache_timestamp=$(echo "$cached_result" | jq -r '.cache_timestamp' 2>/dev/null || echo "")
|
||||
|
||||
if [[ -n "$cache_timestamp" ]]; then
|
||||
local cache_age
|
||||
cache_age=$(($(date +%s) - $(date -d "$cache_timestamp" +%s)))
|
||||
|
||||
if [[ $cache_age -lt 86400 ]]; then # 24 hours
|
||||
echo "$cached_result"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
# Cache scan result
|
||||
cache_scan_result() {
|
||||
local cache_type="$1"
|
||||
local cache_key="$2"
|
||||
local scan_result="$3"
|
||||
|
||||
local cache_file="$SECURITY_CACHE_DIR/scan-cache.json"
|
||||
|
||||
# Add cache timestamp
|
||||
local cached_result
|
||||
cached_result=$(echo "$scan_result" | jq --arg timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)" '.cache_timestamp = $timestamp')
|
||||
|
||||
# Update cache file
|
||||
jq --arg type "$cache_type" --arg key "$cache_key" --argjson result "$cached_result" '.[$type][$key] = $result' "$cache_file" > "$cache_file.tmp" && mv "$cache_file.tmp" "$cache_file" 2>/dev/null || true
|
||||
}
|
||||
|
||||
# Update CVE database
|
||||
update_cve_database() {
|
||||
log_info "Updating CVE database" "apt-layer"
|
||||
|
||||
local cve_db_file="$SECURITY_CVE_DB_DIR/cve-database.json"
|
||||
local config_file="$SECURITY_CONFIG_DIR/security-config.json"
|
||||
|
||||
# Get database URL from config
|
||||
local db_url
|
||||
db_url=$(jq -r '.cve.database_url // "https://nvd.nist.gov/vuln/data-feeds"' "$config_file" 2>/dev/null || echo "https://nvd.nist.gov/vuln/data-feeds")
|
||||
|
||||
# Download latest CVE data (simplified implementation)
|
||||
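# Note: the data-feeds URL above is an HTML landing page rather than a
# machine-readable feed. A production implementation would more likely page
# through the NVD 2.0 REST API, roughly (endpoint and parameters assumed from
# public NVD documentation):
#
#   curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?resultsPerPage=2000&startIndex=0" \
#       | jq '.vulnerabilities'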
local temp_file
|
||||
temp_file=$(mktemp)
|
||||
|
||||
if curl -s -L "$db_url" > "$temp_file" 2>/dev/null; then
|
||||
# Process and update database (simplified)
|
||||
log_success "CVE database updated successfully" "apt-layer"
|
||||
log_audit_event "CVE_DATABASE_UPDATE" "{\"status\": \"success\", \"source\": \"$db_url\"}" "INFO"
|
||||
else
|
||||
log_error "Failed to update CVE database" "apt-layer"
|
||||
log_audit_event "CVE_DATABASE_UPDATE" "{\"status\": \"failed\", \"source\": \"$db_url\"}" "ERROR"
|
||||
return 1
|
||||
fi
|
||||
|
||||
rm -f "$temp_file"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Generate security report
|
||||
generate_security_report() {
|
||||
local report_type="$1"
|
||||
local output_format="${2:-html}"
|
||||
local scan_level="${3:-standard}"
|
||||
|
||||
log_info "Generating security report: $report_type" "apt-layer"
|
||||
|
||||
local report_file="$SECURITY_REPORTS_DIR/security-report-$(date +%Y%m%d-%H%M%S).$output_format"
|
||||
|
||||
case "$report_type" in
|
||||
"package")
|
||||
generate_package_security_report "$output_format" "$scan_level" "$report_file"
|
||||
;;
|
||||
"layer")
|
||||
generate_layer_security_report "$output_format" "$scan_level" "$report_file"
|
||||
;;
|
||||
"system")
|
||||
generate_system_security_report "$output_format" "$scan_level" "$report_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown report type: $report_type" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
log_success "Security report generated: $report_file" "apt-layer"
|
||||
log_audit_event "GENERATE_SECURITY_REPORT" "{\"type\": \"$report_type\", \"format\": \"$output_format\", \"file\": \"$report_file\"}"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Generate package security report
|
||||
generate_package_security_report() {
|
||||
local output_format="$1"
|
||||
local scan_level="$2"
|
||||
local report_file="$3"
|
||||
|
||||
case "$output_format" in
|
||||
"html")
|
||||
generate_package_html_report "$scan_level" "$report_file"
|
||||
;;
|
||||
"json")
|
||||
generate_package_json_report "$scan_level" "$report_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported output format for package report: $output_format" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Generate package HTML report
|
||||
generate_package_html_report() {
|
||||
local scan_level="$1"
|
||||
local report_file="$2"
|
||||
|
||||
cat > "$report_file" << EOF
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Package Security Report - $scan_level</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 20px; }
|
||||
.header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
|
||||
.section { margin: 20px 0; padding: 15px; border: 1px solid #ddd; border-radius: 5px; }
|
||||
.vulnerability { margin: 10px 0; padding: 10px; background-color: #f9f9f9; }
|
||||
.critical { border-left: 5px solid #f44336; }
|
||||
.high { border-left: 5px solid #ff9800; }
|
||||
.medium { border-left: 5px solid #ffc107; }
|
||||
.low { border-left: 5px solid #4CAF50; }
|
||||
table { width: 100%; border-collapse: collapse; margin: 10px 0; }
|
||||
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
|
||||
th { background-color: #f2f2f2; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>Package Security Report</h1>
|
||||
<p><strong>Scan Level:</strong> $scan_level</p>
|
||||
<p><strong>Generated:</strong> $(date -u +%Y-%m-%dT%H:%M:%SZ)</p>
|
||||
<p><strong>System:</strong> $(hostname)</p>
|
||||
</div>
|
||||
|
||||
<div class="section">
|
||||
<h2>Security Summary</h2>
|
||||
<p>This report provides a comprehensive security analysis of scanned packages.</p>
|
||||
<p>Scan level: $scan_level</p>
|
||||
</div>
|
||||
|
||||
<div class="section">
|
||||
<h2>Recommendations</h2>
|
||||
<ul>
|
||||
<li>Review all critical and high severity vulnerabilities</li>
|
||||
<li>Update packages to latest secure versions</li>
|
||||
<li>Consider alternative packages for persistent vulnerabilities</li>
|
||||
<li>Implement security policies to prevent vulnerable package installation</li>
|
||||
</ul>
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
}
|
||||
|
||||
# Generate package JSON report
|
||||
generate_package_json_report() {
|
||||
local scan_level="$1"
|
||||
local report_file="$2"
|
||||
|
||||
cat > "$report_file" << EOF
|
||||
{
|
||||
"report_type": "package_security",
|
||||
"scan_level": "$scan_level",
|
||||
"generated_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"system": "$(hostname)",
|
||||
"summary": {
|
||||
"total_packages_scanned": 0,
|
||||
"vulnerable_packages": 0,
|
||||
"critical_vulnerabilities": 0,
|
||||
"high_vulnerabilities": 0,
|
||||
"medium_vulnerabilities": 0,
|
||||
"low_vulnerabilities": 0
|
||||
},
|
||||
"packages": [],
|
||||
"recommendations": [
|
||||
"Review all critical and high severity vulnerabilities",
|
||||
"Update packages to latest secure versions",
|
||||
"Consider alternative packages for persistent vulnerabilities",
|
||||
"Implement security policies to prevent vulnerable package installation"
|
||||
]
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
# Generate layer security report
|
||||
generate_layer_security_report() {
|
||||
local output_format="$1"
|
||||
local scan_level="$2"
|
||||
local report_file="$3"
|
||||
|
||||
# Similar implementation to package report but for layers
|
||||
log_info "Layer security report generation not yet implemented" "apt-layer"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Generate system security report
|
||||
generate_system_security_report() {
|
||||
local output_format="$1"
|
||||
local scan_level="$2"
|
||||
local report_file="$3"
|
||||
|
||||
# Similar implementation to package report but for system-wide analysis
|
||||
log_info "System security report generation not yet implemented" "apt-layer"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Get security scanning status
|
||||
get_security_status() {
|
||||
log_info "Getting security scanning system status" "apt-layer"
|
||||
|
||||
echo "=== Security Scanning System Status ==="
|
||||
|
||||
# General status
|
||||
echo "General:"
|
||||
echo " Enabled: $SECURITY_ENABLED"
|
||||
echo " Scan Level: $SECURITY_SCAN_LEVEL"
|
||||
echo " Auto Scan: $SECURITY_AUTO_SCAN"
|
||||
echo " CVE Checking: $SECURITY_CVE_CHECKING"
|
||||
echo " Policy Enforcement: $SECURITY_POLICY_ENFORCEMENT"
|
||||
|
||||
# CVE database status
|
||||
echo ""
|
||||
echo "CVE Database:"
|
||||
local cve_db_file="$SECURITY_CVE_DB_DIR/cve-database.json"
|
||||
if [[ -f "$cve_db_file" ]]; then
|
||||
local last_updated
|
||||
last_updated=$(jq -r '.metadata.last_updated' "$cve_db_file" 2>/dev/null || echo "unknown")
|
||||
local total_cves
|
||||
total_cves=$(jq -r '.metadata.total_cves' "$cve_db_file" 2>/dev/null || echo "0")
|
||||
echo " Last Updated: $last_updated"
|
||||
echo " Total CVEs: $total_cves"
|
||||
else
|
||||
echo " Status: Not initialized"
|
||||
fi
|
||||
|
||||
# Scan statistics
|
||||
echo ""
|
||||
echo "Scan Statistics:"
|
||||
local cache_file="$SECURITY_CACHE_DIR/scan-cache.json"
|
||||
if [[ -f "$cache_file" ]]; then
|
||||
local package_scans
|
||||
package_scans=$(jq -r '.package_scans | keys | length' "$cache_file" 2>/dev/null || echo "0")
|
||||
local layer_scans
|
||||
layer_scans=$(jq -r '.layer_scans | keys | length' "$cache_file" 2>/dev/null || echo "0")
|
||||
echo " Cached Package Scans: $package_scans"
|
||||
echo " Cached Layer Scans: $layer_scans"
|
||||
else
|
||||
echo " Cache: Not initialized"
|
||||
fi
|
||||
|
||||
# Report statistics
|
||||
echo ""
|
||||
echo "Report Statistics:"
|
||||
local report_count
|
||||
report_count=$(find "$SECURITY_REPORTS_DIR" -name "*.html" -o -name "*.json" 2>/dev/null | wc -l || echo "0")
|
||||
echo " Total Reports: $report_count"
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Clean up old security reports
|
||||
cleanup_old_security_reports() {
|
||||
local max_age_days="${1:-90}"
|
||||
|
||||
log_info "Cleaning up security reports older than $max_age_days days" "apt-layer"
|
||||
|
||||
local removed_count=0
|
||||
|
||||
# Clean up old reports
|
||||
while IFS= read -r report_file; do
|
||||
local file_age
|
||||
file_age=$(find "$report_file" -mtime +$max_age_days 2>/dev/null | wc -l)
|
||||
|
||||
if [[ $file_age -gt 0 ]]; then
|
||||
log_info "Removing old security report: $(basename "$report_file")" "apt-layer"
|
||||
rm -f "$report_file"
|
||||
((removed_count++))
|
||||
fi
|
||||
done < <(find "$SECURITY_REPORTS_DIR" -name "*.html" -o -name "*.json" 2>/dev/null)
|
||||
|
||||
log_success "Cleaned up $removed_count old security reports" "apt-layer"
|
||||
return 0
|
||||
}
|
||||
|
||||
# =============================================================================
|
||||
# INTEGRATION FUNCTIONS
|
||||
# =============================================================================
|
||||
|
||||
# Initialize security scanning on script startup
|
||||
init_security_scanning_on_startup() {
|
||||
# Only initialize if not already done
|
||||
if [[ ! -d "$SECURITY_STATE_DIR" ]]; then
|
||||
init_security_scanning
|
||||
fi
|
||||
}
|
||||
|
||||
# Cleanup security scanning on script exit
|
||||
cleanup_security_scanning_on_exit() {
|
||||
# Clean up temporary files
|
||||
rm -f "$SECURITY_CACHE_DIR"/temp-* 2>/dev/null || true
|
||||
rm -f "$SECURITY_SCANS_DIR"/temp-* 2>/dev/null || true
|
||||
}
|
||||
|
||||
# Register cleanup function
|
||||
trap cleanup_security_scanning_on_exit EXIT
|
||||
|
|
@ -1,307 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# 14-admin-utilities.sh - Admin Utilities for Particle-OS apt-layer
|
||||
# Provides system health monitoring, performance analytics, and admin tools
|
||||
|
||||
# --- Color and Symbols ---
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
RED='\033[0;31m'
|
||||
CYAN='\033[0;36m'
|
||||
NC='\033[0m'
|
||||
CHECK="✓"
|
||||
WARN="⚠️ "
|
||||
CROSS="✗"
|
||||
INFO="ℹ️ "
|
||||
|
||||
# --- Helper: Check for WSL ---
|
||||
is_wsl() {
|
||||
grep -qi microsoft /proc/version 2>/dev/null
|
||||
}
|
||||
|
||||
get_wsl_version() {
|
||||
if is_wsl; then
|
||||
if grep -q WSL2 /proc/version 2>/dev/null; then
|
||||
echo "WSL2"
|
||||
else
|
||||
echo "WSL1"
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# --- System Health Monitoring ---
|
||||
health_check() {
|
||||
local health_status=0
|
||||
echo -e "${CYAN}================= System Health Check =================${NC}"
|
||||
echo -e "${INFO} Hostname: $(hostname 2>/dev/null || echo N/A)"
|
||||
echo -e "${INFO} Uptime: $(uptime -p 2>/dev/null || echo N/A)"
|
||||
echo -e "${INFO} Kernel: $(uname -r 2>/dev/null || echo N/A)"
|
||||
if is_wsl; then
|
||||
echo -e "${INFO} WSL: $(get_wsl_version)"
|
||||
fi
|
||||
echo -e "${INFO} Load Avg: $(awk '{print $1, $2, $3}' /proc/loadavg 2>/dev/null || echo N/A)"
|
||||
# CPU Info
|
||||
if command -v lscpu &>/dev/null; then
|
||||
cpu_model=$(lscpu | grep 'Model name' | awk -F: '{print $2}' | xargs)
|
||||
cpu_cores=$(lscpu | grep '^CPU(s):' | awk '{print $2}')
|
||||
echo -e "${INFO} CPU: $cpu_model ($cpu_cores cores)"
|
||||
else
|
||||
echo -e "${WARN} CPU: lscpu not available"
|
||||
health_status=1
|
||||
fi
|
||||
# Memory
|
||||
if command -v free &>/dev/null; then
|
||||
mem_line=$(free -m | grep Mem)
|
||||
mem_total=$(echo $mem_line | awk '{print $2}')
|
||||
mem_used=$(echo $mem_line | awk '{print $3}')
|
||||
mem_free=$(echo $mem_line | awk '{print $4}')
|
||||
mem_perc=$((100 * mem_used / mem_total))
|
||||
echo -e "${INFO} Memory: ${mem_total}MiB total, ${mem_used}MiB used (${mem_perc}%)"
|
||||
else
|
||||
echo -e "${WARN} Memory: free not available"
|
||||
health_status=1
|
||||
fi
|
||||
# Disk
|
||||
if command -v df &>/dev/null; then
|
||||
disk_root=$(df -h / | tail -1)
|
||||
disk_total=$(echo $disk_root | awk '{print $2}')
|
||||
disk_used=$(echo $disk_root | awk '{print $3}')
|
||||
disk_avail=$(echo $disk_root | awk '{print $4}')
|
||||
disk_perc=$(echo $disk_root | awk '{print $5}')
|
||||
echo -e "${INFO} Disk /: $disk_total total, $disk_used used, $disk_avail free ($disk_perc)"
|
||||
if [ -d /var/lib/particle-os ]; then
|
||||
disk_ublue=$(df -h /var/lib/particle-os 2>/dev/null | tail -1)
|
||||
if [ -n "$disk_ublue" ]; then
|
||||
ublue_total=$(echo $disk_ublue | awk '{print $2}')
|
||||
ublue_used=$(echo $disk_ublue | awk '{print $3}')
|
||||
ublue_avail=$(echo $disk_ublue | awk '{print $4}')
|
||||
ublue_perc=$(echo $disk_ublue | awk '{print $5}')
|
||||
echo -e "${INFO} Disk /var/lib/particle-os: $ublue_total total, $ublue_used used, $ublue_avail free ($ublue_perc)"
|
||||
fi
|
||||
fi
|
||||
else
|
||||
echo -e "${WARN} Disk: df not available"
|
||||
health_status=1
|
||||
fi
|
||||
# OverlayFS/ComposeFS
|
||||
overlays=$(mount | grep overlay | wc -l)
|
||||
composefs=$(mount | grep composefs | wc -l)
|
||||
echo -e "${INFO} OverlayFS: $overlays overlays mounted"
|
||||
echo -e "${INFO} ComposeFS: $composefs composefs mounted"
|
||||
# Bootloader
|
||||
if command -v bootctl &>/dev/null; then
|
||||
boot_status=$(bootctl status 2>/dev/null | grep 'System:' | xargs)
|
||||
echo -e "${INFO} Bootloader: ${boot_status:-N/A}"
|
||||
else
|
||||
echo -e "${WARN} Bootloader: bootctl not available"
|
||||
fi
|
||||
# Security
|
||||
if command -v apparmor_status &>/dev/null; then
|
||||
sec_status=$(apparmor_status | grep 'profiles are in enforce mode' || echo 'N/A')
|
||||
echo -e "${INFO} Security: $sec_status"
|
||||
else
|
||||
echo -e "${WARN} Security: apparmor_status not available"
|
||||
fi
|
||||
# Layer Integrity/Deployment
|
||||
echo -e "${CYAN}-----------------------------------------------------${NC}"
|
||||
echo -e "${INFO} Layer Integrity: [Coming soon] (future: check layer hashes)"
|
||||
echo -e "${INFO} Deployment Status: [Coming soon] (future: show active deployments)"
|
||||
# Top processes
|
||||
echo -e "${CYAN}---------------- Top 3 Processes ---------------------${NC}"
|
||||
if command -v ps &>/dev/null; then
|
||||
echo -e "${INFO} By CPU:"
|
||||
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 4 | tail -n 3 | awk '{printf " PID: %-6s %-20s CPU: %s%%\n", $1, $2, $3}'
|
||||
echo -e "${INFO} By MEM:"
|
||||
ps -eo pid,comm,%mem --sort=-%mem | head -n 4 | tail -n 3 | awk '{printf " PID: %-6s %-20s MEM: %s%%\n", $1, $2, $3}'
|
||||
else
|
||||
echo -e "${WARN} ps not available for process listing"
|
||||
fi
|
||||
echo -e "${CYAN}-----------------------------------------------------${NC}"
|
||||
# Summary
|
||||
if [ $health_status -eq 0 ]; then
|
||||
echo -e "${GREEN}${CHECK} System health: OK${NC}"
|
||||
else
|
||||
echo -e "${YELLOW}${WARN} System health: WARNING (see above)${NC}"
|
||||
fi
|
||||
echo -e "${CYAN}=====================================================${NC}"
|
||||
}
|
||||
|
||||
# --- Performance Analytics ---
|
||||
performance_report() {
|
||||
echo -e "${CYAN}=============== Performance Analytics ===============${NC}"
|
||||
echo -e "${INFO} Layer creation time (last 5): [Coming soon] (future: show timing logs)"
|
||||
echo -e "${INFO} Resource usage (CPU/mem): [Coming soon] (future: show resource stats)"
|
||||
if command -v iostat &>/dev/null; then
|
||||
echo -e "${INFO} Disk I/O stats:"
|
||||
iostat | grep -A1 Device | tail -n +2
|
||||
else
|
||||
echo -e "${WARN} Disk I/O stats: iostat not available"
|
||||
fi
|
||||
echo -e "${INFO} Historical trends: [Coming soon] (future: show trends if data available)"
|
||||
echo -e "${CYAN}=====================================================${NC}"
|
||||
}
|
||||
|
||||
# --- Automated Maintenance ---
|
||||
admin_cleanup() {
|
||||
# Defaults
|
||||
local days=30
|
||||
local dry_run=false
|
||||
local keep_recent=2
|
||||
local DEPLOYMENTS_DIR="/var/lib/particle-os/deployments"
|
||||
local LOGS_DIR="/var/log/apt-layer"
|
||||
local BACKUPS_DIR="/var/lib/particle-os/backups"
|
||||
|
||||
# Load config from JSON if available
|
||||
local config_file="$(dirname "${BASH_SOURCE[0]}")/../config/maintenance.json"
|
||||
if [ -f "$config_file" ] && command -v jq &>/dev/null; then
|
||||
days=$(jq -r '.retention_days // 30' "$config_file")
|
||||
keep_recent=$(jq -r '.keep_recent // 2' "$config_file")
|
||||
DEPLOYMENTS_DIR=$(jq -r '.deployments_dir // "/var/lib/particle-os/deployments"' "$config_file")
|
||||
LOGS_DIR=$(jq -r '.logs_dir // "/var/log/apt-layer"' "$config_file")
|
||||
BACKUPS_DIR=$(jq -r '.backups_dir // "/var/lib/particle-os/backups"' "$config_file")
|
||||
fi
|
||||
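# A matching config/maintenance.json might look like this (keys taken from the
# jq lookups above; the file and every key are optional):
#
#   {
#     "retention_days": 30,
#     "keep_recent": 2,
#     "deployments_dir": "/var/lib/particle-os/deployments",
#     "logs_dir": "/var/log/apt-layer",
#     "backups_dir": "/var/lib/particle-os/backups"
#   }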
|
||||
# Parse arguments (override config)
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--days|-d)
|
||||
days="$2"; shift 2;;
|
||||
--dry-run)
|
||||
dry_run=true; shift;;
|
||||
--keep-recent)
|
||||
keep_recent="$2"; shift 2;;
|
||||
--deployments-dir)
|
||||
DEPLOYMENTS_DIR="$2"; shift 2;;
|
||||
--logs-dir)
|
||||
LOGS_DIR="$2"; shift 2;;
|
||||
--backups-dir)
|
||||
BACKUPS_DIR="$2"; shift 2;;
|
||||
--schedule)
|
||||
echo -e "${YELLOW}${WARN} Scheduled cleanup: Not yet implemented (will use systemd/cron)${NC}"; return;;
|
||||
*)
|
||||
shift;;
|
||||
esac
|
||||
done
|
||||
|
||||
echo -e "${CYAN}--- Automated Maintenance Cleanup ---${NC}"
|
||||
echo -e "${INFO} Retention: $days days"
|
||||
echo -e "${INFO} Keep recent: $keep_recent items"
|
||||
echo -e "${INFO} Deployments dir: $DEPLOYMENTS_DIR"
|
||||
echo -e "${INFO} Logs dir: $LOGS_DIR"
|
||||
echo -e "${INFO} Backups dir: $BACKUPS_DIR"
|
||||
if [ "$dry_run" = true ]; then
|
||||
echo -e "${YELLOW}${WARN} DRY RUN MODE - No files will be deleted${NC}"
|
||||
fi
|
||||
|
||||
local total_deleted=0
|
||||
|
||||
# Helper function to cleanup directory
|
||||
cleanup_directory() {
|
||||
local dir="$1"
|
||||
local description="$2"
|
||||
local deleted_count=0
|
||||
|
||||
if [ ! -d "$dir" ]; then
|
||||
echo -e "${INFO} $description: Directory does not exist, skipping"
|
||||
return
|
||||
fi
|
||||
|
||||
echo -e "${INFO} $description: Scanning $dir"
|
||||
|
||||
# Get list of files/directories older than retention period
|
||||
local old_items=()
|
||||
if command -v find &>/dev/null; then
|
||||
while IFS= read -r -d '' item; do
|
||||
old_items+=("$item")
|
||||
done < <(find "$dir" -maxdepth 1 -type f -o -type d -mtime +$days -print0 2>/dev/null)
|
||||
fi
|
||||
|
||||
# Remove the most recent items from deletion list
|
||||
if [ ${#old_items[@]} -gt 0 ] && [ $keep_recent -gt 0 ]; then
|
||||
# Sort by modification time (newest first) and keep the most recent
|
||||
local sorted_items=($(printf '%s\n' "${old_items[@]}" | xargs -I {} stat -c '%Y %n' {} 2>/dev/null | sort -nr | tail -n +$((keep_recent + 1)) | awk '{print $2}'))
|
||||
old_items=("${sorted_items[@]}")
|
||||
fi
|
||||
|
||||
if [ ${#old_items[@]} -eq 0 ]; then
|
||||
echo -e "${INFO} $description: No items to delete"
|
||||
return
|
||||
fi
|
||||
|
||||
echo -e "${INFO} $description: Found ${#old_items[@]} items to delete"
|
||||
|
||||
for item in "${old_items[@]}"; do
|
||||
if [ "$dry_run" = true ]; then
|
||||
echo -e " ${YELLOW}Would delete: $item${NC}"
|
||||
else
|
||||
if rm -rf "$item" 2>/dev/null; then
|
||||
echo -e " ${GREEN}Deleted: $item${NC}"
|
||||
((deleted_count++))
|
||||
else
|
||||
echo -e " ${RED}Failed to delete: $item${NC}"
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
if [ "$dry_run" = false ]; then
|
||||
total_deleted=$((total_deleted + deleted_count))
|
||||
fi
|
||||
}
|
||||
|
||||
# Cleanup each directory
|
||||
cleanup_directory "$DEPLOYMENTS_DIR" "Deployments"
|
||||
cleanup_directory "$LOGS_DIR" "Logs"
|
||||
cleanup_directory "$BACKUPS_DIR" "Backups"
|
||||
|
||||
# Summary
|
||||
if [ "$dry_run" = true ]; then
|
||||
echo -e "${YELLOW}${WARN} Dry run completed - no files were deleted${NC}"
|
||||
else
|
||||
echo -e "${GREEN}${CHECK} Cleanup complete - $total_deleted items deleted${NC}"
|
||||
fi
|
||||
echo -e "${CYAN}-------------------------------------${NC}"
|
||||
}
|
||||
|
||||
# --- Backup/Restore (Stub) ---
|
||||
admin_backup() {
|
||||
echo -e "${YELLOW}${WARN} Backup: Not yet implemented${NC}"
|
||||
}
|
||||
|
||||
admin_restore() {
|
||||
echo -e "${YELLOW}${WARN} Restore: Not yet implemented${NC}"
|
||||
}
|
||||
|
||||
# --- Command Dispatch ---
|
||||
admin_utilities_main() {
|
||||
case "${1:-}" in
|
||||
health|health-check)
|
||||
health_check
|
||||
;;
|
||||
perf|performance|analytics)
|
||||
performance_report
|
||||
;;
|
||||
cleanup)
|
||||
shift
|
||||
admin_cleanup "$@"
|
||||
;;
|
||||
backup)
|
||||
admin_backup
|
||||
;;
|
||||
restore)
|
||||
admin_restore
|
||||
;;
|
||||
help|--help|-h|"")
|
||||
echo -e "${CYAN}Admin Utilities Commands:${NC}"
|
||||
echo -e " ${GREEN}health${NC} - System health check"
|
||||
echo -e " ${GREEN}perf${NC} - Performance analytics"
|
||||
echo -e " ${GREEN}cleanup${NC} - Maintenance cleanup (--days N, --dry-run, --keep-recent N)"
|
||||
echo -e " ${GREEN}backup${NC} - Backup configs/layers (stub)"
|
||||
echo -e " ${GREEN}restore${NC} - Restore from backup (stub)"
|
||||
echo -e " ${GREEN}help${NC} - Show this help message"
|
||||
;;
|
||||
*)
|
||||
echo -e "${RED}${CROSS} Unknown admin command: $1${NC}"
|
||||
admin_utilities_main help
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
|
@ -1,641 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Multi-Tenant Support for apt-layer
|
||||
# Enables enterprise deployments with multiple organizations, departments, or environments
|
||||
# Provides tenant isolation, resource quotas, and cross-tenant management
|
||||
|
||||
# Multi-tenant configuration
|
||||
MULTI_TENANT_ENABLED="${MULTI_TENANT_ENABLED:-false}"
|
||||
TENANT_ISOLATION_LEVEL="${TENANT_ISOLATION_LEVEL:-strict}" # strict, moderate, permissive
|
||||
TENANT_RESOURCE_QUOTAS="${TENANT_RESOURCE_QUOTAS:-true}"
|
||||
TENANT_CROSS_ACCESS="${TENANT_CROSS_ACCESS:-false}"
|
||||
|
||||
# Tenant management functions
|
||||
init_multi_tenant_system() {
|
||||
log_info "Initializing multi-tenant system..." "multi-tenant"
|
||||
|
||||
# Create tenant directories
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
mkdir -p "$tenant_base"
|
||||
mkdir -p "$tenant_base/shared"
|
||||
mkdir -p "$tenant_base/templates"
|
||||
|
||||
# Initialize tenant database
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
if [[ ! -f "$tenant_db" ]]; then
|
||||
cat > "$tenant_db" << 'EOF'
|
||||
{
|
||||
"tenants": [],
|
||||
"policies": {
|
||||
"default_isolation": "strict",
|
||||
"default_quotas": {
|
||||
"max_layers": 100,
|
||||
"max_storage_gb": 50,
|
||||
"max_users": 10
|
||||
},
|
||||
"cross_tenant_access": false
|
||||
},
|
||||
"metadata": {
|
||||
"created": "",
|
||||
"version": "1.0"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
# Set creation timestamp
|
||||
jq --arg created "$(date -Iseconds)" '.metadata.created = $created' "$tenant_db" > "$tenant_db.tmp" && mv "$tenant_db.tmp" "$tenant_db"
|
||||
fi
|
||||
|
||||
log_success "Multi-tenant system initialized" "multi-tenant"
|
||||
}
|
||||
|
||||
# Tenant creation and management
|
||||
create_tenant() {
|
||||
local tenant_name="$1"
|
||||
local tenant_config="$2"
|
||||
|
||||
if [[ -z "$tenant_name" ]]; then
|
||||
log_error "Tenant name is required" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Validate tenant name
|
||||
if [[ ! "$tenant_name" =~ ^[a-zA-Z0-9_-]+$ ]]; then
|
||||
log_error "Invalid tenant name: $tenant_name (use alphanumeric, underscore, hyphen only)" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
local tenant_dir="$tenant_base/$tenant_name"
|
||||
|
||||
# Check if tenant already exists
|
||||
if jq -e ".tenants[] | select(.name == \"$tenant_name\")" "$tenant_db" > /dev/null 2>&1; then
|
||||
log_error "Tenant '$tenant_name' already exists" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Create tenant directory structure
|
||||
mkdir -p "$tenant_dir"
|
||||
mkdir -p "$tenant_dir/layers"
|
||||
mkdir -p "$tenant_dir/deployments"
|
||||
mkdir -p "$tenant_dir/users"
|
||||
mkdir -p "$tenant_dir/audit"
|
||||
mkdir -p "$tenant_dir/backups"
|
||||
mkdir -p "$tenant_dir/config"
|
||||
|
||||
# Create tenant configuration
|
||||
local tenant_config_file="$tenant_dir/config/tenant.json"
|
||||
cat > "$tenant_config_file" << EOF
|
||||
{
|
||||
"name": "$tenant_name",
|
||||
"created": "$(date -Iseconds)",
|
||||
"status": "active",
|
||||
"isolation_level": "$TENANT_ISOLATION_LEVEL",
|
||||
"quotas": {
|
||||
"max_layers": 100,
|
||||
"max_storage_gb": 50,
|
||||
"max_users": 10,
|
||||
"used_layers": 0,
|
||||
"used_storage_gb": 0,
|
||||
"used_users": 0
|
||||
},
|
||||
"policies": {
|
||||
"allowed_packages": [],
|
||||
"blocked_packages": [],
|
||||
"security_level": "standard",
|
||||
"audit_retention_days": 90
|
||||
},
|
||||
"integrations": {
|
||||
"oci_registries": [],
|
||||
"external_audit": null,
|
||||
"monitoring": null
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# Merge custom configuration if provided
|
||||
if [[ -n "$tenant_config" && -f "$tenant_config" ]]; then
|
||||
if jq empty "$tenant_config" 2>/dev/null; then
|
||||
jq -s '.[0] * .[1]' "$tenant_config_file" "$tenant_config" > "$tenant_config_file.tmp" && mv "$tenant_config_file.tmp" "$tenant_config_file"
|
||||
else
|
||||
log_warning "Invalid JSON in tenant configuration, using defaults" "multi-tenant"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Add tenant to database
|
||||
local tenant_info
|
||||
tenant_info=$(jq -r '.' "$tenant_config_file")
|
||||
jq --arg name "$tenant_name" --argjson info "$tenant_info" '.tenants += [$info]' "$tenant_db" > "$tenant_db.tmp" && mv "$tenant_db.tmp" "$tenant_db"
|
||||
|
||||
log_success "Tenant '$tenant_name' created successfully" "multi-tenant"
|
||||
log_info "Tenant directory: $tenant_dir" "multi-tenant"
|
||||
}
|
||||
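# Example usage (hypothetical tenant name and override path): create a tenant
# with a raised layer quota by merging a small override file into the defaults:
#
#   cat > /tmp/engineering-overrides.json << 'EOC'
#   { "quotas": { "max_layers": 250 } }
#   EOC
#   create_tenant "engineering" /tmp/engineering-overrides.json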
|
||||
# Tenant deletion
|
||||
delete_tenant() {
|
||||
local tenant_name="$1"
|
||||
local force="${2:-false}"
|
||||
|
||||
if [[ -z "$tenant_name" ]]; then
|
||||
log_error "Tenant name is required" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
local tenant_dir="$tenant_base/$tenant_name"
|
||||
|
||||
# Check if tenant exists
|
||||
if ! jq -e ".tenants[] | select(.name == \"$tenant_name\")" "$tenant_db" > /dev/null 2>&1; then
|
||||
log_error "Tenant '$tenant_name' does not exist" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check for active resources
|
||||
local active_layers=0
|
||||
local active_deployments=0
|
||||
|
||||
if [[ -d "$tenant_dir/layers" ]]; then
|
||||
active_layers=$(find "$tenant_dir/layers" -name "*.squashfs" 2>/dev/null | wc -l)
|
||||
fi
|
||||
|
||||
if [[ -d "$tenant_dir/deployments" ]]; then
|
||||
active_deployments=$(find "$tenant_dir/deployments" -name "*.json" 2>/dev/null | wc -l)
|
||||
fi
|
||||
|
||||
if [[ $active_layers -gt 0 || $active_deployments -gt 0 ]]; then
|
||||
if [[ "$force" != "true" ]]; then
|
||||
log_error "Tenant '$tenant_name' has active resources ($active_layers layers, $active_deployments deployments)" "multi-tenant"
|
||||
log_error "Use --force to delete anyway" "multi-tenant"
|
||||
return 1
|
||||
else
|
||||
log_warning "Force deleting tenant with active resources" "multi-tenant"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Remove from database
|
||||
jq --arg name "$tenant_name" 'del(.tenants[] | select(.name == $name))' "$tenant_db" > "$tenant_db.tmp" && mv "$tenant_db.tmp" "$tenant_db"
|
||||
|
||||
# Remove tenant directory
|
||||
if [[ -d "$tenant_dir" ]]; then
|
||||
rm -rf "$tenant_dir"
|
||||
fi
|
||||
|
||||
log_success "Tenant '$tenant_name' deleted successfully" "multi-tenant"
|
||||
}
|
||||
|
||||
# Tenant listing and information
|
||||
list_tenants() {
|
||||
local format="${1:-table}"
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
if [[ ! -f "$tenant_db" ]]; then
|
||||
log_error "Tenant database not found" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
jq -r '.' "$tenant_db"
|
||||
;;
|
||||
"csv")
|
||||
echo "name,status,created,layers,storage_gb,users"
|
||||
jq -r '.tenants[] | [.name, .status, .created, .quotas.used_layers, .quotas.used_storage_gb, .quotas.used_users] | @csv' "$tenant_db"
|
||||
;;
|
||||
"table"|*)
|
||||
echo "Tenants:"
|
||||
echo "========"
|
||||
jq -r '.tenants[] | "\(.name) (\(.status)) - Layers: \(.quotas.used_layers)/\(.quotas.max_layers), Storage: \(.quotas.used_storage_gb)GB/\(.quotas.max_storage_gb)GB"' "$tenant_db"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Tenant information
|
||||
get_tenant_info() {
|
||||
local tenant_name="$1"
|
||||
local format="${2:-json}"
|
||||
|
||||
if [[ -z "$tenant_name" ]]; then
|
||||
log_error "Tenant name is required" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
local tenant_info
|
||||
tenant_info=$(jq -r ".tenants[] | select(.name == \"$tenant_name\")" "$tenant_db" 2>/dev/null)
|
||||
|
||||
if [[ -z "$tenant_info" ]]; then
|
||||
log_error "Tenant '$tenant_name' not found" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
echo "$tenant_info"
|
||||
;;
|
||||
"yaml")
|
||||
echo "$tenant_info" | jq -r '.' | sed 's/^/ /'
|
||||
;;
|
||||
"summary")
|
||||
local name status created layers storage users
|
||||
name=$(echo "$tenant_info" | jq -r '.name')
|
||||
status=$(echo "$tenant_info" | jq -r '.status')
|
||||
created=$(echo "$tenant_info" | jq -r '.created')
|
||||
layers=$(echo "$tenant_info" | jq -r '.quotas.used_layers')
|
||||
storage=$(echo "$tenant_info" | jq -r '.quotas.used_storage_gb')
|
||||
users=$(echo "$tenant_info" | jq -r '.quotas.used_users')
|
||||
|
||||
echo "Tenant: $name"
|
||||
echo "Status: $status"
|
||||
echo "Created: $created"
|
||||
echo "Resources: $layers layers, ${storage}GB storage, $users users"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Tenant quota management
|
||||
update_tenant_quotas() {
|
||||
local tenant_name="$1"
|
||||
local quota_type="$2"
|
||||
local value="$3"
|
||||
|
||||
if [[ -z "$tenant_name" || -z "$quota_type" || -z "$value" ]]; then
|
||||
log_error "Usage: update_tenant_quotas <tenant> <quota_type> <value>" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
# Validate quota type
|
||||
case "$quota_type" in
|
||||
"max_layers"|"max_storage_gb"|"max_users")
|
||||
;;
|
||||
*)
|
||||
log_error "Invalid quota type: $quota_type" "multi-tenant"
|
||||
log_error "Valid types: max_layers, max_storage_gb, max_users" "multi-tenant"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
# Update quota
|
||||
jq --arg name "$tenant_name" --arg type "$quota_type" --arg value "$value" \
|
||||
'(.tenants[] | select(.name == $name) | .quotas[$type]) = ($value | tonumber)' "$tenant_db" > "$tenant_db.tmp" && mv "$tenant_db.tmp" "$tenant_db"
|
||||
|
||||
log_success "Updated quota for tenant '$tenant_name': $quota_type = $value" "multi-tenant"
|
||||
}
|
||||
|
||||
# Tenant isolation and access control
|
||||
check_tenant_access() {
|
||||
local tenant_name="$1"
|
||||
local user="$2"
|
||||
local operation="$3"
|
||||
|
||||
if [[ -z "$tenant_name" || -z "$user" || -z "$operation" ]]; then
|
||||
log_error "Usage: check_tenant_access <tenant> <user> <operation>" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
# Check if tenant exists
|
||||
if ! jq -e ".tenants[] | select(.name == \"$tenant_name\")" "$tenant_db" > /dev/null 2>&1; then
|
||||
log_error "Tenant '$tenant_name' not found" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Get tenant isolation level
|
||||
local isolation_level
|
||||
isolation_level=$(jq -r ".tenants[] | select(.name == \"$tenant_name\") | .isolation_level" "$tenant_db")
|
||||
|
||||
# Check user access (simplified: the role is read from a per-user JSON file; a full implementation would integrate with a central identity provider)
|
||||
local user_file="$tenant_base/$tenant_name/users/$user.json"
|
||||
if [[ ! -f "$user_file" ]]; then
|
||||
log_error "User '$user' not found in tenant '$tenant_name'" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check operation permissions
|
||||
local user_role
|
||||
user_role=$(jq -r '.role' "$user_file" 2>/dev/null)
|
||||
|
||||
case "$operation" in
|
||||
"read")
|
||||
[[ "$user_role" =~ ^(admin|package_manager|viewer)$ ]] && return 0
|
||||
;;
|
||||
"write")
|
||||
[[ "$user_role" =~ ^(admin|package_manager)$ ]] && return 0
|
||||
;;
|
||||
"admin")
|
||||
[[ "$user_role" == "admin" ]] && return 0
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown operation: $operation" "multi-tenant"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
log_error "Access denied: User '$user' with role '$user_role' cannot perform '$operation' operation" "multi-tenant"
|
||||
return 1
|
||||
}
|
||||
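# Example (hypothetical tenant and user names): a user record such as
# ${WORKSPACE}/tenants/acme/users/alice.json containing {"role": "package_manager"}
# passes read and write checks but fails the admin check:
#
#   check_tenant_access "acme" "alice" "write"   # returns 0
#   check_tenant_access "acme" "alice" "admin"   # returns 1 (access denied)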
|
||||
# Tenant resource usage tracking
|
||||
update_tenant_usage() {
|
||||
local tenant_name="$1"
|
||||
local resource_type="$2"
|
||||
local amount="$3"
|
||||
|
||||
if [[ -z "$tenant_name" || -z "$resource_type" || -z "$amount" ]]; then
|
||||
log_error "Usage: update_tenant_usage <tenant> <resource_type> <amount>" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
# Update usage
|
||||
jq --arg name "$tenant_name" --arg type "$resource_type" --arg amount "$amount" \
|
||||
'.tenants[] | select(.name == $name) | .quotas["used_" + $type] = (.quotas["used_" + $type] + ($amount | tonumber))' "$tenant_db" > "$tenant_db.tmp" && mv "$tenant_db.tmp" "$tenant_db"
|
||||
|
||||
log_debug "Updated usage for tenant '$tenant_name': $resource_type += $amount" "multi-tenant"
|
||||
}
|
||||
|
||||
# Tenant quota enforcement
|
||||
enforce_tenant_quotas() {
|
||||
local tenant_name="$1"
|
||||
local resource_type="$2"
|
||||
local requested_amount="$3"
|
||||
|
||||
if [[ -z "$tenant_name" || -z "$resource_type" || -z "$requested_amount" ]]; then
|
||||
log_error "Usage: enforce_tenant_quotas <tenant> <resource_type> <amount>" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
# Get current usage and quota
|
||||
local current_usage max_quota
|
||||
current_usage=$(jq -r ".tenants[] | select(.name == \"$tenant_name\") | .quotas.used_$resource_type" "$tenant_db")
|
||||
max_quota=$(jq -r ".tenants[] | select(.name == \"$tenant_name\") | .quotas.max_$resource_type" "$tenant_db")
|
||||
|
||||
# Check if request would exceed quota
|
||||
local new_total=$((current_usage + requested_amount))
|
||||
if [[ $new_total -gt $max_quota ]]; then
|
||||
log_error "Quota exceeded for tenant '$tenant_name': $resource_type" "multi-tenant"
|
||||
log_error "Current: $current_usage, Requested: $requested_amount, Max: $max_quota" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
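# Example (hypothetical tenant name): reserve 5 GB of storage before building a
# layer, and record the usage only if the quota check passes:
#
#   if enforce_tenant_quotas "acme" "storage_gb" 5; then
#       update_tenant_usage "acme" "storage_gb" 5
#   fi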
|
||||
# Cross-tenant operations (when enabled)
|
||||
cross_tenant_operation() {
|
||||
local source_tenant="$1"
|
||||
local target_tenant="$2"
|
||||
local operation="$3"
|
||||
local user="$4"
|
||||
|
||||
if [[ "$TENANT_CROSS_ACCESS" != "true" ]]; then
|
||||
log_error "Cross-tenant operations are disabled" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ -z "$source_tenant" || -z "$target_tenant" || -z "$operation" || -z "$user" ]]; then
|
||||
log_error "Usage: cross_tenant_operation <source> <target> <operation> <user>" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check user has admin access to both tenants
|
||||
if ! check_tenant_access "$source_tenant" "$user" "admin"; then
|
||||
log_error "User '$user' lacks admin access to source tenant '$source_tenant'" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if ! check_tenant_access "$target_tenant" "$user" "admin"; then
|
||||
log_error "User '$user' lacks admin access to target tenant '$target_tenant'" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Cross-tenant operation: $operation from '$source_tenant' to '$target_tenant' by '$user'" "multi-tenant"
|
||||
|
||||
# Implement specific cross-tenant operations here
|
||||
case "$operation" in
|
||||
"copy_layer")
|
||||
# Copy layer from source to target tenant
|
||||
log_info "Copying layer between tenants..." "multi-tenant"
|
||||
;;
|
||||
"sync_config")
|
||||
# Sync configuration between tenants
|
||||
log_info "Syncing configuration between tenants..." "multi-tenant"
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown cross-tenant operation: $operation" "multi-tenant"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Tenant backup and restore
|
||||
backup_tenant() {
|
||||
local tenant_name="$1"
|
||||
local backup_path="$2"
|
||||
|
||||
if [[ -z "$tenant_name" ]]; then
|
||||
log_error "Tenant name is required" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_dir="$tenant_base/$tenant_name"
|
||||
|
||||
if [[ ! -d "$tenant_dir" ]]; then
|
||||
log_error "Tenant directory not found: $tenant_dir" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Create backup
|
||||
local backup_file
|
||||
if [[ -n "$backup_path" ]]; then
|
||||
backup_file="$backup_path"
|
||||
else
|
||||
backup_file="$tenant_dir/backups/tenant-${tenant_name}-$(date +%Y%m%d-%H%M%S).tar.gz"
|
||||
fi
|
||||
|
||||
mkdir -p "$(dirname "$backup_file")"
|
||||
|
||||
tar -czf "$backup_file" -C "$tenant_base" "$tenant_name"
|
||||
|
||||
log_success "Tenant '$tenant_name' backed up to: $backup_file" "multi-tenant"
|
||||
}
|
||||
|
||||
restore_tenant() {
|
||||
local backup_file="$1"
|
||||
local tenant_name="$2"
|
||||
|
||||
if [[ -z "$backup_file" || -z "$tenant_name" ]]; then
|
||||
log_error "Usage: restore_tenant <backup_file> <tenant_name>" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$backup_file" ]]; then
|
||||
log_error "Backup file not found: $backup_file" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_dir="$tenant_base/$tenant_name"
|
||||
|
||||
# Check if tenant already exists
|
||||
if [[ -d "$tenant_dir" ]]; then
|
||||
log_error "Tenant '$tenant_name' already exists. Delete it first or use a different name." "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Restore tenant
|
||||
tar -xzf "$backup_file" -C "$tenant_base"
|
||||
|
||||
log_success "Tenant '$tenant_name' restored from: $backup_file" "multi-tenant"
|
||||
}
|
||||
|
||||
# Tenant health check
|
||||
check_tenant_health() {
|
||||
local tenant_name="$1"
|
||||
|
||||
if [[ -z "$tenant_name" ]]; then
|
||||
log_error "Tenant name is required" "multi-tenant"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local tenant_base="${WORKSPACE}/tenants"
|
||||
local tenant_dir="$tenant_base/$tenant_name"
|
||||
local tenant_db="$tenant_base/tenants.json"
|
||||
|
||||
echo "Tenant Health Check: $tenant_name"
|
||||
echo "================================"
|
||||
|
||||
# Check tenant exists
|
||||
if [[ ! -d "$tenant_dir" ]]; then
|
||||
echo "â Tenant directory not found"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if ! jq -e ".tenants[] | select(.name == \"$tenant_name\")" "$tenant_db" > /dev/null 2>&1; then
|
||||
echo "â Tenant not found in database"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "â Tenant exists"
|
||||
|
||||
# Check directory structure
|
||||
local missing_dirs=()
|
||||
for dir in layers deployments users audit backups config; do
|
||||
if [[ ! -d "$tenant_dir/$dir" ]]; then
|
||||
missing_dirs+=("$dir")
|
||||
fi
|
||||
done
|
||||
|
||||
if [[ ${#missing_dirs[@]} -gt 0 ]]; then
|
||||
echo "â ï¸ Missing directories: ${missing_dirs[*]}"
|
||||
else
|
||||
echo "â Directory structure complete"
|
||||
fi
|
||||
|
||||
# Check quota usage
|
||||
local tenant_info
|
||||
tenant_info=$(jq -r ".tenants[] | select(.name == \"$tenant_name\")" "$tenant_db")
|
||||
|
||||
local layers_used layers_max storage_used storage_max
|
||||
layers_used=$(echo "$tenant_info" | jq -r '.quotas.used_layers')
|
||||
layers_max=$(echo "$tenant_info" | jq -r '.quotas.max_layers')
|
||||
storage_used=$(echo "$tenant_info" | jq -r '.quotas.used_storage_gb')
|
||||
storage_max=$(echo "$tenant_info" | jq -r '.quotas.max_storage_gb')
|
||||
|
||||
echo "ð Resource Usage:"
|
||||
echo " Layers: $layers_used/$layers_max"
|
||||
echo " Storage: ${storage_used}GB/${storage_max}GB"
|
||||
|
||||
# Check for quota warnings
|
||||
local layer_percent=$((layers_used * 100 / layers_max))
|
||||
local storage_percent=$((storage_used * 100 / storage_max))
|
||||
|
||||
if [[ $layer_percent -gt 80 ]]; then
|
||||
echo "â ï¸ Layer quota usage high: ${layer_percent}%"
|
||||
fi
|
||||
|
||||
if [[ $storage_percent -gt 80 ]]; then
|
||||
echo "â ï¸ Storage quota usage high: ${storage_percent}%"
|
||||
fi
|
||||
|
||||
echo "â Tenant health check complete"
|
||||
}
|
||||
|
||||
# Multi-tenant command handler
|
||||
handle_multi_tenant_command() {
|
||||
local command="$1"
|
||||
shift
|
||||
|
||||
case "$command" in
|
||||
"init")
|
||||
init_multi_tenant_system
|
||||
;;
|
||||
"create")
|
||||
local tenant_name="$1"
|
||||
local config_file="$2"
|
||||
create_tenant "$tenant_name" "$config_file"
|
||||
;;
|
||||
"delete")
|
||||
local tenant_name="$1"
|
||||
local force="$2"
|
||||
delete_tenant "$tenant_name" "$force"
|
||||
;;
|
||||
"list")
|
||||
local format="$1"
|
||||
list_tenants "$format"
|
||||
;;
|
||||
"info")
|
||||
local tenant_name="$1"
|
||||
local format="$2"
|
||||
get_tenant_info "$tenant_name" "$format"
|
||||
;;
|
||||
"quota")
|
||||
local tenant_name="$1"
|
||||
local quota_type="$2"
|
||||
local value="$3"
|
||||
update_tenant_quotas "$tenant_name" "$quota_type" "$value"
|
||||
;;
|
||||
"backup")
|
||||
local tenant_name="$1"
|
||||
local backup_path="$2"
|
||||
backup_tenant "$tenant_name" "$backup_path"
|
||||
;;
|
||||
"restore")
|
||||
local backup_file="$1"
|
||||
local tenant_name="$2"
|
||||
restore_tenant "$backup_file" "$tenant_name"
|
||||
;;
|
||||
"health")
|
||||
local tenant_name="$1"
|
||||
check_tenant_health "$tenant_name"
|
||||
;;
|
||||
"help"|*)
|
||||
echo "Multi-Tenant Commands:"
|
||||
echo "====================="
|
||||
echo " init - Initialize multi-tenant system"
|
||||
echo " create <tenant> [config_file] - Create new tenant"
|
||||
echo " delete <tenant> [--force] - Delete tenant"
|
||||
echo " list [format] - List tenants (json|csv|table)"
|
||||
echo " info <tenant> [format] - Get tenant info (json|yaml|summary)"
|
||||
echo " quota <tenant> <type> <value> - Update tenant quota"
|
||||
echo " backup <tenant> [path] - Backup tenant"
|
||||
echo " restore <backup_file> <tenant> - Restore tenant"
|
||||
echo " health <tenant> - Check tenant health"
|
||||
echo " help - Show this help"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
|
@ -1,887 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Advanced Compliance Frameworks for apt-layer
|
||||
# Provides comprehensive compliance capabilities for enterprise deployments
|
||||
# Supports multiple compliance standards with automated reporting and validation
|
||||
|
||||
# Compliance framework configuration
|
||||
COMPLIANCE_ENABLED="${COMPLIANCE_ENABLED:-true}"
|
||||
COMPLIANCE_LEVEL="${COMPLIANCE_LEVEL:-enterprise}" # basic, enterprise, strict
|
||||
COMPLIANCE_AUTO_SCAN="${COMPLIANCE_AUTO_SCAN:-true}"
|
||||
COMPLIANCE_REPORTING="${COMPLIANCE_REPORTING:-true}"
|
||||
|
||||
# Supported compliance frameworks
|
||||
SUPPORTED_FRAMEWORKS=(
|
||||
"SOX" # Sarbanes-Oxley Act
|
||||
"PCI-DSS" # Payment Card Industry Data Security Standard
|
||||
"HIPAA" # Health Insurance Portability and Accountability Act
|
||||
"GDPR" # General Data Protection Regulation
|
||||
"ISO-27001" # Information Security Management
|
||||
"NIST-CSF" # NIST Cybersecurity Framework
|
||||
"CIS" # Center for Internet Security Controls
|
||||
"FEDRAMP" # Federal Risk and Authorization Management Program
|
||||
"SOC-2" # Service Organization Control 2
|
||||
"CMMC" # Cybersecurity Maturity Model Certification
|
||||
)
|
||||
|
||||
# Compliance framework initialization
|
||||
init_compliance_frameworks() {
|
||||
log_info "Initializing advanced compliance frameworks..." "compliance"
|
||||
|
||||
# Create compliance directories
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
mkdir -p "$compliance_base"
|
||||
mkdir -p "$compliance_base/frameworks"
|
||||
mkdir -p "$compliance_base/reports"
|
||||
mkdir -p "$compliance_base/templates"
|
||||
mkdir -p "$compliance_base/evidence"
|
||||
mkdir -p "$compliance_base/controls"
|
||||
|
||||
# Initialize compliance database
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
if [[ ! -f "$compliance_db" ]]; then
|
||||
cat > "$compliance_db" << 'EOF'
|
||||
{
|
||||
"frameworks": {},
|
||||
"controls": {},
|
||||
"evidence": {},
|
||||
"reports": {},
|
||||
"metadata": {
|
||||
"created": "",
|
||||
"version": "1.0",
|
||||
"last_scan": null
|
||||
}
|
||||
}
|
||||
EOF
|
||||
# Set creation timestamp
|
||||
jq --arg created "$(date -Iseconds)" '.metadata.created = $created' "$compliance_db" > "$compliance_db.tmp" && mv "$compliance_db.tmp" "$compliance_db"
|
||||
fi
|
||||
|
||||
# Initialize framework templates
|
||||
init_framework_templates
|
||||
|
||||
log_success "Advanced compliance frameworks initialized" "compliance"
|
||||
}
|
||||
|
||||
# Initialize framework templates
|
||||
init_framework_templates() {
|
||||
local templates_dir="${WORKSPACE}/compliance/templates"
|
||||
|
||||
# SOX Template
|
||||
cat > "$templates_dir/sox.json" << 'EOF'
|
||||
{
|
||||
"name": "SOX",
|
||||
"version": "2024",
|
||||
"description": "Sarbanes-Oxley Act Compliance",
|
||||
"controls": {
|
||||
"SOX-001": {
|
||||
"title": "Access Control",
|
||||
"description": "Ensure proper access controls are in place",
|
||||
"category": "Access Management",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"User authentication and authorization",
|
||||
"Role-based access control",
|
||||
"Access logging and monitoring"
|
||||
]
|
||||
},
|
||||
"SOX-002": {
|
||||
"title": "Change Management",
|
||||
"description": "Implement proper change management procedures",
|
||||
"category": "Change Management",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Change approval process",
|
||||
"Change documentation",
|
||||
"Change testing and validation"
|
||||
]
|
||||
},
|
||||
"SOX-003": {
|
||||
"title": "Data Integrity",
|
||||
"description": "Ensure data integrity and accuracy",
|
||||
"category": "Data Management",
|
||||
"severity": "critical",
|
||||
"requirements": [
|
||||
"Data validation",
|
||||
"Backup and recovery",
|
||||
"Audit trails"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# PCI-DSS Template
|
||||
cat > "$templates_dir/pci-dss.json" << 'EOF'
|
||||
{
|
||||
"name": "PCI-DSS",
|
||||
"version": "4.0",
|
||||
"description": "Payment Card Industry Data Security Standard",
|
||||
"controls": {
|
||||
"PCI-001": {
|
||||
"title": "Build and Maintain a Secure Network",
|
||||
"description": "Install and maintain a firewall configuration",
|
||||
"category": "Network Security",
|
||||
"severity": "critical",
|
||||
"requirements": [
|
||||
"Firewall configuration",
|
||||
"Network segmentation",
|
||||
"Security testing"
|
||||
]
|
||||
},
|
||||
"PCI-002": {
|
||||
"title": "Protect Cardholder Data",
|
||||
"description": "Protect stored cardholder data",
|
||||
"category": "Data Protection",
|
||||
"severity": "critical",
|
||||
"requirements": [
|
||||
"Data encryption",
|
||||
"Key management",
|
||||
"Data retention policies"
|
||||
]
|
||||
},
|
||||
"PCI-003": {
|
||||
"title": "Maintain Vulnerability Management",
|
||||
"description": "Use and regularly update anti-virus software",
|
||||
"category": "Vulnerability Management",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Anti-virus software",
|
||||
"Vulnerability scanning",
|
||||
"Patch management"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# HIPAA Template
|
||||
cat > "$templates_dir/hipaa.json" << 'EOF'
|
||||
{
|
||||
"name": "HIPAA",
|
||||
"version": "2024",
|
||||
"description": "Health Insurance Portability and Accountability Act",
|
||||
"controls": {
|
||||
"HIPAA-001": {
|
||||
"title": "Administrative Safeguards",
|
||||
"description": "Implement administrative safeguards for PHI",
|
||||
"category": "Administrative",
|
||||
"severity": "critical",
|
||||
"requirements": [
|
||||
"Security officer designation",
|
||||
"Workforce training",
|
||||
"Incident response procedures"
|
||||
]
|
||||
},
|
||||
"HIPAA-002": {
|
||||
"title": "Physical Safeguards",
|
||||
"description": "Implement physical safeguards for PHI",
|
||||
"category": "Physical",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Facility access controls",
|
||||
"Workstation security",
|
||||
"Device and media controls"
|
||||
]
|
||||
},
|
||||
"HIPAA-003": {
|
||||
"title": "Technical Safeguards",
|
||||
"description": "Implement technical safeguards for PHI",
|
||||
"category": "Technical",
|
||||
"severity": "critical",
|
||||
"requirements": [
|
||||
"Access control",
|
||||
"Audit controls",
|
||||
"Transmission security"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# GDPR Template
|
||||
cat > "$templates_dir/gdpr.json" << 'EOF'
|
||||
{
|
||||
"name": "GDPR",
|
||||
"version": "2018",
|
||||
"description": "General Data Protection Regulation",
|
||||
"controls": {
|
||||
"GDPR-001": {
|
||||
"title": "Data Protection by Design",
|
||||
"description": "Implement data protection by design and by default",
|
||||
"category": "Privacy by Design",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Privacy impact assessments",
|
||||
"Data minimization",
|
||||
"Default privacy settings"
|
||||
]
|
||||
},
|
||||
"GDPR-002": {
|
||||
"title": "Data Subject Rights",
|
||||
"description": "Ensure data subject rights are protected",
|
||||
"category": "Data Subject Rights",
|
||||
"severity": "critical",
|
||||
"requirements": [
|
||||
"Right to access",
|
||||
"Right to rectification",
|
||||
"Right to erasure"
|
||||
]
|
||||
},
|
||||
"GDPR-003": {
|
||||
"title": "Data Breach Notification",
|
||||
"description": "Implement data breach notification procedures",
|
||||
"category": "Incident Response",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Breach detection",
|
||||
"Notification procedures",
|
||||
"Documentation requirements"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# ISO-27001 Template
|
||||
cat > "$templates_dir/iso-27001.json" << 'EOF'
|
||||
{
|
||||
"name": "ISO-27001",
|
||||
"version": "2022",
|
||||
"description": "Information Security Management System",
|
||||
"controls": {
|
||||
"ISO-001": {
|
||||
"title": "Information Security Policies",
|
||||
"description": "Define information security policies",
|
||||
"category": "Policies",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Policy framework",
|
||||
"Policy review",
|
||||
"Policy communication"
|
||||
]
|
||||
},
|
||||
"ISO-002": {
|
||||
"title": "Organization of Information Security",
|
||||
"description": "Establish information security organization",
|
||||
"category": "Organization",
|
||||
"severity": "high",
|
||||
"requirements": [
|
||||
"Security roles",
|
||||
"Segregation of duties",
|
||||
"Contact with authorities"
|
||||
]
|
||||
},
|
||||
"ISO-003": {
|
||||
"title": "Human Resource Security",
|
||||
"description": "Ensure security in human resources",
|
||||
"category": "Human Resources",
|
||||
"severity": "medium",
|
||||
"requirements": [
|
||||
"Screening",
|
||||
"Terms and conditions",
|
||||
"Security awareness"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
log_info "Framework templates initialized" "compliance"
|
||||
}
|
||||
|
||||
# Framework management functions
|
||||
enable_framework() {
|
||||
local framework_name="$1"
|
||||
local config_file="$2"
|
||||
|
||||
if [[ -z "$framework_name" ]]; then
|
||||
log_error "Framework name is required" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Validate framework name
|
||||
local valid_framework=false
|
||||
for framework in "${SUPPORTED_FRAMEWORKS[@]}"; do
|
||||
if [[ "$framework" == "$framework_name" ]]; then
|
||||
valid_framework=true
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
if [[ "$valid_framework" != "true" ]]; then
|
||||
log_error "Unsupported framework: $framework_name" "compliance"
|
||||
log_info "Supported frameworks: ${SUPPORTED_FRAMEWORKS[*]}" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
local template_file="$compliance_base/templates/${framework_name,,}.json"
|
||||
|
||||
# Check if framework template exists
|
||||
if [[ ! -f "$template_file" ]]; then
|
||||
log_error "Framework template not found: $template_file" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Load template
|
||||
local template_data
|
||||
template_data=$(jq -r '.' "$template_file")
|
||||
|
||||
# Merge custom configuration if provided
|
||||
if [[ -n "$config_file" && -f "$config_file" ]]; then
|
||||
if jq empty "$config_file" 2>/dev/null; then
|
||||
template_data=$(jq -s '.[0] * .[1]' <(echo "$template_data") "$config_file")
|
||||
else
|
||||
log_warning "Invalid JSON in framework configuration, using template defaults" "compliance"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Add framework to database
|
||||
jq --arg name "$framework_name" --argjson data "$template_data" \
|
||||
'.frameworks[$name] = $data' "$compliance_db" > "$compliance_db.tmp" && mv "$compliance_db.tmp" "$compliance_db"
|
||||
|
||||
log_success "Framework '$framework_name' enabled successfully" "compliance"
|
||||
}
|
||||
|
||||
disable_framework() {
|
||||
local framework_name="$1"
|
||||
|
||||
if [[ -z "$framework_name" ]]; then
|
||||
log_error "Framework name is required" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
|
||||
# Remove framework from database
|
||||
jq --arg name "$framework_name" 'del(.frameworks[$name])' "$compliance_db" > "$compliance_db.tmp" && mv "$compliance_db.tmp" "$compliance_db"
|
||||
|
||||
log_success "Framework '$framework_name' disabled successfully" "compliance"
|
||||
}
|
||||
|
||||
list_frameworks() {
|
||||
local format="${1:-table}"
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
|
||||
if [[ ! -f "$compliance_db" ]]; then
|
||||
log_error "Compliance database not found" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
jq -r '.frameworks' "$compliance_db"
|
||||
;;
|
||||
"csv")
|
||||
echo "framework,version,description,controls_count"
|
||||
jq -r '.frameworks | to_entries[] | [.key, .value.version, .value.description, (.value.controls | length)] | @csv' "$compliance_db"
|
||||
;;
|
||||
"table"|*)
|
||||
echo "Enabled Compliance Frameworks:"
|
||||
echo "=============================="
|
||||
jq -r '.frameworks | to_entries[] | "\(.key) (\(.value.version)) - \(.value.description)"' "$compliance_db"
|
||||
;;
|
||||
esac
|
||||
}
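# Illustrative usage (output shown is hypothetical and depends on which
# frameworks are enabled in ${WORKSPACE}/compliance/compliance.json):
#
#   list_frameworks table
#   # Enabled Compliance Frameworks:
#   # ==============================
#   # PCI-DSS (4.0) - Payment Card Industry Data Security Standard
#
#   list_frameworks csv > /tmp/frameworks.csv   # machine-readable export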
|
||||
|
||||
# Compliance scanning and assessment
|
||||
run_compliance_scan() {
|
||||
local framework_name="$1"
|
||||
local scan_level="${2:-standard}" # quick, standard, thorough
|
||||
|
||||
if [[ -z "$framework_name" ]]; then
|
||||
log_error "Framework name is required" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
|
||||
# Check if framework is enabled
|
||||
if ! jq -e ".frameworks[\"$framework_name\"]" "$compliance_db" > /dev/null 2>&1; then
|
||||
log_error "Framework '$framework_name' is not enabled" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Running compliance scan for framework: $framework_name (level: $scan_level)" "compliance"
|
||||
|
||||
# Create scan report
|
||||
local scan_id="scan-$(date +%Y%m%d-%H%M%S)"
|
||||
local report_file="$compliance_base/reports/${framework_name}-${scan_id}.json"
|
||||
|
||||
# Initialize report structure
|
||||
local report_data
|
||||
# Unquoted heredoc so the scan metadata and timestamp expand into the skeleton
report_data=$(cat << EOF
{
    "scan_id": "$scan_id",
    "framework": "$framework_name",
    "scan_level": "$scan_level",
    "timestamp": "$(date -Iseconds)",
    "results": {},
    "summary": {
        "total_controls": 0,
        "passed": 0,
        "failed": 0,
        "warnings": 0,
        "not_applicable": 0
    }
}
EOF
)
|
||||
|
||||
# Get framework controls
|
||||
local controls
|
||||
controls=$(jq -r ".frameworks[\"$framework_name\"].controls" "$compliance_db")
|
||||
|
||||
# Scan each control
|
||||
local total_controls=0
|
||||
local passed_controls=0
|
||||
local failed_controls=0
|
||||
local warning_controls=0
|
||||
local na_controls=0
|
||||
|
||||
while IFS= read -r control_id; do
|
||||
if [[ -n "$control_id" ]]; then
|
||||
total_controls=$((total_controls + 1))
|
||||
|
||||
# Assess control compliance
|
||||
local control_result
|
||||
control_result=$(assess_control_compliance "$framework_name" "$control_id" "$scan_level")
|
||||
|
||||
# Parse result
|
||||
local status
|
||||
status=$(echo "$control_result" | jq -r '.status')
|
||||
|
||||
case "$status" in
|
||||
"PASS")
|
||||
passed_controls=$((passed_controls + 1))
|
||||
;;
|
||||
"FAIL")
|
||||
failed_controls=$((failed_controls + 1))
|
||||
;;
|
||||
"WARNING")
|
||||
warning_controls=$((warning_controls + 1))
|
||||
;;
|
||||
"N/A")
|
||||
na_controls=$((na_controls + 1))
|
||||
;;
|
||||
esac
|
||||
|
||||
# Add to report
|
||||
report_data=$(echo "$report_data" | jq --arg id "$control_id" --argjson result "$control_result" '.results[$id] = $result')
|
||||
fi
|
||||
done < <(echo "$controls" | jq -r 'keys[]')
|
||||
|
||||
# Update summary
|
||||
report_data=$(echo "$report_data" | jq --argjson total $total_controls --argjson passed $passed_controls --argjson failed $failed_controls --argjson warnings $warning_controls --argjson na $na_controls \
|
||||
'.summary.total_controls = $total | .summary.passed = $passed | .summary.failed = $failed | .summary.warnings = $warnings | .summary.not_applicable = $na')
|
||||
|
||||
# Save report
|
||||
echo "$report_data" > "$report_file"
|
||||
|
||||
# Update compliance database
|
||||
jq --arg framework "$framework_name" --arg scan_id "$scan_id" --arg report_file "$report_file" \
|
||||
'.reports[$framework] = {"last_scan": $scan_id, "report_file": $report_file}' "$compliance_db" > "$compliance_db.tmp" && mv "$compliance_db.tmp" "$compliance_db"
|
||||
|
||||
log_success "Compliance scan completed: $scan_id" "compliance"
|
||||
log_info "Report saved to: $report_file" "compliance"
|
||||
|
||||
# Print summary
|
||||
echo "Compliance Scan Summary:"
|
||||
echo "========================"
|
||||
echo "Framework: $framework_name"
|
||||
echo "Scan Level: $scan_level"
|
||||
echo "Total Controls: $total_controls"
|
||||
echo "Passed: $passed_controls"
|
||||
echo "Failed: $failed_controls"
|
||||
echo "Warnings: $warning_controls"
|
||||
echo "Not Applicable: $na_controls"
|
||||
|
||||
return 0
|
||||
}
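# Illustrative invocation (hypothetical values): run a standard scan against an
# already-enabled framework, then pull the summary out of the saved report.
#
#   run_compliance_scan "PCI-DSS" "standard"
#   jq '.summary' "${WORKSPACE}/compliance/reports/"PCI-DSS-scan-*.json
#   # e.g. {"total_controls": 3, "passed": 2, "failed": 1, "warnings": 0, "not_applicable": 0}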
|
||||
|
||||
# Control assessment
|
||||
assess_control_compliance() {
|
||||
local framework_name="$1"
|
||||
local control_id="$2"
|
||||
local scan_level="$3"
|
||||
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
|
||||
# Get control details
|
||||
local control_info
|
||||
control_info=$(jq -r ".frameworks[\"$framework_name\"].controls[\"$control_id\"]" "$compliance_db")
|
||||
|
||||
local control_title
|
||||
control_title=$(echo "$control_info" | jq -r '.title')
|
||||
local control_category
|
||||
control_category=$(echo "$control_info" | jq -r '.category')
|
||||
local control_severity
|
||||
control_severity=$(echo "$control_info" | jq -r '.severity')
|
||||
|
||||
# Perform control-specific assessment
|
||||
local status="PASS"
|
||||
local evidence=""
|
||||
local findings=""
|
||||
|
||||
case "$control_id" in
|
||||
"SOX-001"|"PCI-001"|"HIPAA-003"|"ISO-002")
|
||||
# Access Control assessment
|
||||
if check_access_controls; then
|
||||
status="PASS"
|
||||
evidence="Access controls properly configured"
|
||||
else
|
||||
status="FAIL"
|
||||
evidence="Access controls not properly configured"
|
||||
findings="Missing role-based access control implementation"
|
||||
fi
|
||||
;;
|
||||
"SOX-002"|"PCI-003"|"ISO-001")
|
||||
# Change Management assessment
|
||||
if check_change_management; then
|
||||
status="PASS"
|
||||
evidence="Change management procedures in place"
|
||||
else
|
||||
status="WARNING"
|
||||
evidence="Change management procedures need improvement"
|
||||
findings="Documentation of change procedures incomplete"
|
||||
fi
|
||||
;;
|
||||
"SOX-003"|"PCI-002"|"HIPAA-002")
|
||||
# Data Protection assessment
|
||||
if check_data_protection; then
|
||||
status="PASS"
|
||||
evidence="Data protection measures implemented"
|
||||
else
|
||||
status="FAIL"
|
||||
evidence="Data protection measures insufficient"
|
||||
findings="Encryption not properly configured"
|
||||
fi
|
||||
;;
|
||||
"GDPR-001"|"GDPR-002"|"GDPR-003")
|
||||
# Privacy assessment
|
||||
if check_privacy_controls; then
|
||||
status="PASS"
|
||||
evidence="Privacy controls implemented"
|
||||
else
|
||||
status="WARNING"
|
||||
evidence="Privacy controls need enhancement"
|
||||
findings="Data minimization not fully implemented"
|
||||
fi
|
||||
;;
|
||||
"HIPAA-001")
|
||||
# Administrative safeguards
|
||||
if check_administrative_safeguards; then
|
||||
status="PASS"
|
||||
evidence="Administrative safeguards in place"
|
||||
else
|
||||
status="FAIL"
|
||||
evidence="Administrative safeguards missing"
|
||||
findings="Security officer not designated"
|
||||
fi
|
||||
;;
|
||||
*)
|
||||
# Default assessment
|
||||
status="N/A"
|
||||
evidence="Control not implemented in assessment engine"
|
||||
findings="Manual assessment required"
|
||||
;;
|
||||
esac
|
||||
|
||||
# Create result JSON
|
||||
# Unquoted heredoc so the assessment fields and timestamp expand
cat << EOF
{
    "control_id": "$control_id",
    "title": "$control_title",
    "category": "$control_category",
    "severity": "$control_severity",
    "status": "$status",
    "evidence": "$evidence",
    "findings": "$findings",
    "assessment_time": "$(date -Iseconds)"
}
EOF
|
||||
}
|
||||
|
||||
# Control check functions (stubs for now)
|
||||
check_access_controls() {
|
||||
# Check if access controls are properly configured
|
||||
# This would check user management, role assignments, etc.
|
||||
local user_count
|
||||
user_count=$(jq -r '.users | length' "${WORKSPACE}/users.json" 2>/dev/null || echo "0")
|
||||
|
||||
if [[ $user_count -gt 0 ]]; then
|
||||
return 0 # Pass
|
||||
else
|
||||
return 1 # Fail
|
||||
fi
|
||||
}
|
||||
|
||||
check_change_management() {
|
||||
# Check if change management procedures are in place
|
||||
# This would check for change logs, approval processes, etc.
|
||||
local audit_logs
|
||||
audit_logs=$(find "${WORKSPACE}/audit" -name "*.log" 2>/dev/null | wc -l)
|
||||
|
||||
if [[ $audit_logs -gt 0 ]]; then
|
||||
return 0 # Pass
|
||||
else
|
||||
return 1 # Fail
|
||||
fi
|
||||
}
|
||||
|
||||
check_data_protection() {
|
||||
# Check if data protection measures are implemented
|
||||
# This would check encryption, backup procedures, etc.
|
||||
local backup_count
|
||||
backup_count=$(find "${WORKSPACE}/backups" -name "*.tar.gz" 2>/dev/null | wc -l)
|
||||
|
||||
if [[ $backup_count -gt 0 ]]; then
|
||||
return 0 # Pass
|
||||
else
|
||||
return 1 # Fail
|
||||
fi
|
||||
}
|
||||
|
||||
check_privacy_controls() {
|
||||
# Check if privacy controls are implemented
|
||||
# This would check data minimization, consent management, etc.
|
||||
# For now, return pass if audit system is enabled
|
||||
if [[ "$COMPLIANCE_ENABLED" == "true" ]]; then
|
||||
return 0 # Pass
|
||||
else
|
||||
return 1 # Fail
|
||||
fi
|
||||
}
|
||||
|
||||
check_administrative_safeguards() {
|
||||
# Check if administrative safeguards are in place
|
||||
# This would check security officer designation, training, etc.
|
||||
# For now, return pass if compliance system is initialized
|
||||
local compliance_db="${WORKSPACE}/compliance/compliance.json"
|
||||
if [[ -f "$compliance_db" ]]; then
|
||||
return 0 # Pass
|
||||
else
|
||||
return 1 # Fail
|
||||
fi
|
||||
}
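# Sketch of a fuller administrative-safeguards check (assumption only: a
# "security_officer" role inside ${WORKSPACE}/users.json is not defined
# anywhere in this script, so this stays a commented-out illustration):
#
# check_administrative_safeguards() {
#     jq -e '.users[] | select(.role == "security_officer")' \
#         "${WORKSPACE}/users.json" > /dev/null 2>&1
# }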
|
||||
|
||||
# Compliance reporting
|
||||
generate_compliance_report() {
|
||||
local framework_name="$1"
|
||||
local report_format="${2:-html}"
|
||||
local report_period="${3:-monthly}"
|
||||
|
||||
if [[ -z "$framework_name" ]]; then
|
||||
log_error "Framework name is required" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local compliance_base="${WORKSPACE}/compliance"
|
||||
local compliance_db="$compliance_base/compliance.json"
|
||||
|
||||
# Check if framework is enabled
|
||||
if ! jq -e ".frameworks[\"$framework_name\"]" "$compliance_db" > /dev/null 2>&1; then
|
||||
log_error "Framework '$framework_name' is not enabled" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Get latest scan report
|
||||
local report_file
|
||||
report_file=$(jq -r ".reports[\"$framework_name\"].report_file" "$compliance_db" 2>/dev/null)
|
||||
|
||||
if [[ -z "$report_file" || "$report_file" == "null" ]]; then
|
||||
log_error "No scan report found for framework '$framework_name'" "compliance"
|
||||
log_info "Run a compliance scan first: compliance scan $framework_name" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$report_file" ]]; then
|
||||
log_error "Report file not found: $report_file" "compliance"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Generate report based on format
|
||||
case "$report_format" in
|
||||
"html")
|
||||
generate_html_compliance_report "$framework_name" "$report_file"
|
||||
;;
|
||||
"json")
|
||||
generate_json_compliance_report "$framework_name" "$report_file"
|
||||
;;
|
||||
"pdf")
|
||||
generate_pdf_compliance_report "$framework_name" "$report_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported report format: $report_format" "compliance"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
generate_html_compliance_report() {
|
||||
local framework_name="$1"
|
||||
local report_file="$2"
|
||||
|
||||
local report_data
|
||||
report_data=$(jq -r '.' "$report_file")
|
||||
|
||||
local output_file="${WORKSPACE}/compliance/reports/${framework_name}-report-$(date +%Y%m%d).html"
|
||||
|
||||
# Generate HTML report
|
||||
cat > "$output_file" << 'EOF'
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Compliance Report - $framework_name</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 20px; }
|
||||
.header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
|
||||
.summary { margin: 20px 0; }
|
||||
.control { margin: 10px 0; padding: 10px; border: 1px solid #ddd; border-radius: 3px; }
|
||||
.pass { background-color: #d4edda; border-color: #c3e6cb; }
|
||||
.fail { background-color: #f8d7da; border-color: #f5c6cb; }
|
||||
.warning { background-color: #fff3cd; border-color: #ffeaa7; }
|
||||
.na { background-color: #e2e3e5; border-color: #d6d8db; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>Compliance Report - $framework_name</h1>
|
||||
<p>Generated: $(date)</p>
|
||||
<p>Scan ID: $(echo "$report_data" | jq -r '.scan_id')</p>
|
||||
</div>
|
||||
|
||||
<div class="summary">
|
||||
<h2>Summary</h2>
|
||||
<p>Total Controls: $(echo "$report_data" | jq -r '.summary.total_controls')</p>
|
||||
<p>Passed: $(echo "$report_data" | jq -r '.summary.passed')</p>
|
||||
<p>Failed: $(echo "$report_data" | jq -r '.summary.failed')</p>
|
||||
<p>Warnings: $(echo "$report_data" | jq -r '.summary.warnings')</p>
|
||||
<p>Not Applicable: $(echo "$report_data" | jq -r '.summary.not_applicable')</p>
|
||||
</div>
|
||||
|
||||
<div class="controls">
|
||||
<h2>Control Results</h2>
|
||||
EOF
|
||||
|
||||
# Add control results
|
||||
echo "$report_data" | jq -r '.results | to_entries[] | "\(.key): \(.value.status)"' | while IFS=':' read -r control_id status; do
|
||||
local control_data
|
||||
control_data=$(echo "$report_data" | jq -r ".results[\"$control_id\"]")
|
||||
local title
|
||||
title=$(echo "$control_data" | jq -r '.title')
|
||||
local evidence
|
||||
evidence=$(echo "$control_data" | jq -r '.evidence')
|
||||
local findings
|
||||
findings=$(echo "$control_data" | jq -r '.findings')
|
||||
|
||||
cat >> "$output_file" << 'EOF'
|
||||
<div class="control $status">
|
||||
<h3>$control_id - $title</h3>
|
||||
<p><strong>Status:</strong> $status</p>
|
||||
<p><strong>Evidence:</strong> $evidence</p>
|
||||
EOF
|
||||
|
||||
if [[ -n "$findings" && "$findings" != "null" ]]; then
|
||||
cat >> "$output_file" << 'EOF'
|
||||
<p><strong>Findings:</strong> $findings</p>
|
||||
EOF
|
||||
fi
|
||||
|
||||
cat >> "$output_file" << 'EOF'
|
||||
</div>
|
||||
EOF
|
||||
done
|
||||
|
||||
cat >> "$output_file" << 'EOF'
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
|
||||
log_success "HTML compliance report generated: $output_file" "compliance"
|
||||
}
|
||||
|
||||
generate_json_compliance_report() {
|
||||
local framework_name="$1"
|
||||
local report_file="$2"
|
||||
|
||||
local output_file="${WORKSPACE}/compliance/reports/${framework_name}-report-$(date +%Y%m%d).json"
|
||||
|
||||
# Copy and enhance the report
|
||||
jq --arg framework "$framework_name" --arg generated "$(date -Iseconds)" \
|
||||
'. + {"framework": $framework, "report_generated": $generated}' "$report_file" > "$output_file"
|
||||
|
||||
log_success "JSON compliance report generated: $output_file" "compliance"
|
||||
}
|
||||
|
||||
generate_pdf_compliance_report() {
|
||||
local framework_name="$1"
|
||||
local report_file="$2"
|
||||
|
||||
local output_file="${WORKSPACE}/compliance/reports/${framework_name}-report-$(date +%Y%m%d).pdf"
|
||||
|
||||
# For now, generate HTML and suggest conversion
|
||||
local html_file="${WORKSPACE}/compliance/reports/${framework_name}-report-$(date +%Y%m%d).html"
|
||||
generate_html_compliance_report "$framework_name" "$report_file"
|
||||
|
||||
log_warning "PDF generation not implemented" "compliance"
|
||||
log_info "HTML report generated: $html_file" "compliance"
|
||||
log_info "Convert to PDF manually or use tools like wkhtmltopdf" "compliance"
|
||||
}
|
||||
|
||||
# Compliance command handler
|
||||
handle_compliance_command() {
|
||||
local command="$1"
|
||||
shift
|
||||
|
||||
case "$command" in
|
||||
"init")
|
||||
init_compliance_frameworks
|
||||
;;
|
||||
"enable")
|
||||
local framework_name="$1"
|
||||
local config_file="$2"
|
||||
enable_framework "$framework_name" "$config_file"
|
||||
;;
|
||||
"disable")
|
||||
local framework_name="$1"
|
||||
disable_framework "$framework_name"
|
||||
;;
|
||||
"list")
|
||||
local format="$1"
|
||||
list_frameworks "$format"
|
||||
;;
|
||||
"scan")
|
||||
local framework_name="$1"
|
||||
local scan_level="$2"
|
||||
run_compliance_scan "$framework_name" "$scan_level"
|
||||
;;
|
||||
"report")
|
||||
local framework_name="$1"
|
||||
local format="$2"
|
||||
local period="$3"
|
||||
generate_compliance_report "$framework_name" "$format" "$period"
|
||||
;;
|
||||
"help"|*)
|
||||
echo "Advanced Compliance Framework Commands:"
|
||||
echo "======================================"
|
||||
echo " init - Initialize compliance frameworks"
|
||||
echo " enable <framework> [config_file] - Enable compliance framework"
|
||||
echo " disable <framework> - Disable compliance framework"
|
||||
echo " list [format] - List enabled frameworks (json|csv|table)"
|
||||
echo " scan <framework> [level] - Run compliance scan (quick|standard|thorough)"
|
||||
echo " report <framework> [format] [period] - Generate compliance report (html|json|pdf)"
|
||||
echo " help - Show this help"
|
||||
echo ""
|
||||
echo "Supported Frameworks:"
|
||||
echo " SOX, PCI-DSS, HIPAA, GDPR, ISO-27001, NIST-CSF, CIS, FEDRAMP, SOC-2, CMMC"
|
||||
;;
|
||||
esac
|
||||
}
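# Typical call sequence (illustrative; assumes the main apt-layer entry point
# routes "compliance" subcommands to this handler):
#
#   handle_compliance_command init
#   handle_compliance_command enable PCI-DSS
#   handle_compliance_command scan PCI-DSS thorough
#   handle_compliance_command report PCI-DSS html monthly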
|
||||
|
|
@@ -1,752 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Enterprise Integration for apt-layer
|
||||
# Provides hooks and integrations with enterprise tools and systems
|
||||
# Supports SIEM, ticketing, monitoring, and other enterprise integrations
|
||||
|
||||
# Enterprise integration configuration
|
||||
ENTERPRISE_INTEGRATION_ENABLED="${ENTERPRISE_INTEGRATION_ENABLED:-true}"
|
||||
ENTERPRISE_INTEGRATION_LEVEL="${ENTERPRISE_INTEGRATION_LEVEL:-basic}" # basic, standard, advanced
|
||||
ENTERPRISE_INTEGRATION_TIMEOUT="${ENTERPRISE_INTEGRATION_TIMEOUT:-30}"
|
||||
ENTERPRISE_INTEGRATION_RETRY="${ENTERPRISE_INTEGRATION_RETRY:-3}"
|
||||
|
||||
# Supported enterprise integrations
|
||||
SUPPORTED_INTEGRATIONS=(
|
||||
"SIEM" # Security Information and Event Management
|
||||
"TICKETING" # IT Service Management / Ticketing
|
||||
"MONITORING" # System monitoring and alerting
|
||||
"CMDB" # Configuration Management Database
|
||||
"BACKUP" # Enterprise backup systems
|
||||
"SECURITY" # Security tools and platforms
|
||||
"COMPLIANCE" # Compliance and governance tools
|
||||
"DEVOPS" # DevOps and CI/CD tools
|
||||
"CLOUD" # Cloud platform integrations
|
||||
"CUSTOM" # Custom enterprise integrations
|
||||
)
|
||||
|
||||
# Enterprise integration initialization
|
||||
init_enterprise_integration() {
|
||||
log_info "Initializing enterprise integration system..." "enterprise"
|
||||
|
||||
# Create enterprise integration directories
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
mkdir -p "$enterprise_base"
|
||||
mkdir -p "$enterprise_base/integrations"
|
||||
mkdir -p "$enterprise_base/hooks"
|
||||
mkdir -p "$enterprise_base/configs"
|
||||
mkdir -p "$enterprise_base/logs"
|
||||
mkdir -p "$enterprise_base/templates"
|
||||
|
||||
# Initialize enterprise integration database
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
if [[ ! -f "$enterprise_db" ]]; then
|
||||
cat > "$enterprise_db" << 'EOF'
|
||||
{
|
||||
"integrations": {},
|
||||
"hooks": {},
|
||||
"configs": {},
|
||||
"metadata": {
|
||||
"created": "",
|
||||
"version": "1.0",
|
||||
"last_sync": null
|
||||
}
|
||||
}
|
||||
EOF
|
||||
# Set creation timestamp
|
||||
jq --arg created "$(date -Iseconds)" '.metadata.created = $created' "$enterprise_db" > "$enterprise_db.tmp" && mv "$enterprise_db.tmp" "$enterprise_db"
|
||||
fi
|
||||
|
||||
# Initialize integration templates
|
||||
init_integration_templates
|
||||
|
||||
log_success "Enterprise integration system initialized" "enterprise"
|
||||
}
|
||||
|
||||
# Initialize integration templates
|
||||
init_integration_templates() {
|
||||
local templates_dir="${WORKSPACE}/enterprise/templates"
|
||||
|
||||
# SIEM Integration Template
|
||||
cat > "$templates_dir/siem.json" << 'EOF'
|
||||
{
|
||||
"name": "SIEM",
|
||||
"type": "security",
|
||||
"description": "Security Information and Event Management Integration",
|
||||
"endpoints": {
|
||||
"events": "https://siem.example.com/api/v1/events",
|
||||
"alerts": "https://siem.example.com/api/v1/alerts",
|
||||
"incidents": "https://siem.example.com/api/v1/incidents"
|
||||
},
|
||||
"authentication": {
|
||||
"type": "api_key",
|
||||
"header": "X-API-Key"
|
||||
},
|
||||
"events": {
|
||||
"layer_created": true,
|
||||
"layer_deleted": true,
|
||||
"security_scan": true,
|
||||
"compliance_scan": true,
|
||||
"user_action": true,
|
||||
"system_event": true
|
||||
},
|
||||
"format": "json",
|
||||
"retry_policy": {
|
||||
"max_retries": 3,
|
||||
"backoff_multiplier": 2,
|
||||
"timeout": 30
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# Ticketing Integration Template
|
||||
cat > "$templates_dir/ticketing.json" << 'EOF'
|
||||
{
|
||||
"name": "TICKETING",
|
||||
"type": "service_management",
|
||||
"description": "IT Service Management / Ticketing System Integration",
|
||||
"endpoints": {
|
||||
"tickets": "https://ticketing.example.com/api/v2/tickets",
|
||||
"incidents": "https://ticketing.example.com/api/v2/incidents",
|
||||
"changes": "https://ticketing.example.com/api/v2/changes"
|
||||
},
|
||||
"authentication": {
|
||||
"type": "basic_auth",
|
||||
"username": "service_account",
|
||||
"password": "encrypted_password"
|
||||
},
|
||||
"triggers": {
|
||||
"security_incident": true,
|
||||
"compliance_violation": true,
|
||||
"system_failure": true,
|
||||
"maintenance_required": true,
|
||||
"user_request": true
|
||||
},
|
||||
"format": "json",
|
||||
"priority_mapping": {
|
||||
"critical": "P1",
|
||||
"high": "P2",
|
||||
"medium": "P3",
|
||||
"low": "P4"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# Monitoring Integration Template
|
||||
cat > "$templates_dir/monitoring.json" << 'EOF'
|
||||
{
|
||||
"name": "MONITORING",
|
||||
"type": "monitoring",
|
||||
"description": "System Monitoring and Alerting Integration",
|
||||
"endpoints": {
|
||||
"metrics": "https://monitoring.example.com/api/v1/metrics",
|
||||
"alerts": "https://monitoring.example.com/api/v1/alerts",
|
||||
"health": "https://monitoring.example.com/api/v1/health"
|
||||
},
|
||||
"authentication": {
|
||||
"type": "bearer_token",
|
||||
"token": "encrypted_token"
|
||||
},
|
||||
"metrics": {
|
||||
"layer_count": true,
|
||||
"storage_usage": true,
|
||||
"security_status": true,
|
||||
"compliance_status": true,
|
||||
"user_activity": true,
|
||||
"system_performance": true
|
||||
},
|
||||
"format": "json",
|
||||
"collection_interval": 300
|
||||
}
|
||||
EOF
|
||||
|
||||
# CMDB Integration Template
|
||||
cat > "$templates_dir/cmdb.json" << 'EOF'
|
||||
{
|
||||
"name": "CMDB",
|
||||
"type": "configuration_management",
|
||||
"description": "Configuration Management Database Integration",
|
||||
"endpoints": {
|
||||
"assets": "https://cmdb.example.com/api/v1/assets",
|
||||
"configurations": "https://cmdb.example.com/api/v1/configurations",
|
||||
"relationships": "https://cmdb.example.com/api/v1/relationships"
|
||||
},
|
||||
"authentication": {
|
||||
"type": "oauth2",
|
||||
"client_id": "apt_layer_client",
|
||||
"client_secret": "encrypted_secret"
|
||||
},
|
||||
"assets": {
|
||||
"layers": true,
|
||||
"deployments": true,
|
||||
"users": true,
|
||||
"configurations": true,
|
||||
"dependencies": true
|
||||
},
|
||||
"format": "json",
|
||||
"sync_interval": 3600
|
||||
}
|
||||
EOF
|
||||
|
||||
# DevOps Integration Template
|
||||
cat > "$templates_dir/devops.json" << 'EOF'
|
||||
{
|
||||
"name": "DEVOPS",
|
||||
"type": "devops",
|
||||
"description": "DevOps and CI/CD Tools Integration",
|
||||
"endpoints": {
|
||||
"pipelines": "https://devops.example.com/api/v1/pipelines",
|
||||
"deployments": "https://devops.example.com/api/v1/deployments",
|
||||
"artifacts": "https://devops.example.com/api/v1/artifacts"
|
||||
},
|
||||
"authentication": {
|
||||
"type": "service_account",
|
||||
"token": "encrypted_token"
|
||||
},
|
||||
"triggers": {
|
||||
"layer_ready": true,
|
||||
"deployment_complete": true,
|
||||
"security_approved": true,
|
||||
"compliance_verified": true
|
||||
},
|
||||
"format": "json",
|
||||
"webhook_url": "https://devops.example.com/webhooks/apt-layer"
|
||||
}
|
||||
EOF
|
||||
|
||||
log_info "Integration templates initialized" "enterprise"
|
||||
}
|
||||
|
||||
# Integration management functions
|
||||
enable_integration() {
|
||||
local integration_name="$1"
|
||||
local config_file="$2"
|
||||
|
||||
if [[ -z "$integration_name" ]]; then
|
||||
log_error "Integration name is required" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Validate integration name
|
||||
local valid_integration=false
|
||||
for integration in "${SUPPORTED_INTEGRATIONS[@]}"; do
|
||||
if [[ "$integration" == "$integration_name" ]]; then
|
||||
valid_integration=true
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
if [[ "$valid_integration" != "true" ]]; then
|
||||
log_error "Unsupported integration: $integration_name" "enterprise"
|
||||
log_info "Supported integrations: ${SUPPORTED_INTEGRATIONS[*]}" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
local template_file="$enterprise_base/templates/${integration_name,,}.json"
|
||||
|
||||
# Check if integration template exists
|
||||
if [[ ! -f "$template_file" ]]; then
|
||||
log_error "Integration template not found: $template_file" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Load template
|
||||
local template_data
|
||||
template_data=$(jq -r '.' "$template_file")
|
||||
|
||||
# Merge custom configuration if provided
|
||||
if [[ -n "$config_file" && -f "$config_file" ]]; then
|
||||
if jq empty "$config_file" 2>/dev/null; then
|
||||
template_data=$(jq -s '.[0] * .[1]' <(echo "$template_data") "$config_file")
|
||||
else
|
||||
log_warning "Invalid JSON in integration configuration, using template defaults" "enterprise"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Add integration to database
|
||||
jq --arg name "$integration_name" --argjson data "$template_data" \
|
||||
'.integrations[$name] = $data' "$enterprise_db" > "$enterprise_db.tmp" && mv "$enterprise_db.tmp" "$enterprise_db"
|
||||
|
||||
# Test integration connectivity
|
||||
test_integration_connectivity "$integration_name"
|
||||
|
||||
log_success "Integration '$integration_name' enabled successfully" "enterprise"
|
||||
}
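# Illustrative override (hypothetical endpoint): enable the SIEM integration
# with a site-specific config that is deep-merged over the bundled template.
#
#   cat > /tmp/siem-override.json << 'OVERRIDE'
#   {"endpoints": {"events": "https://siem.internal.example/api/v1/events"}}
#   OVERRIDE
#   enable_integration "SIEM" /tmp/siem-override.json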
|
||||
|
||||
disable_integration() {
|
||||
local integration_name="$1"
|
||||
|
||||
if [[ -z "$integration_name" ]]; then
|
||||
log_error "Integration name is required" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Remove integration from database
|
||||
jq --arg name "$integration_name" 'del(.integrations[$name])' "$enterprise_db" > "$enterprise_db.tmp" && mv "$enterprise_db.tmp" "$enterprise_db"
|
||||
|
||||
log_success "Integration '$integration_name' disabled successfully" "enterprise"
|
||||
}
|
||||
|
||||
list_integrations() {
|
||||
local format="${1:-table}"
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
if [[ ! -f "$enterprise_db" ]]; then
|
||||
log_error "Enterprise integration database not found" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
jq -r '.integrations' "$enterprise_db"
|
||||
;;
|
||||
"csv")
|
||||
echo "integration,type,description,status"
|
||||
jq -r '.integrations | to_entries[] | [.key, .value.type, .value.description, "enabled"] | @csv' "$enterprise_db"
|
||||
;;
|
||||
"table"|*)
|
||||
echo "Enabled Enterprise Integrations:"
|
||||
echo "==============================="
|
||||
jq -r '.integrations | to_entries[] | "\(.key) (\(.value.type)) - \(.value.description)"' "$enterprise_db"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Integration connectivity testing
|
||||
test_integration_connectivity() {
|
||||
local integration_name="$1"
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Get integration configuration
|
||||
local integration_config
|
||||
integration_config=$(jq -r ".integrations[\"$integration_name\"]" "$enterprise_db")
|
||||
|
||||
if [[ "$integration_config" == "null" ]]; then
|
||||
log_error "Integration '$integration_name' not found" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Testing connectivity for integration: $integration_name" "enterprise"
|
||||
|
||||
# Test primary endpoint
|
||||
local primary_endpoint
|
||||
primary_endpoint=$(echo "$integration_config" | jq -r '.endpoints | to_entries[0].value')
|
||||
|
||||
if [[ -n "$primary_endpoint" && "$primary_endpoint" != "null" ]]; then
|
||||
# Test HTTP connectivity
|
||||
if curl -s --connect-timeout 10 --max-time 30 "$primary_endpoint" > /dev/null 2>&1; then
|
||||
log_success "Connectivity test passed for $integration_name" "enterprise"
|
||||
else
|
||||
log_warning "Connectivity test failed for $integration_name" "enterprise"
|
||||
fi
|
||||
else
|
||||
log_info "No primary endpoint configured for $integration_name" "enterprise"
|
||||
fi
|
||||
}
|
||||
|
||||
# Event sending functions
|
||||
send_enterprise_event() {
|
||||
local integration_name="$1"
|
||||
local event_type="$2"
|
||||
local event_data="$3"
|
||||
|
||||
if [[ -z "$integration_name" || -z "$event_type" ]]; then
|
||||
log_error "Integration name and event type are required" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Get integration configuration
|
||||
local integration_config
|
||||
integration_config=$(jq -r ".integrations[\"$integration_name\"]" "$enterprise_db")
|
||||
|
||||
if [[ "$integration_config" == "null" ]]; then
|
||||
log_error "Integration '$integration_name' not found" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if event type is enabled
|
||||
local event_enabled
|
||||
event_enabled=$(echo "$integration_config" | jq -r ".events.$event_type // .triggers.$event_type // false")
|
||||
|
||||
if [[ "$event_enabled" != "true" ]]; then
|
||||
log_debug "Event type '$event_type' not enabled for integration '$integration_name'" "enterprise"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Get endpoint for event type
|
||||
local endpoint
|
||||
case "$event_type" in
|
||||
"layer_created"|"layer_deleted"|"security_scan"|"compliance_scan")
|
||||
endpoint=$(echo "$integration_config" | jq -r '.endpoints.events // .endpoints.alerts')
|
||||
;;
|
||||
"security_incident"|"compliance_violation"|"system_failure")
|
||||
endpoint=$(echo "$integration_config" | jq -r '.endpoints.incidents // .endpoints.alerts')
|
||||
;;
|
||||
*)
|
||||
endpoint=$(echo "$integration_config" | jq -r '.endpoints.events')
|
||||
;;
|
||||
esac
|
||||
|
||||
if [[ -z "$endpoint" || "$endpoint" == "null" ]]; then
|
||||
log_error "No endpoint configured for event type '$event_type'" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Prepare event payload
|
||||
local payload
|
||||
payload=$(prepare_event_payload "$integration_name" "$event_type" "$event_data")
|
||||
|
||||
# Send event
|
||||
send_event_to_integration "$integration_name" "$endpoint" "$payload"
|
||||
}
|
||||
|
||||
prepare_event_payload() {
|
||||
local integration_name="$1"
|
||||
local event_type="$2"
|
||||
local event_data="$3"
|
||||
|
||||
# Base event structure
|
||||
local base_event
|
||||
# Unquoted heredoc so the integration name, event type, and timestamp expand
base_event=$(cat << EOF
{
    "source": "apt-layer",
    "integration": "$integration_name",
    "event_type": "$event_type",
    "timestamp": "$(date -Iseconds)",
    "version": "1.0"
}
EOF
)
|
||||
|
||||
# Merge with event data if provided
|
||||
if [[ -n "$event_data" ]]; then
|
||||
if jq empty <(echo "$event_data") 2>/dev/null; then
|
||||
echo "$base_event" | jq --argjson data "$event_data" '. + $data'
|
||||
else
|
||||
echo "$base_event" | jq --arg data "$event_data" '. + {"message": $data}'
|
||||
fi
|
||||
else
|
||||
echo "$base_event"
|
||||
fi
|
||||
}
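# Example (illustrative): structured event data is merged into the base event,
# so a call like the following yields a single flat JSON payload.
#
#   prepare_event_payload "SIEM" "layer_created" '{"layer": "webapp-v2"}'
#   # {"source":"apt-layer","integration":"SIEM","event_type":"layer_created",
#   #  "timestamp":"2025-01-27T12:00:00+00:00","version":"1.0","layer":"webapp-v2"}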
|
||||
|
||||
send_event_to_integration() {
|
||||
local integration_name="$1"
|
||||
local endpoint="$2"
|
||||
local payload="$3"
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Get integration configuration
|
||||
local integration_config
|
||||
integration_config=$(jq -r ".integrations[\"$integration_name\"]" "$enterprise_db")
|
||||
|
||||
# Get authentication details
|
||||
local auth_type
|
||||
auth_type=$(echo "$integration_config" | jq -r '.authentication.type')
|
||||
|
||||
# Prepare curl command
|
||||
local curl_cmd="curl -s --connect-timeout $ENTERPRISE_INTEGRATION_TIMEOUT --max-time $ENTERPRISE_INTEGRATION_TIMEOUT"
|
||||
|
||||
# Add authentication
|
||||
case "$auth_type" in
|
||||
"api_key")
|
||||
local api_key
|
||||
api_key=$(echo "$integration_config" | jq -r '.authentication.header // "X-API-Key"')
|
||||
local key_value
|
||||
key_value=$(echo "$integration_config" | jq -r '.authentication.key')
|
||||
curl_cmd="$curl_cmd -H \"$api_key: $key_value\""
|
||||
;;
|
||||
"basic_auth")
|
||||
local username
|
||||
username=$(echo "$integration_config" | jq -r '.authentication.username')
|
||||
local password
|
||||
password=$(echo "$integration_config" | jq -r '.authentication.password')
|
||||
curl_cmd="$curl_cmd -u \"$username:$password\""
|
||||
;;
|
||||
"bearer_token")
|
||||
local token
|
||||
token=$(echo "$integration_config" | jq -r '.authentication.token')
|
||||
curl_cmd="$curl_cmd -H \"Authorization: Bearer $token\""
|
||||
;;
|
||||
"oauth2")
|
||||
local client_id
|
||||
client_id=$(echo "$integration_config" | jq -r '.authentication.client_id')
|
||||
local client_secret
|
||||
client_secret=$(echo "$integration_config" | jq -r '.authentication.client_secret')
|
||||
curl_cmd="$curl_cmd -H \"X-Client-ID: $client_id\" -H \"X-Client-Secret: $client_secret\""
|
||||
;;
|
||||
esac
|
||||
|
||||
# Add headers and send
|
||||
curl_cmd="$curl_cmd -H \"Content-Type: application/json\" -X POST -d '$payload' \"$endpoint\""
|
||||
|
||||
# Send with retry logic
|
||||
local retry_count=0
|
||||
local max_retries
|
||||
max_retries=$(echo "$integration_config" | jq -r '.retry_policy.max_retries // 3')
|
||||
|
||||
while [[ $retry_count -lt $max_retries ]]; do
|
||||
local response
|
||||
response=$(eval "$curl_cmd")
|
||||
local exit_code=$?
|
||||
|
||||
if [[ $exit_code -eq 0 ]]; then
|
||||
log_debug "Event sent successfully to $integration_name" "enterprise"
|
||||
return 0
|
||||
else
|
||||
retry_count=$((retry_count + 1))
|
||||
if [[ $retry_count -lt $max_retries ]]; then
|
||||
local backoff
|
||||
backoff=$(echo "$integration_config" | jq -r '.retry_policy.backoff_multiplier // 2')
|
||||
local wait_time=$((retry_count * backoff))
|
||||
log_warning "Event send failed, retrying in ${wait_time}s (attempt $retry_count/$max_retries)" "enterprise"
|
||||
sleep "$wait_time"
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
log_error "Failed to send event to $integration_name after $max_retries attempts" "enterprise"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Hook management functions
|
||||
register_hook() {
|
||||
local hook_name="$1"
|
||||
local hook_script="$2"
|
||||
local event_types="$3"
|
||||
|
||||
if [[ -z "$hook_name" || -z "$hook_script" ]]; then
|
||||
log_error "Hook name and script are required" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local hooks_dir="$enterprise_base/hooks"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Create hook file
|
||||
local hook_file="$hooks_dir/$hook_name.sh"
|
||||
cat > "$hook_file" << EOF
|
||||
#!/bin/bash
|
||||
# Enterprise Integration Hook: $hook_name
|
||||
# Event Types: $event_types
|
||||
|
||||
$hook_script
|
||||
EOF
|
||||
|
||||
chmod +x "$hook_file"
|
||||
|
||||
# Register hook in database
|
||||
jq --arg name "$hook_name" --arg script "$hook_file" --arg events "$event_types" \
|
||||
'.hooks[$name] = {"script": $script, "events": $events, "enabled": true}' "$enterprise_db" > "$enterprise_db.tmp" && mv "$enterprise_db.tmp" "$enterprise_db"
|
||||
|
||||
log_success "Hook '$hook_name' registered successfully" "enterprise"
|
||||
}
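# Illustrative registration (hypothetical hook body): forward security events
# to syslog. The third argument is the comma-separated event list matched by
# execute_hooks() below.
#
#   register_hook "log-security" \
#       'logger -t apt-layer "event=$APT_LAYER_EVENT_TYPE data=$APT_LAYER_EVENT_DATA"' \
#       "security_scan,security_incident"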
|
||||
|
||||
unregister_hook() {
|
||||
local hook_name="$1"
|
||||
|
||||
if [[ -z "$hook_name" ]]; then
|
||||
log_error "Hook name is required" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local hooks_dir="$enterprise_base/hooks"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Remove hook file
|
||||
local hook_file="$hooks_dir/$hook_name.sh"
|
||||
if [[ -f "$hook_file" ]]; then
|
||||
rm -f "$hook_file"
|
||||
fi
|
||||
|
||||
# Remove from database
|
||||
jq --arg name "$hook_name" 'del(.hooks[$name])' "$enterprise_db" > "$enterprise_db.tmp" && mv "$enterprise_db.tmp" "$enterprise_db"
|
||||
|
||||
log_success "Hook '$hook_name' unregistered successfully" "enterprise"
|
||||
}
|
||||
|
||||
list_hooks() {
|
||||
local format="${1:-table}"
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
if [[ ! -f "$enterprise_db" ]]; then
|
||||
log_error "Enterprise integration database not found" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
jq -r '.hooks' "$enterprise_db"
|
||||
;;
|
||||
"csv")
|
||||
echo "hook_name,script,events,enabled"
|
||||
jq -r '.hooks | to_entries[] | [.key, .value.script, .value.events, .value.enabled] | @csv' "$enterprise_db"
|
||||
;;
|
||||
"table"|*)
|
||||
echo "Registered Enterprise Hooks:"
|
||||
echo "============================"
|
||||
jq -r '.hooks | to_entries[] | "\(.key) - \(.value.events) (\(.value.enabled))"' "$enterprise_db"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Hook execution
|
||||
execute_hooks() {
|
||||
local event_type="$1"
|
||||
local event_data="$2"
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Get hooks for this event type
|
||||
local hooks
|
||||
hooks=$(jq -r ".hooks | to_entries[] | select(.value.events | contains(\"$event_type\")) | .key" "$enterprise_db")
|
||||
|
||||
if [[ -z "$hooks" ]]; then
|
||||
log_debug "No hooks registered for event type: $event_type" "enterprise"
|
||||
return 0
|
||||
fi
|
||||
|
||||
while IFS= read -r hook_name; do
|
||||
if [[ -n "$hook_name" ]]; then
|
||||
execute_single_hook "$hook_name" "$event_type" "$event_data"
|
||||
fi
|
||||
done <<< "$hooks"
|
||||
}
|
||||
|
||||
execute_single_hook() {
|
||||
local hook_name="$1"
|
||||
local event_type="$2"
|
||||
local event_data="$3"
|
||||
|
||||
local enterprise_base="${WORKSPACE}/enterprise"
|
||||
local enterprise_db="$enterprise_base/integrations.json"
|
||||
|
||||
# Get hook configuration
|
||||
local hook_config
|
||||
hook_config=$(jq -r ".hooks[\"$hook_name\"]" "$enterprise_db")
|
||||
|
||||
if [[ "$hook_config" == "null" ]]; then
|
||||
log_error "Hook '$hook_name' not found" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local enabled
|
||||
enabled=$(echo "$hook_config" | jq -r '.enabled')
|
||||
|
||||
if [[ "$enabled" != "true" ]]; then
|
||||
log_debug "Hook '$hook_name' is disabled" "enterprise"
|
||||
return 0
|
||||
fi
|
||||
|
||||
local script_path
|
||||
script_path=$(echo "$hook_config" | jq -r '.script')
|
||||
|
||||
if [[ ! -f "$script_path" ]]; then
|
||||
log_error "Hook script not found: $script_path" "enterprise"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Execute hook with environment variables
|
||||
log_debug "Executing hook: $hook_name" "enterprise"
|
||||
|
||||
export APT_LAYER_EVENT_TYPE="$event_type"
|
||||
export APT_LAYER_EVENT_DATA="$event_data"
|
||||
export APT_LAYER_WORKSPACE="$WORKSPACE"
|
||||
|
||||
if bash "$script_path"; then
|
||||
log_debug "Hook '$hook_name' executed successfully" "enterprise"
|
||||
else
|
||||
log_error "Hook '$hook_name' execution failed" "enterprise"
|
||||
fi
|
||||
}
|
||||
|
||||
# Enterprise integration command handler
|
||||
handle_enterprise_integration_command() {
|
||||
local command="$1"
|
||||
shift
|
||||
|
||||
case "$command" in
|
||||
"init")
|
||||
init_enterprise_integration
|
||||
;;
|
||||
"enable")
|
||||
local integration_name="$1"
|
||||
local config_file="$2"
|
||||
enable_integration "$integration_name" "$config_file"
|
||||
;;
|
||||
"disable")
|
||||
local integration_name="$1"
|
||||
disable_integration "$integration_name"
|
||||
;;
|
||||
"list")
|
||||
local format="$1"
|
||||
list_integrations "$format"
|
||||
;;
|
||||
"test")
|
||||
local integration_name="$1"
|
||||
test_integration_connectivity "$integration_name"
|
||||
;;
|
||||
"hook")
|
||||
local hook_command="$1"
|
||||
shift
|
||||
case "$hook_command" in
|
||||
"register")
|
||||
local hook_name="$1"
|
||||
local hook_script="$2"
|
||||
local event_types="$3"
|
||||
register_hook "$hook_name" "$hook_script" "$event_types"
|
||||
;;
|
||||
"unregister")
|
||||
local hook_name="$1"
|
||||
unregister_hook "$hook_name"
|
||||
;;
|
||||
"list")
|
||||
local format="$1"
|
||||
list_hooks "$format"
|
||||
;;
|
||||
*)
|
||||
echo "Hook commands: register, unregister, list"
|
||||
;;
|
||||
esac
|
||||
;;
|
||||
"send")
|
||||
local integration_name="$1"
|
||||
local event_type="$2"
|
||||
local event_data="$3"
|
||||
send_enterprise_event "$integration_name" "$event_type" "$event_data"
|
||||
;;
|
||||
"help"|*)
|
||||
echo "Enterprise Integration Commands:"
|
||||
echo "==============================="
|
||||
echo " init - Initialize enterprise integration system"
|
||||
echo " enable <integration> [config_file] - Enable enterprise integration"
|
||||
echo " disable <integration> - Disable enterprise integration"
|
||||
echo " list [format] - List enabled integrations (json|csv|table)"
|
||||
echo " test <integration> - Test integration connectivity"
|
||||
echo " hook register <name> <script> <events> - Register custom hook"
|
||||
echo " hook unregister <name> - Unregister hook"
|
||||
echo " hook list [format] - List registered hooks"
|
||||
echo " send <integration> <event> [data] - Send event to integration"
|
||||
echo " help - Show this help"
|
||||
echo ""
|
||||
echo "Supported Integrations:"
|
||||
echo " SIEM, TICKETING, MONITORING, CMDB, BACKUP, SECURITY, COMPLIANCE, DEVOPS, CLOUD, CUSTOM"
|
||||
;;
|
||||
esac
|
||||
}
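# Typical call sequence (illustrative; assumes the main apt-layer dispatcher
# routes "enterprise" subcommands to this handler):
#
#   handle_enterprise_integration_command init
#   handle_enterprise_integration_command enable SIEM
#   handle_enterprise_integration_command send SIEM layer_created '{"layer": "webapp-v2"}'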
|
||||
|
|
@@ -1,779 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Advanced Monitoring & Alerting for apt-layer
|
||||
# Provides real-time and scheduled monitoring, customizable alerting, and integration with enterprise monitoring platforms
|
||||
|
||||
# Monitoring & alerting configuration
|
||||
MONITORING_ENABLED="${MONITORING_ENABLED:-true}"
|
||||
ALERTING_ENABLED="${ALERTING_ENABLED:-true}"
|
||||
MONITORING_INTERVAL="${MONITORING_INTERVAL:-300}"
|
||||
ALERT_HISTORY_LIMIT="${ALERT_HISTORY_LIMIT:-1000}"
|
||||
|
||||
# Thresholds (configurable)
|
||||
CPU_THRESHOLD="${CPU_THRESHOLD:-2.0}"
|
||||
CPU_THRESHOLD_5="${CPU_THRESHOLD_5:-2.0}"
|
||||
CPU_THRESHOLD_15="${CPU_THRESHOLD_15:-1.5}"
|
||||
MEM_THRESHOLD="${MEM_THRESHOLD:-100000}"
|
||||
SWAP_THRESHOLD="${SWAP_THRESHOLD:-50000}"
|
||||
DISK_THRESHOLD="${DISK_THRESHOLD:-500000}"
|
||||
INODE_THRESHOLD="${INODE_THRESHOLD:-1000}"
|
||||
DISK_IOWAIT_THRESHOLD="${DISK_IOWAIT_THRESHOLD:-10.0}"
|
||||
LAYER_COUNT_THRESHOLD="${LAYER_COUNT_THRESHOLD:-100}"
|
||||
TENANT_COUNT_THRESHOLD="${TENANT_COUNT_THRESHOLD:-10}"
|
||||
UPTIME_MAX_DAYS="${UPTIME_MAX_DAYS:-180}"
|
||||
|
||||
# Key processes to check (comma-separated)
|
||||
MONITOR_PROCESSES="${MONITOR_PROCESSES:-composefs-alternative.sh,containerd,podman,docker}"
|
||||
|
||||
# Supported alert channels
|
||||
SUPPORTED_ALERT_CHANNELS=(
|
||||
"EMAIL" # Email notifications
|
||||
"WEBHOOK" # Webhook notifications
|
||||
"SIEM" # Security Information and Event Management
|
||||
"PROMETHEUS" # Prometheus metrics
|
||||
"GRAFANA" # Grafana alerting
|
||||
"SLACK" # Slack notifications
|
||||
"TEAMS" # Microsoft Teams
|
||||
"CUSTOM" # Custom scripts/hooks
|
||||
)
|
||||
|
||||
# Monitoring agent initialization
|
||||
init_monitoring_agent() {
|
||||
log_info "Initializing monitoring and alerting system..." "monitoring"
|
||||
|
||||
local monitoring_base="${WORKSPACE}/monitoring"
|
||||
mkdir -p "$monitoring_base"
|
||||
mkdir -p "$monitoring_base/alerts"
|
||||
mkdir -p "$monitoring_base/history"
|
||||
mkdir -p "$monitoring_base/policies"
|
||||
mkdir -p "$monitoring_base/integrations"
|
||||
|
||||
# Initialize alert history
|
||||
local alert_history="$monitoring_base/alert-history.json"
|
||||
if [[ ! -f "$alert_history" ]]; then
|
||||
echo '{"alerts":[]}' > "$alert_history"
|
||||
fi
|
||||
|
||||
log_success "Monitoring and alerting system initialized" "monitoring"
|
||||
}
|
||||
|
||||
# Monitoring functions
|
||||
run_monitoring_checks() {
|
||||
log_info "Running monitoring checks..." "monitoring"
|
||||
check_system_health
|
||||
check_layer_health
|
||||
check_tenant_health
|
||||
check_security_status
|
||||
check_compliance_status
|
||||
log_success "Monitoring checks completed" "monitoring"
|
||||
}
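# Illustrative scheduling (assumption: an "apt-layer monitoring check" CLI
# subcommand wraps run_monitoring_checks; adjust to the real entry point).
# The 5-minute cadence matches the default MONITORING_INTERVAL of 300 seconds:
#
#   */5 * * * *  /usr/local/bin/apt-layer monitoring check >> /var/log/apt-layer-monitoring.log 2>&1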
|
||||
|
||||
check_system_health() {
|
||||
# CPU Load (1, 5, 15 min)
|
||||
local cpu_load1 cpu_load5 cpu_load15
|
||||
read -r cpu_load1 cpu_load5 cpu_load15 _ < /proc/loadavg
|
||||
# Memory
|
||||
local mem_free swap_free
|
||||
mem_free=$(awk '/MemFree/ {print $2}' /proc/meminfo)
|
||||
swap_free=$(awk '/SwapFree/ {print $2}' /proc/meminfo)
|
||||
# Disk
|
||||
local disk_free
|
||||
disk_free=$(df / | awk 'NR==2 {print $4}')
|
||||
# Inodes
|
||||
local inode_free
|
||||
inode_free=$(df -i / | awk 'NR==2 {print $4}')
|
||||
# Uptime
|
||||
local uptime_sec uptime_days
|
||||
uptime_sec=$(awk '{print $1}' /proc/uptime)
|
||||
uptime_days=$(awk -v s="$uptime_sec" 'BEGIN {print int(s/86400)}')
|
||||
# Disk I/O wait (stub, extend with iostat if available)
|
||||
local disk_iowait="0.0"
|
||||
if command -v iostat >/dev/null 2>&1; then
|
||||
disk_iowait=$(iostat -c 1 2 | awk '/^ /{print $4}' | tail -1)
|
||||
fi
|
||||
# Process health
|
||||
IFS=',' read -ra procs <<< "$MONITOR_PROCESSES"
|
||||
for proc in "${procs[@]}"; do
|
||||
# -f matches the full command line; -x would miss script names longer than the 15-char comm field
if ! pgrep -f "$proc" >/dev/null 2>&1; then
|
||||
trigger_alert "system" "Critical process not running: $proc" "critical"
|
||||
fi
|
||||
done
|
||||
# Threshold checks
|
||||
if (( $(echo "$cpu_load1 > $CPU_THRESHOLD" | bc -l) )); then
|
||||
trigger_alert "system" "High 1-min CPU load: $cpu_load1" "critical"
|
||||
fi
|
||||
if (( $(echo "$cpu_load5 > $CPU_THRESHOLD_5" | bc -l) )); then
|
||||
trigger_alert "system" "High 5-min CPU load: $cpu_load5" "warning"
|
||||
fi
|
||||
if (( $(echo "$cpu_load15 > $CPU_THRESHOLD_15" | bc -l) )); then
|
||||
trigger_alert "system" "High 15-min CPU load: $cpu_load15" "info"
|
||||
fi
|
||||
if (( mem_free < MEM_THRESHOLD )); then
|
||||
trigger_alert "system" "Low memory: $mem_free kB" "warning"
|
||||
fi
|
||||
if (( swap_free < SWAP_THRESHOLD )); then
|
||||
trigger_alert "system" "Low swap: $swap_free kB" "warning"
|
||||
fi
|
||||
if (( disk_free < DISK_THRESHOLD )); then
|
||||
trigger_alert "system" "Low disk space: $disk_free kB" "warning"
|
||||
fi
|
||||
if (( inode_free < INODE_THRESHOLD )); then
|
||||
trigger_alert "system" "Low inode count: $inode_free" "warning"
|
||||
fi
|
||||
if (( $(echo "$disk_iowait > $DISK_IOWAIT_THRESHOLD" | bc -l) )); then
|
||||
trigger_alert "system" "High disk I/O wait: $disk_iowait%" "warning"
|
||||
fi
|
||||
if (( uptime_days > UPTIME_MAX_DAYS )); then
|
||||
trigger_alert "system" "System uptime exceeds $UPTIME_MAX_DAYS days: $uptime_days days" "info"
|
||||
fi
|
||||
# TODO: Add more enterprise checks (network, kernel, hardware, etc.)
|
||||
}
|
||||
|
||||
check_layer_health() {
|
||||
# Layer count
|
||||
local layer_count
|
||||
layer_count=$(find "${WORKSPACE}/layers" -maxdepth 1 -type d 2>/dev/null | wc -l)
|
||||
if (( layer_count > LAYER_COUNT_THRESHOLD )); then
|
||||
trigger_alert "layer" "Layer count exceeds $LAYER_COUNT_THRESHOLD: $layer_count" "info"
|
||||
fi
|
||||
# TODO: Add failed/unhealthy layer detection, stale layer checks
|
||||
}
|
||||
|
||||
check_tenant_health() {
|
||||
local tenant_dir="${WORKSPACE}/tenants"
|
||||
if [[ -d "$tenant_dir" ]]; then
|
||||
local tenant_count
|
||||
tenant_count=$(find "$tenant_dir" -maxdepth 1 -type d 2>/dev/null | wc -l)
|
||||
if (( tenant_count > TENANT_COUNT_THRESHOLD )); then
|
||||
trigger_alert "tenant" "Tenant count exceeds $TENANT_COUNT_THRESHOLD: $tenant_count" "info"
|
||||
fi
|
||||
# TODO: Add quota usage, unhealthy tenant, cross-tenant contention checks
|
||||
fi
|
||||
}
|
||||
|
||||
check_security_status() {
|
||||
# Security scan failures
|
||||
local security_status_file="${WORKSPACE}/security/last-scan.json"
|
||||
if [[ -f "$security_status_file" ]]; then
|
||||
local failed
|
||||
failed=$(jq -r '.failed // 0' "$security_status_file")
|
||||
if (( failed > 0 )); then
|
||||
trigger_alert "security" "Security scan failures: $failed" "critical"
|
||||
fi
|
||||
fi
|
||||
# TODO: Add vulnerability count/severity, policy violation checks
|
||||
}
|
||||
|
||||
check_compliance_status() {
|
||||
# Compliance scan failures
|
||||
local compliance_status_file="${WORKSPACE}/compliance/last-scan.json"
|
||||
if [[ -f "$compliance_status_file" ]]; then
|
||||
local failed
|
||||
failed=$(jq -r '.summary.failed // 0' "$compliance_status_file")
|
||||
if (( failed > 0 )); then
|
||||
trigger_alert "compliance" "Compliance scan failures: $failed" "critical"
|
||||
fi
|
||||
fi
|
||||
# TODO: Add control failure severity, audit log gap checks
|
||||
}
|
||||
|
||||
# Alerting functions
|
||||
trigger_alert() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp
|
||||
timestamp=$(date -Iseconds)
|
||||
|
||||
log_warning "ALERT [$severity] from $source: $message" "monitoring"
|
||||
|
||||
# Record alert in history
|
||||
record_alert_history "$source" "$message" "$severity" "$timestamp"
|
||||
|
||||
# Dispatch alert
|
||||
dispatch_alert "$source" "$message" "$severity" "$timestamp"
|
||||
}
|
||||
|
||||
record_alert_history() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
local monitoring_base="${WORKSPACE}/monitoring"
|
||||
local alert_history="$monitoring_base/alert-history.json"
|
||||
|
||||
# Add alert to history (limit to ALERT_HISTORY_LIMIT)
|
||||
local new_alert
|
||||
new_alert=$(jq -n --arg source "$source" --arg message "$message" --arg severity "$severity" --arg timestamp "$timestamp" '{source:$source,message:$message,severity:$severity,timestamp:$timestamp}')
|
||||
local updated_history
|
||||
updated_history=$(jq --argjson alert "$new_alert" '.alerts += [$alert] | .alerts |= (.[-'$ALERT_HISTORY_LIMIT':])' "$alert_history")
|
||||
echo "$updated_history" > "$alert_history"
|
||||
}
|
||||
|
||||
# Alert dispatch functions
|
||||
dispatch_alert() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
|
||||
# Check if alert should be suppressed
|
||||
if is_alert_suppressed "$source" "$message" "$severity"; then
|
||||
log_debug "Alert suppressed: $source - $message" "monitoring"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Check for correlation and grouping
|
||||
local correlation_key
|
||||
correlation_key=$(generate_correlation_key "$source" "$message" "$severity")
|
||||
|
||||
if is_correlated_alert "$correlation_key"; then
|
||||
log_debug "Correlated alert, updating existing: $correlation_key" "monitoring"
|
||||
update_correlated_alert "$correlation_key" "$message" "$timestamp"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Dispatch to all configured channels
|
||||
dispatch_to_email "$source" "$message" "$severity" "$timestamp"
|
||||
dispatch_to_webhook "$source" "$message" "$severity" "$timestamp"
|
||||
dispatch_to_siem "$source" "$message" "$severity" "$timestamp"
|
||||
dispatch_to_prometheus "$source" "$message" "$severity" "$timestamp"
|
||||
dispatch_to_custom "$source" "$message" "$severity" "$timestamp"
|
||||
}
|
||||
|
||||
# Alert suppression
|
||||
is_alert_suppressed() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
|
||||
# Check suppression policies
|
||||
local suppression_file="${WORKSPACE}/monitoring/policies/suppression.json"
|
||||
if [[ -f "$suppression_file" ]]; then
|
||||
# Check if this alert matches any suppression rules
|
||||
local suppressed
|
||||
suppressed=$(jq -r --arg source "$source" --arg severity "$severity" '.rules[] | select(.source == $source and .severity == $severity) | .suppressed' "$suppression_file" 2>/dev/null || echo "false")
|
||||
if [[ "$suppressed" == "true" ]]; then
|
||||
return 0 # Suppressed
|
||||
fi
|
||||
fi
|
||||
|
||||
return 1 # Not suppressed
|
||||
}
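# --- Illustrative example (not part of the original scriptlet) ---
# A suppression policy file matching what is_alert_suppressed() reads above
# (.rules[] entries keyed on source, severity, and a "suppressed" flag) might
# look like the following; the actual rule set is deployment-specific.
#
#   ${WORKSPACE}/monitoring/policies/suppression.json
#   {
#     "rules": [
#       { "source": "layer",  "severity": "info",     "suppressed": true  },
#       { "source": "tenant", "severity": "critical", "suppressed": false }
#     ]
#   }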
|
||||
|
||||
# Event correlation
|
||||
generate_correlation_key() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
|
||||
# Generate a correlation key based on source and message pattern
|
||||
echo "${source}:${severity}:$(echo "$message" | sed 's/[0-9]*//g' | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:]' | tr -s ' ')"
|
||||
}
|
||||
|
||||
is_correlated_alert() {
|
||||
local correlation_key="$1"
|
||||
local correlation_file="${WORKSPACE}/monitoring/correlation.json"
|
||||
|
||||
if [[ -f "$correlation_file" ]]; then
|
||||
jq -e --arg key "$correlation_key" '.correlations[$key]' "$correlation_file" >/dev/null 2>&1
|
||||
return $?
|
||||
fi
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
update_correlated_alert() {
|
||||
local correlation_key="$1"
|
||||
local message="$2"
|
||||
local timestamp="$3"
|
||||
local correlation_file="${WORKSPACE}/monitoring/correlation.json"
|
||||
|
||||
# Update correlation data
|
||||
local correlation_data
|
||||
correlation_data=$(jq --arg key "$correlation_key" --arg message "$message" --arg timestamp "$timestamp" \
|
||||
'.correlations[$key] += {"count": (.correlations[$key].count // 0) + 1, "last_seen": $timestamp, "last_message": $message}' \
|
||||
"$correlation_file" 2>/dev/null || echo '{"correlations":{}}')
|
||||
echo "$correlation_data" > "$correlation_file"
|
||||
}
|
||||
|
||||
# Alert dispatch to different channels
|
||||
dispatch_to_email() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
|
||||
# Check if email alerts are enabled
|
||||
if [[ "${EMAIL_ALERTS_ENABLED:-false}" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local email_config="${WORKSPACE}/monitoring/config/email.json"
|
||||
if [[ ! -f "$email_config" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local smtp_server
|
||||
smtp_server=$(jq -r '.smtp_server' "$email_config")
|
||||
local from_email
|
||||
from_email=$(jq -r '.from_email' "$email_config")
|
||||
local to_emails
|
||||
to_emails=$(jq -r '.to_emails[]' "$email_config")
|
||||
|
||||
if [[ -z "$smtp_server" || -z "$from_email" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Create email content
|
||||
local subject="[ALERT] $severity - $source"
|
||||
local body="Alert Details:
|
||||
Source: $source
|
||||
Severity: $severity
|
||||
Message: $message
|
||||
Timestamp: $timestamp
|
||||
Hostname: $(hostname)"
|
||||
|
||||
# Send email (stub - implement with mail command or curl)
|
||||
log_debug "Sending email alert to: $to_emails" "monitoring"
|
||||
# echo "$body" | mail -s "$subject" -r "$from_email" "$to_emails"
|
||||
}
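# --- Illustrative example (not part of the original scriptlet) ---
# dispatch_to_email() above expects an email.json with smtp_server, from_email,
# and a to_emails array; a minimal file could look like this (all values are
# placeholders):
#
#   ${WORKSPACE}/monitoring/config/email.json
#   {
#     "smtp_server": "smtp.example.com",
#     "from_email": "alerts@example.com",
#     "to_emails": ["ops@example.com"]
#   }
#
# The commented-out mail(1) line above is one way to complete the sending stub,
# assuming a local MTA plus bsd-mailx (or an equivalent mail command) is
# installed and configured on the host.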
|
||||
|
||||
dispatch_to_webhook() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
|
||||
# Check if webhook alerts are enabled
|
||||
if [[ "${WEBHOOK_ALERTS_ENABLED:-false}" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local webhook_config="${WORKSPACE}/monitoring/config/webhook.json"
|
||||
if [[ ! -f "$webhook_config" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local webhook_url
|
||||
webhook_url=$(jq -r '.url' "$webhook_config")
|
||||
local auth_token
|
||||
auth_token=$(jq -r '.auth_token // empty' "$webhook_config")
|
||||
|
||||
if [[ -z "$webhook_url" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Create webhook payload
|
||||
local payload
|
||||
payload=$(jq -n \
|
||||
--arg source "$source" \
|
||||
--arg message "$message" \
|
||||
--arg severity "$severity" \
|
||||
--arg timestamp "$timestamp" \
|
||||
--arg hostname "$(hostname)" \
|
||||
'{
|
||||
"source": $source,
|
||||
"message": $message,
|
||||
"severity": $severity,
|
||||
"timestamp": $timestamp,
|
||||
"hostname": $hostname
|
||||
}')
|
||||
|
||||
# Send webhook
|
||||
local curl_cmd="curl -s --connect-timeout 10 --max-time 30 -X POST -H 'Content-Type: application/json'"
|
||||
if [[ -n "$auth_token" ]]; then
|
||||
curl_cmd="$curl_cmd -H 'Authorization: Bearer $auth_token'"
|
||||
fi
|
||||
curl_cmd="$curl_cmd -d '$payload' '$webhook_url'"
|
||||
|
||||
log_debug "Sending webhook alert to: $webhook_url" "monitoring"
|
||||
eval "$curl_cmd" >/dev/null 2>&1
|
||||
}
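# --- Illustrative example (not part of the original scriptlet) ---
# dispatch_to_webhook() above reads .url and an optional .auth_token from
# webhook.json; a minimal file could look like this (values are placeholders):
#
#   ${WORKSPACE}/monitoring/config/webhook.json
#   {
#     "url": "https://hooks.example.com/apt-layer-alerts",
#     "auth_token": "replace-with-a-real-token"
#   }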
|
||||
|
||||
dispatch_to_siem() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
|
||||
# Use enterprise integration if available
|
||||
if command -v send_enterprise_event >/dev/null 2>&1; then
|
||||
local event_data
|
||||
event_data=$(jq -n \
|
||||
--arg source "$source" \
|
||||
--arg message "$message" \
|
||||
--arg severity "$severity" \
|
||||
--arg timestamp "$timestamp" \
|
||||
'{
|
||||
"source": $source,
|
||||
"message": $message,
|
||||
"severity": $severity,
|
||||
"timestamp": $timestamp
|
||||
}')
|
||||
|
||||
send_enterprise_event "SIEM" "alert" "$event_data"
|
||||
fi
|
||||
}
|
||||
|
||||
dispatch_to_prometheus() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
|
||||
# Check if Prometheus metrics are enabled
|
||||
if [[ "${PROMETHEUS_METRICS_ENABLED:-false}" != "true" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local prometheus_config="${WORKSPACE}/monitoring/config/prometheus.json"
|
||||
if [[ ! -f "$prometheus_config" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local pushgateway_url
|
||||
pushgateway_url=$(jq -r '.pushgateway_url' "$prometheus_config")
|
||||
|
||||
if [[ -z "$pushgateway_url" ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Create Prometheus metric
|
||||
local metric_name="apt_layer_alert"
|
||||
local metric_value="1"
|
||||
local labels="source=\"$source\",severity=\"$severity\""
|
||||
|
||||
# Send to Pushgateway
|
||||
local metric_data="$metric_name{$labels} $metric_value"
|
||||
echo "$metric_data" | curl -s --data-binary @- "$pushgateway_url/metrics/job/apt_layer/instance/$(hostname)" >/dev/null 2>&1
|
||||
|
||||
log_debug "Sent Prometheus metric: $metric_data" "monitoring"
|
||||
}
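# --- Illustrative example (not part of the original scriptlet) ---
# dispatch_to_prometheus() above only needs a Pushgateway endpoint; a minimal
# prometheus.json could look like this (the URL is a placeholder):
#
#   ${WORKSPACE}/monitoring/config/prometheus.json
#   {
#     "pushgateway_url": "http://pushgateway.example.com:9091"
#   }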
|
||||
|
||||
dispatch_to_custom() {
|
||||
local source="$1"
|
||||
local message="$2"
|
||||
local severity="$3"
|
||||
local timestamp="$4"
|
||||
|
||||
# Execute custom alert scripts
|
||||
local custom_scripts_dir="${WORKSPACE}/monitoring/scripts"
|
||||
if [[ -d "$custom_scripts_dir" ]]; then
|
||||
for script in "$custom_scripts_dir"/*.sh; do
|
||||
if [[ -f "$script" && -x "$script" ]]; then
|
||||
export ALERT_SOURCE="$source"
|
||||
export ALERT_MESSAGE="$message"
|
||||
export ALERT_SEVERITY="$severity"
|
||||
export ALERT_TIMESTAMP="$timestamp"
|
||||
|
||||
log_debug "Executing custom alert script: $script" "monitoring"
|
||||
bash "$script" >/dev/null 2>&1
|
||||
fi
|
||||
done
|
||||
fi
|
||||
}
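# --- Illustrative example (not part of the original scriptlet) ---
# dispatch_to_custom() above exports ALERT_SOURCE, ALERT_MESSAGE, ALERT_SEVERITY
# and ALERT_TIMESTAMP before running each executable script in
# ${WORKSPACE}/monitoring/scripts/. A minimal custom handler could look like
# this (file name and log path are placeholders):
#
#   #!/bin/bash
#   # ${WORKSPACE}/monitoring/scripts/log-to-file.sh
#   echo "$ALERT_TIMESTAMP [$ALERT_SEVERITY] $ALERT_SOURCE: $ALERT_MESSAGE" \
#       >> /var/log/apt-layer-alerts.log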
|
||||
|
||||
# Policy management
|
||||
create_alert_policy() {
|
||||
local policy_name="$1"
|
||||
local policy_file="$2"
|
||||
|
||||
if [[ -z "$policy_name" || -z "$policy_file" ]]; then
|
||||
log_error "Policy name and file are required" "monitoring"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local policies_dir="${WORKSPACE}/monitoring/policies"
|
||||
local policy_path="$policies_dir/$policy_name.json"
|
||||
|
||||
# Copy policy file
|
||||
if [[ -f "$policy_file" ]]; then
|
||||
cp "$policy_file" "$policy_path"
|
||||
log_success "Alert policy '$policy_name' created" "monitoring"
|
||||
else
|
||||
log_error "Policy file not found: $policy_file" "monitoring"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
list_alert_policies() {
|
||||
local format="${1:-table}"
|
||||
local policies_dir="${WORKSPACE}/monitoring/policies"
|
||||
|
||||
if [[ ! -d "$policies_dir" ]]; then
|
||||
log_error "Policies directory not found" "monitoring"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
echo "{\"policies\":["
|
||||
local first=true
|
||||
for policy in "$policies_dir"/*.json; do
|
||||
if [[ -f "$policy" ]]; then
|
||||
if [[ "$first" == "true" ]]; then
|
||||
first=false
|
||||
else
|
||||
echo ","
|
||||
fi
|
||||
jq -r '.' "$policy"
|
||||
fi
|
||||
done
|
||||
echo "]}"
|
||||
;;
|
||||
"csv")
|
||||
echo "policy_name,file_path,last_modified"
|
||||
for policy in "$policies_dir"/*.json; do
|
||||
if [[ -f "$policy" ]]; then
|
||||
local policy_name
|
||||
policy_name=$(basename "$policy" .json)
|
||||
local last_modified
|
||||
last_modified=$(stat -c %y "$policy" 2>/dev/null || echo "unknown")
|
||||
echo "$policy_name,$policy,$last_modified"
|
||||
fi
|
||||
done
|
||||
;;
|
||||
"table"|*)
|
||||
echo "Alert Policies:"
|
||||
echo "==============="
|
||||
for policy in "$policies_dir"/*.json; do
|
||||
if [[ -f "$policy" ]]; then
|
||||
local policy_name
|
||||
policy_name=$(basename "$policy" .json)
|
||||
echo "- $policy_name"
|
||||
fi
|
||||
done
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Alert history and reporting
|
||||
query_alert_history() {
|
||||
local source="$1"
|
||||
local severity="$2"
|
||||
local days="$3"
|
||||
local format="${4:-table}"
|
||||
|
||||
local monitoring_base="${WORKSPACE}/monitoring"
|
||||
local alert_history="$monitoring_base/alert-history.json"
|
||||
|
||||
if [[ ! -f "$alert_history" ]]; then
|
||||
log_error "Alert history not found" "monitoring"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Build jq filter
|
||||
local filter=".alerts"
|
||||
if [[ -n "$source" ]]; then
|
||||
filter="$filter | map(select(.source == \"$source\"))"
|
||||
fi
|
||||
if [[ -n "$severity" ]]; then
|
||||
filter="$filter | map(select(.severity == \"$severity\"))"
|
||||
fi
|
||||
if [[ -n "$days" ]]; then
|
||||
local cutoff_date
|
||||
cutoff_date=$(date -d "$days days ago" -Iseconds)
|
||||
filter="$filter | map(select(.timestamp >= \"$cutoff_date\"))"
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"json")
|
||||
jq -r "$filter" "$alert_history"
|
||||
;;
|
||||
"csv")
|
||||
echo "source,severity,message,timestamp"
|
||||
jq -r "$filter | .[] | [.source, .severity, .message, .timestamp] | @csv" "$alert_history"
|
||||
;;
|
||||
"table"|*)
|
||||
echo "Alert History:"
|
||||
echo "=============="
|
||||
jq -r "$filter | .[] | \"[\(.severity)] \(.source): \(.message) (\(.timestamp))\"" "$alert_history"
|
||||
;;
|
||||
esac
|
||||
}
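# --- Illustrative usage (not part of the original scriptlet) ---
# Examples of calling query_alert_history() above; empty arguments skip the
# corresponding filter:
#
#   query_alert_history "" "" "" table                 # all alerts, human-readable
#   query_alert_history "security" "critical" 7 json   # critical security alerts, last 7 days
#   query_alert_history "" "" 1 csv                     # everything from the last day as CSV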
|
||||
|
||||
generate_alert_report() {
|
||||
local report_period="${1:-daily}"
|
||||
local output_format="${2:-html}"
|
||||
|
||||
local monitoring_base="${WORKSPACE}/monitoring"
|
||||
local alert_history="$monitoring_base/alert-history.json"
|
||||
mkdir -p "$monitoring_base/reports"
local report_file="$monitoring_base/reports/alert-report-$(date +%Y%m%d).$output_format"
|
||||
|
||||
if [[ ! -f "$alert_history" ]]; then
|
||||
log_error "Alert history not found" "monitoring"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Calculate report period
|
||||
local start_date
|
||||
case "$report_period" in
|
||||
"hourly")
|
||||
start_date=$(date -d "1 hour ago" -Iseconds)
|
||||
;;
|
||||
"daily")
|
||||
start_date=$(date -d "1 day ago" -Iseconds)
|
||||
;;
|
||||
"weekly")
|
||||
start_date=$(date -d "1 week ago" -Iseconds)
|
||||
;;
|
||||
"monthly")
|
||||
start_date=$(date -d "1 month ago" -Iseconds)
|
||||
;;
|
||||
*)
|
||||
start_date=$(date -d "1 day ago" -Iseconds)
|
||||
;;
|
||||
esac
|
||||
|
||||
# Generate report
|
||||
case "$output_format" in
|
||||
"json")
|
||||
jq --arg start_date "$start_date" \
|
||||
'.alerts | map(select(.timestamp >= $start_date)) | group_by(.severity) | map({severity: .[0].severity, count: length, alerts: .})' \
|
||||
"$alert_history" > "$report_file"
|
||||
;;
|
||||
"html")
|
||||
generate_html_alert_report "$start_date" "$report_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported output format: $output_format" "monitoring"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
log_success "Alert report generated: $report_file" "monitoring"
|
||||
}
|
||||
|
||||
generate_html_alert_report() {
|
||||
local start_date="$1"
|
||||
local report_file="$2"
|
||||
local monitoring_base="${WORKSPACE}/monitoring"
|
||||
local alert_history="$monitoring_base/alert-history.json"
|
||||
|
||||
# Get alert data
|
||||
local alert_data
|
||||
alert_data=$(jq --arg start_date "$start_date" \
|
||||
'.alerts | map(select(.timestamp >= $start_date)) | group_by(.severity) | map({severity: .[0].severity, count: length, alerts: .})' \
|
||||
"$alert_history")
|
||||
|
||||
# Generate HTML
|
||||
cat > "$report_file" << EOF
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Alert Report - $(date)</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 20px; }
|
||||
.header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
|
||||
.summary { margin: 20px 0; }
|
||||
.severity { margin: 10px 0; padding: 10px; border: 1px solid #ddd; border-radius: 3px; }
|
||||
.critical { background-color: #f8d7da; border-color: #f5c6cb; }
|
||||
.warning { background-color: #fff3cd; border-color: #ffeaa7; }
|
||||
.info { background-color: #d1ecf1; border-color: #bee5eb; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>Alert Report</h1>
|
||||
<p>Generated: $(date)</p>
|
||||
<p>Period: Since $start_date</p>
|
||||
</div>
|
||||
|
||||
<div class="summary">
|
||||
<h2>Summary</h2>
|
||||
<p>Total Alerts: $(echo "$alert_data" | jq -r 'map(.count) | add // 0')</p>
|
||||
</div>
|
||||
|
||||
<div class="alerts">
|
||||
<h2>Alerts by Severity</h2>
|
||||
EOF
|
||||
|
||||
# Add alerts by severity
|
||||
echo "$alert_data" | jq -r '.[] | "\(.severity): \(.count)"' | while IFS=':' read -r severity count; do
|
||||
if [[ -n "$severity" ]]; then
|
||||
cat >> "$report_file" << EOF
|
||||
<div class="severity $severity">
|
||||
<h3>$severity ($count)</h3>
|
||||
EOF
|
||||
|
||||
# Add individual alerts
|
||||
echo "$alert_data" | jq -r --arg sev "$severity" '.[] | select(.severity == $sev) | .alerts[] | "\(.source): \(.message) (\(.timestamp))"' | while IFS=':' read -r source message; do
|
||||
if [[ -n "$source" ]]; then
|
||||
cat >> "$report_file" << EOF
|
||||
<p><strong>$source</strong>: $message</p>
|
||||
EOF
|
||||
fi
|
||||
done
|
||||
|
||||
cat >> "$report_file" << EOF
|
||||
</div>
|
||||
EOF
|
||||
fi
|
||||
done
|
||||
|
||||
cat >> "$report_file" << EOF
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
}
|
||||
|
||||
# Monitoring command handler
|
||||
handle_monitoring_command() {
|
||||
local command="$1"
|
||||
shift
|
||||
|
||||
case "$command" in
|
||||
"init")
|
||||
init_monitoring_agent
|
||||
;;
|
||||
"check")
|
||||
run_monitoring_checks
|
||||
;;
|
||||
"policy")
|
||||
local policy_command="$1"
|
||||
shift
|
||||
case "$policy_command" in
|
||||
"create")
|
||||
local policy_name="$1"
|
||||
local policy_file="$2"
|
||||
create_alert_policy "$policy_name" "$policy_file"
|
||||
;;
|
||||
"list")
|
||||
local format="$1"
|
||||
list_alert_policies "$format"
|
||||
;;
|
||||
*)
|
||||
echo "Policy commands: create, list"
|
||||
;;
|
||||
esac
|
||||
;;
|
||||
"history")
|
||||
local source="$1"
|
||||
local severity="$2"
|
||||
local days="$3"
|
||||
local format="$4"
|
||||
query_alert_history "$source" "$severity" "$days" "$format"
|
||||
;;
|
||||
"report")
|
||||
local period="$1"
|
||||
local format="$2"
|
||||
generate_alert_report "$period" "$format"
|
||||
;;
|
||||
"help"|*)
|
||||
echo "Monitoring & Alerting Commands:"
|
||||
echo "=============================="
|
||||
echo " init - Initialize monitoring system"
|
||||
echo " check - Run monitoring checks"
|
||||
echo " policy create <name> <file> - Create alert policy"
|
||||
echo " policy list [format] - List alert policies"
|
||||
echo " history [source] [severity] [days] [format] - Query alert history"
|
||||
echo " report [period] [format] - Generate alert report"
|
||||
echo " help - Show this help"
|
||||
echo ""
|
||||
echo "Supported Alert Channels:"
|
||||
echo " EMAIL, WEBHOOK, SIEM, PROMETHEUS, GRAFANA, SLACK, TEAMS, CUSTOM"
|
||||
;;
|
||||
esac
|
||||
}
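# --- Illustrative usage (not part of the original scriptlet) ---
# Examples of driving the handler above directly; how it is wired into the
# apt-layer CLI is outside the scope of this scriptlet, and the policy file
# path is a placeholder:
#
#   handle_monitoring_command init
#   handle_monitoring_command check
#   handle_monitoring_command policy create disk-pressure ./disk-pressure.json
#   handle_monitoring_command history "" critical 7 table
#   handle_monitoring_command report weekly html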
|
||||
|
|
@@ -1,877 +0,0 @@
|
|||
#!/bin/bash
|
||||
# Cloud Integration Scriptlet for apt-layer
|
||||
# Provides cloud provider integrations (AWS, Azure, GCP) for cloud-native deployment
|
||||
|
||||
# Cloud integration functions
|
||||
cloud_integration_init() {
|
||||
log_info "Initializing cloud integration system..."
|
||||
|
||||
# Create cloud integration directories
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud/aws"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud/azure"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud/gcp"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud/configs"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud/credentials"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/cloud/deployments"
|
||||
|
||||
# Initialize cloud configuration database
|
||||
if [[ ! -f "${PARTICLE_WORKSPACE}/cloud/cloud-config.json" ]]; then
|
||||
cat > "${PARTICLE_WORKSPACE}/cloud/cloud-config.json" << 'EOF'
|
||||
{
|
||||
"providers": {
|
||||
"aws": {
|
||||
"enabled": false,
|
||||
"regions": [],
|
||||
"services": {
|
||||
"ecr": false,
|
||||
"s3": false,
|
||||
"ec2": false,
|
||||
"eks": false
|
||||
},
|
||||
"credentials": {
|
||||
"profile": "",
|
||||
"access_key": "",
|
||||
"secret_key": ""
|
||||
}
|
||||
},
|
||||
"azure": {
|
||||
"enabled": false,
|
||||
"subscriptions": [],
|
||||
"services": {
|
||||
"acr": false,
|
||||
"storage": false,
|
||||
"vm": false,
|
||||
"aks": false
|
||||
},
|
||||
"credentials": {
|
||||
"tenant_id": "",
|
||||
"client_id": "",
|
||||
"client_secret": ""
|
||||
}
|
||||
},
|
||||
"gcp": {
|
||||
"enabled": false,
|
||||
"projects": [],
|
||||
"services": {
|
||||
"gcr": false,
|
||||
"storage": false,
|
||||
"compute": false,
|
||||
"gke": false
|
||||
},
|
||||
"credentials": {
|
||||
"service_account": "",
|
||||
"project_id": ""
|
||||
}
|
||||
}
|
||||
},
|
||||
"deployments": [],
|
||||
"last_updated": ""
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
|
||||
log_success "Cloud integration system initialized"
|
||||
}
|
||||
|
||||
# AWS Integration Functions
|
||||
aws_init() {
|
||||
log_info "Initializing AWS integration..."
|
||||
|
||||
# Check for AWS CLI
|
||||
if ! command -v aws &> /dev/null; then
|
||||
log_error "AWS CLI not found. Please install awscli package."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check AWS credentials
|
||||
if ! aws sts get-caller-identity &> /dev/null; then
|
||||
log_warning "AWS credentials not configured. Please run 'aws configure' first."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Get AWS account info
|
||||
local account_id=$(aws sts get-caller-identity --query Account --output text)
|
||||
local user_arn=$(aws sts get-caller-identity --query Arn --output text)
|
||||
|
||||
log_info "AWS Account ID: ${account_id}"
|
||||
log_info "AWS User ARN: ${user_arn}"
|
||||
|
||||
# Update cloud config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg account_id "$account_id" --arg user_arn "$user_arn" \
|
||||
'.providers.aws.enabled = true | .providers.aws.account_id = $account_id | .providers.aws.user_arn = $user_arn' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "AWS integration initialized"
|
||||
}
|
||||
|
||||
aws_configure_services() {
|
||||
local services=("$@")
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
|
||||
log_info "Configuring AWS services: ${services[*]}"
|
||||
|
||||
for service in "${services[@]}"; do
|
||||
case "$service" in
|
||||
"ecr")
|
||||
aws_configure_ecr
|
||||
;;
|
||||
"s3")
|
||||
aws_configure_s3
|
||||
;;
|
||||
"ec2")
|
||||
aws_configure_ec2
|
||||
;;
|
||||
"eks")
|
||||
aws_configure_eks
|
||||
;;
|
||||
*)
|
||||
log_warning "Unknown AWS service: $service"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
}
|
||||
|
||||
aws_configure_ecr() {
|
||||
log_info "Configuring AWS ECR..."
|
||||
|
||||
# Get default region
|
||||
local region=$(aws configure get region)
|
||||
if [[ -z "$region" ]]; then
|
||||
region="us-east-1"
|
||||
log_info "Using default region: $region"
|
||||
fi
|
||||
|
||||
# Create ECR repository if it doesn't exist
|
||||
local repo_name="ubuntu-ublue-layers"
|
||||
if ! aws ecr describe-repositories --repository-names "$repo_name" --region "$region" &> /dev/null; then
|
||||
log_info "Creating ECR repository: $repo_name"
|
||||
aws ecr create-repository --repository-name "$repo_name" --region "$region"
|
||||
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg region "$region" --arg repo "$repo_name" \
|
||||
'.providers.aws.services.ecr = true | .providers.aws.ecr.region = $region | .providers.aws.ecr.repository = $repo' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "AWS ECR configured"
|
||||
}
|
||||
|
||||
aws_configure_s3() {
|
||||
log_info "Configuring AWS S3..."
|
||||
|
||||
# Get default region
|
||||
local region=$(aws configure get region)
|
||||
if [[ -z "$region" ]]; then
|
||||
region="us-east-1"
|
||||
fi
|
||||
|
||||
# Create S3 bucket if it doesn't exist
|
||||
local bucket_name="ubuntu-ublue-layers-$(date +%s)"
|
||||
if ! aws s3api head-bucket --bucket "$bucket_name" --region "$region" &> /dev/null; then
log_info "Creating S3 bucket: $bucket_name"
# Regions other than us-east-1 require an explicit LocationConstraint
if [[ "$region" == "us-east-1" ]]; then
aws s3api create-bucket --bucket "$bucket_name" --region "$region"
else
aws s3api create-bucket --bucket "$bucket_name" --region "$region" --create-bucket-configuration "LocationConstraint=$region"
fi
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg region "$region" --arg bucket "$bucket_name" \
|
||||
'.providers.aws.services.s3 = true | .providers.aws.s3.region = $region | .providers.aws.s3.bucket = $bucket' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "AWS S3 configured"
|
||||
}
|
||||
|
||||
aws_configure_ec2() {
|
||||
log_info "Configuring AWS EC2..."
|
||||
|
||||
# Get available regions
|
||||
local regions=$(aws ec2 describe-regions --query 'Regions[].RegionName' --output text)
|
||||
log_info "Available AWS regions: $regions"
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq '.providers.aws.services.ec2 = true' "$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "AWS EC2 configured"
|
||||
}
|
||||
|
||||
aws_configure_eks() {
|
||||
log_info "Configuring AWS EKS..."
|
||||
|
||||
# Check for kubectl
|
||||
if ! command -v kubectl &> /dev/null; then
|
||||
log_warning "kubectl not found. Please install kubectl for EKS integration."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq '.providers.aws.services.eks = true' "$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "AWS EKS configured"
|
||||
}
|
||||
|
||||
# Azure Integration Functions
|
||||
azure_init() {
|
||||
log_info "Initializing Azure integration..."
|
||||
|
||||
# Check for Azure CLI
|
||||
if ! command -v az &> /dev/null; then
|
||||
log_error "Azure CLI not found. Please install azure-cli package."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check Azure login
|
||||
if ! az account show &> /dev/null; then
|
||||
log_warning "Azure not logged in. Please run 'az login' first."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Get Azure account info
|
||||
local subscription_id=$(az account show --query id --output tsv)
|
||||
local tenant_id=$(az account show --query tenantId --output tsv)
|
||||
local user_name=$(az account show --query user.name --output tsv)
|
||||
|
||||
log_info "Azure Subscription ID: $subscription_id"
|
||||
log_info "Azure Tenant ID: $tenant_id"
|
||||
log_info "Azure User: $user_name"
|
||||
|
||||
# Update cloud config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg sub_id "$subscription_id" --arg tenant_id "$tenant_id" --arg user "$user_name" \
|
||||
'.providers.azure.enabled = true | .providers.azure.subscription_id = $sub_id | .providers.azure.tenant_id = $tenant_id | .providers.azure.user = $user' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "Azure integration initialized"
|
||||
}
|
||||
|
||||
azure_configure_services() {
|
||||
local services=("$@")
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
|
||||
log_info "Configuring Azure services: ${services[*]}"
|
||||
|
||||
for service in "${services[@]}"; do
|
||||
case "$service" in
|
||||
"acr")
|
||||
azure_configure_acr
|
||||
;;
|
||||
"storage")
|
||||
azure_configure_storage
|
||||
;;
|
||||
"vm")
|
||||
azure_configure_vm
|
||||
;;
|
||||
"aks")
|
||||
azure_configure_aks
|
||||
;;
|
||||
*)
|
||||
log_warning "Unknown Azure service: $service"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
}
|
||||
|
||||
azure_configure_acr() {
|
||||
log_info "Configuring Azure Container Registry..."
|
||||
|
||||
# Get resource group
|
||||
local resource_group="ubuntu-ublue-rg"
|
||||
local location="eastus"
|
||||
local acr_name="ubuntuublueacr$(date +%s)"
|
||||
|
||||
# Create resource group if it doesn't exist
|
||||
if ! az group show --name "$resource_group" &> /dev/null; then
|
||||
log_info "Creating resource group: $resource_group"
|
||||
az group create --name "$resource_group" --location "$location"
|
||||
fi
|
||||
|
||||
# Create ACR if it doesn't exist
|
||||
if ! az acr show --name "$acr_name" --resource-group "$resource_group" &> /dev/null; then
|
||||
log_info "Creating Azure Container Registry: $acr_name"
|
||||
az acr create --resource-group "$resource_group" --name "$acr_name" --sku Basic
|
||||
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg rg "$resource_group" --arg location "$location" --arg acr "$acr_name" \
|
||||
'.providers.azure.services.acr = true | .providers.azure.acr.resource_group = $rg | .providers.azure.acr.location = $location | .providers.azure.acr.name = $acr' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "Azure ACR configured"
|
||||
}
|
||||
|
||||
azure_configure_storage() {
|
||||
log_info "Configuring Azure Storage..."
|
||||
|
||||
local resource_group="ubuntu-ublue-rg"
|
||||
local location="eastus"
|
||||
local storage_account="ubuntuubluestorage$(date +%s)"
|
||||
|
||||
# Create storage account if it doesn't exist
|
||||
if ! az storage account show --name "$storage_account" --resource-group "$resource_group" &> /dev/null; then
|
||||
log_info "Creating storage account: $storage_account"
|
||||
az storage account create --resource-group "$resource_group" --name "$storage_account" --location "$location" --sku Standard_LRS
|
||||
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg rg "$resource_group" --arg location "$location" --arg sa "$storage_account" \
|
||||
'.providers.azure.services.storage = true | .providers.azure.storage.resource_group = $rg | .providers.azure.storage.location = $location | .providers.azure.storage.account = $sa' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "Azure Storage configured"
|
||||
}
|
||||
|
||||
azure_configure_vm() {
|
||||
log_info "Configuring Azure VM..."
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq '.providers.azure.services.vm = true' "$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "Azure VM configured"
|
||||
}
|
||||
|
||||
azure_configure_aks() {
|
||||
log_info "Configuring Azure AKS..."
|
||||
|
||||
# Check for kubectl
|
||||
if ! command -v kubectl &> /dev/null; then
|
||||
log_warning "kubectl not found. Please install kubectl for AKS integration."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq '.providers.azure.services.aks = true' "$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "Azure AKS configured"
|
||||
}
|
||||
|
||||
# GCP Integration Functions
|
||||
gcp_init() {
|
||||
log_info "Initializing GCP integration..."
|
||||
|
||||
# Check for gcloud CLI
|
||||
if ! command -v gcloud &> /dev/null; then
|
||||
log_error "Google Cloud CLI not found. Please install google-cloud-cli package."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check GCP authentication
|
||||
if ! gcloud auth list --filter=status:ACTIVE --format="value(account)" | grep -q .; then
|
||||
log_warning "GCP not authenticated. Please run 'gcloud auth login' first."
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Get GCP project info
|
||||
local project_id=$(gcloud config get-value project)
|
||||
local account=$(gcloud auth list --filter=status:ACTIVE --format="value(account)" | head -1)
|
||||
|
||||
log_info "GCP Project ID: $project_id"
|
||||
log_info "GCP Account: $account"
|
||||
|
||||
# Update cloud config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg project_id "$project_id" --arg account "$account" \
|
||||
'.providers.gcp.enabled = true | .providers.gcp.project_id = $project_id | .providers.gcp.account = $account' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "GCP integration initialized"
|
||||
}
|
||||
|
||||
gcp_configure_services() {
|
||||
local services=("$@")
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
|
||||
log_info "Configuring GCP services: ${services[*]}"
|
||||
|
||||
for service in "${services[@]}"; do
|
||||
case "$service" in
|
||||
"gcr")
|
||||
gcp_configure_gcr
|
||||
;;
|
||||
"storage")
|
||||
gcp_configure_storage
|
||||
;;
|
||||
"compute")
|
||||
gcp_configure_compute
|
||||
;;
|
||||
"gke")
|
||||
gcp_configure_gke
|
||||
;;
|
||||
*)
|
||||
log_warning "Unknown GCP service: $service"
|
||||
;;
|
||||
esac
|
||||
done
|
||||
}
|
||||
|
||||
gcp_configure_gcr() {
|
||||
log_info "Configuring Google Container Registry..."
|
||||
|
||||
local project_id=$(gcloud config get-value project)
|
||||
local region="us-central1"
|
||||
|
||||
# Enable Container Registry API
|
||||
gcloud services enable containerregistry.googleapis.com --project="$project_id"
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg project_id "$project_id" --arg region "$region" \
|
||||
'.providers.gcp.services.gcr = true | .providers.gcp.gcr.project_id = $project_id | .providers.gcp.gcr.region = $region' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "GCP Container Registry configured"
|
||||
}
|
||||
|
||||
gcp_configure_storage() {
|
||||
log_info "Configuring Google Cloud Storage..."
|
||||
|
||||
local project_id=$(gcloud config get-value project)
|
||||
local bucket_name="ubuntu-ublue-layers-$(date +%s)"
|
||||
local location="US"
|
||||
|
||||
# Create storage bucket if it doesn't exist
|
||||
if ! gsutil ls -b "gs://$bucket_name" &> /dev/null; then
|
||||
log_info "Creating storage bucket: $bucket_name"
|
||||
gsutil mb -p "$project_id" -c STANDARD -l "$location" "gs://$bucket_name"
|
||||
fi
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg project_id "$project_id" --arg bucket "$bucket_name" --arg location "$location" \
|
||||
'.providers.gcp.services.storage = true | .providers.gcp.storage.project_id = $project_id | .providers.gcp.storage.bucket = $bucket | .providers.gcp.storage.location = $location' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "GCP Storage configured"
|
||||
}
|
||||
|
||||
gcp_configure_compute() {
|
||||
log_info "Configuring Google Compute Engine..."
|
||||
|
||||
local project_id=$(gcloud config get-value project)
|
||||
|
||||
# Enable Compute Engine API
|
||||
gcloud services enable compute.googleapis.com --project="$project_id"
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg project_id "$project_id" \
|
||||
'.providers.gcp.services.compute = true | .providers.gcp.compute.project_id = $project_id' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "GCP Compute Engine configured"
|
||||
}
|
||||
|
||||
gcp_configure_gke() {
|
||||
log_info "Configuring Google Kubernetes Engine..."
|
||||
|
||||
# Check for kubectl
|
||||
if ! command -v kubectl &> /dev/null; then
|
||||
log_warning "kubectl not found. Please install kubectl for GKE integration."
|
||||
return 1
|
||||
fi
|
||||
|
||||
local project_id=$(gcloud config get-value project)
|
||||
|
||||
# Enable GKE API
|
||||
gcloud services enable container.googleapis.com --project="$project_id"
|
||||
|
||||
# Update config
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
jq --arg project_id "$project_id" \
|
||||
'.providers.gcp.services.gke = true | .providers.gcp.gke.project_id = $project_id' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
|
||||
log_success "GCP GKE configured"
|
||||
}
|
||||
|
||||
# Cloud Deployment Functions
|
||||
cloud_deploy_layer() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local service="$3"
|
||||
shift 3
|
||||
local options=("$@")
|
||||
|
||||
log_info "Deploying layer $layer_name to $provider $service"
|
||||
|
||||
case "$provider" in
|
||||
"aws")
|
||||
case "$service" in
|
||||
"ecr")
|
||||
aws_deploy_to_ecr "$layer_name" "${options[@]}"
|
||||
;;
|
||||
"s3")
|
||||
aws_deploy_to_s3 "$layer_name" "${options[@]}"
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown AWS service: $service"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
;;
|
||||
"azure")
|
||||
case "$service" in
|
||||
"acr")
|
||||
azure_deploy_to_acr "$layer_name" "${options[@]}"
|
||||
;;
|
||||
"storage")
|
||||
azure_deploy_to_storage "$layer_name" "${options[@]}"
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown Azure service: $service"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
;;
|
||||
"gcp")
|
||||
case "$service" in
|
||||
"gcr")
|
||||
gcp_deploy_to_gcr "$layer_name" "${options[@]}"
|
||||
;;
|
||||
"storage")
|
||||
gcp_deploy_to_storage "$layer_name" "${options[@]}"
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown GCP service: $service"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown cloud provider: $provider"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
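# --- Illustrative usage (not part of the original scriptlet) ---
# cloud_deploy_layer() above dispatches on provider/service pairs; the layer
# name is a placeholder:
#
#   cloud_deploy_layer my-base-layer aws ecr
#   cloud_deploy_layer my-base-layer azure storage
#   cloud_deploy_layer my-base-layer gcp gcr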
|
||||
|
||||
aws_deploy_to_ecr() {
|
||||
local layer_name="$1"
|
||||
shift
|
||||
local options=("$@")
|
||||
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local region=$(jq -r '.providers.aws.ecr.region' "$config_file")
|
||||
local repo=$(jq -r '.providers.aws.ecr.repository' "$config_file")
|
||||
local account_id=$(jq -r '.providers.aws.account_id' "$config_file")
|
||||
|
||||
log_info "Deploying $layer_name to AWS ECR"
|
||||
|
||||
# Get ECR login token
|
||||
aws ecr get-login-password --region "$region" | docker login --username AWS --password-stdin "$account_id.dkr.ecr.$region.amazonaws.com"
|
||||
|
||||
# Tag and push image
|
||||
local image_tag="$account_id.dkr.ecr.$region.amazonaws.com/$repo:$layer_name"
|
||||
docker tag "$layer_name" "$image_tag"
|
||||
docker push "$image_tag"
|
||||
|
||||
log_success "Layer $layer_name deployed to AWS ECR"
|
||||
}
|
||||
|
||||
aws_deploy_to_s3() {
|
||||
local layer_name="$1"
|
||||
shift
|
||||
local options=("$@")
|
||||
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local bucket=$(jq -r '.providers.aws.s3.bucket' "$config_file")
|
||||
local region=$(jq -r '.providers.aws.s3.region' "$config_file")
|
||||
|
||||
log_info "Deploying $layer_name to AWS S3"
|
||||
|
||||
# Create layer archive
|
||||
local archive_file="${PARTICLE_WORKSPACE}/cloud/deployments/${layer_name}.tar.gz"
|
||||
tar -czf "$archive_file" -C "${PARTICLE_WORKSPACE}/layers" "$layer_name"
|
||||
|
||||
# Upload to S3
|
||||
aws s3 cp "$archive_file" "s3://$bucket/layers/$layer_name.tar.gz" --region "$region"
|
||||
|
||||
log_success "Layer $layer_name deployed to AWS S3"
|
||||
}
|
||||
|
||||
azure_deploy_to_acr() {
|
||||
local layer_name="$1"
|
||||
shift
|
||||
local options=("$@")
|
||||
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local acr_name=$(jq -r '.providers.azure.acr.name' "$config_file")
|
||||
local resource_group=$(jq -r '.providers.azure.acr.resource_group' "$config_file")
|
||||
|
||||
log_info "Deploying $layer_name to Azure ACR"
|
||||
|
||||
# Get ACR login server
|
||||
local login_server=$(az acr show --name "$acr_name" --resource-group "$resource_group" --query loginServer --output tsv)
|
||||
|
||||
# Login to ACR
|
||||
az acr login --name "$acr_name"
|
||||
|
||||
# Tag and push image
|
||||
local image_tag="$login_server/$layer_name:latest"
|
||||
docker tag "$layer_name" "$image_tag"
|
||||
docker push "$image_tag"
|
||||
|
||||
log_success "Layer $layer_name deployed to Azure ACR"
|
||||
}
|
||||
|
||||
azure_deploy_to_storage() {
|
||||
local layer_name="$1"
|
||||
shift
|
||||
local options=("$@")
|
||||
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local storage_account=$(jq -r '.providers.azure.storage.account' "$config_file")
|
||||
local resource_group=$(jq -r '.providers.azure.storage.resource_group' "$config_file")
|
||||
|
||||
log_info "Deploying $layer_name to Azure Storage"
|
||||
|
||||
# Create layer archive
|
||||
local archive_file="${PARTICLE_WORKSPACE}/cloud/deployments/${layer_name}.tar.gz"
|
||||
tar -czf "$archive_file" -C "${PARTICLE_WORKSPACE}/layers" "$layer_name"
|
||||
|
||||
# Upload to Azure Storage
|
||||
az storage blob upload --account-name "$storage_account" --container-name layers --name "$layer_name.tar.gz" --file "$archive_file"
|
||||
|
||||
log_success "Layer $layer_name deployed to Azure Storage"
|
||||
}
|
||||
|
||||
gcp_deploy_to_gcr() {
|
||||
local layer_name="$1"
|
||||
shift
|
||||
local options=("$@")
|
||||
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local project_id=$(jq -r '.providers.gcp.gcr.project_id' "$config_file")
|
||||
local region=$(jq -r '.providers.gcp.gcr.region' "$config_file")
|
||||
|
||||
log_info "Deploying $layer_name to Google Container Registry"
|
||||
|
||||
# Configure docker for GCR
|
||||
gcloud auth configure-docker --project="$project_id"
|
||||
|
||||
# Tag and push image
|
||||
local image_tag="gcr.io/$project_id/$layer_name:latest"
|
||||
docker tag "$layer_name" "$image_tag"
|
||||
docker push "$image_tag"
|
||||
|
||||
log_success "Layer $layer_name deployed to Google Container Registry"
|
||||
}
|
||||
|
||||
gcp_deploy_to_storage() {
|
||||
local layer_name="$1"
|
||||
shift
|
||||
local options=("$@")
|
||||
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local bucket=$(jq -r '.providers.gcp.storage.bucket' "$config_file")
|
||||
local project_id=$(jq -r '.providers.gcp.storage.project_id' "$config_file")
|
||||
|
||||
log_info "Deploying $layer_name to Google Cloud Storage"
|
||||
|
||||
# Create layer archive
|
||||
local archive_file="${PARTICLE_WORKSPACE}/cloud/deployments/${layer_name}.tar.gz"
|
||||
tar -czf "$archive_file" -C "${PARTICLE_WORKSPACE}/layers" "$layer_name"
|
||||
|
||||
# Upload to GCS
|
||||
gsutil cp "$archive_file" "gs://$bucket/layers/$layer_name.tar.gz"
|
||||
|
||||
log_success "Layer $layer_name deployed to Google Cloud Storage"
|
||||
}
|
||||
|
||||
# Cloud Status and Management Functions
|
||||
cloud_status() {
|
||||
local provider="$1"
|
||||
|
||||
if [[ -z "$provider" ]]; then
|
||||
log_info "Cloud integration status:"
|
||||
echo
|
||||
cloud_status_aws
|
||||
echo
|
||||
cloud_status_azure
|
||||
echo
|
||||
cloud_status_gcp
|
||||
return 0
|
||||
fi
|
||||
|
||||
case "$provider" in
|
||||
"aws")
|
||||
cloud_status_aws
|
||||
;;
|
||||
"azure")
|
||||
cloud_status_azure
|
||||
;;
|
||||
"gcp")
|
||||
cloud_status_gcp
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown cloud provider: $provider"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
cloud_status_aws() {
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local enabled=$(jq -r '.providers.aws.enabled' "$config_file")
|
||||
|
||||
echo "AWS Integration:"
|
||||
if [[ "$enabled" == "true" ]]; then
|
||||
echo " Status: ${GREEN}Enabled${NC}"
|
||||
local account_id=$(jq -r '.providers.aws.account_id' "$config_file")
|
||||
echo " Account ID: $account_id"
|
||||
|
||||
# Check services
|
||||
local services=$(jq -r '.providers.aws.services | to_entries[] | select(.value == true) | .key' "$config_file")
|
||||
if [[ -n "$services" ]]; then
|
||||
echo " Enabled Services:"
|
||||
echo "$services" | while read -r service; do
|
||||
echo " - $service"
|
||||
done
|
||||
fi
|
||||
else
|
||||
echo " Status: ${RED}Disabled${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
cloud_status_azure() {
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local enabled=$(jq -r '.providers.azure.enabled' "$config_file")
|
||||
|
||||
echo "Azure Integration:"
|
||||
if [[ "$enabled" == "true" ]]; then
|
||||
echo " Status: ${GREEN}Enabled${NC}"
|
||||
local subscription_id=$(jq -r '.providers.azure.subscription_id' "$config_file")
|
||||
echo " Subscription ID: $subscription_id"
|
||||
|
||||
# Check services
|
||||
local services=$(jq -r '.providers.azure.services | to_entries[] | select(.value == true) | .key' "$config_file")
|
||||
if [[ -n "$services" ]]; then
|
||||
echo " Enabled Services:"
|
||||
echo "$services" | while read -r service; do
|
||||
echo " - $service"
|
||||
done
|
||||
fi
|
||||
else
|
||||
echo " Status: ${RED}Disabled${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
cloud_status_gcp() {
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local enabled=$(jq -r '.providers.gcp.enabled' "$config_file")
|
||||
|
||||
echo "GCP Integration:"
|
||||
if [[ "$enabled" == "true" ]]; then
|
||||
echo " Status: ${GREEN}Enabled${NC}"
|
||||
local project_id=$(jq -r '.providers.gcp.project_id' "$config_file")
|
||||
echo " Project ID: $project_id"
|
||||
|
||||
# Check services
|
||||
local services=$(jq -r '.providers.gcp.services | to_entries[] | select(.value == true) | .key' "$config_file")
|
||||
if [[ -n "$services" ]]; then
|
||||
echo " Enabled Services:"
|
||||
echo "$services" | while read -r service; do
|
||||
echo " - $service"
|
||||
done
|
||||
fi
|
||||
else
|
||||
echo " Status: ${RED}Disabled${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
cloud_list_deployments() {
|
||||
local config_file="${PARTICLE_WORKSPACE}/cloud/cloud-config.json"
|
||||
local deployments_file="${PARTICLE_WORKSPACE}/cloud/deployments/deployments.json"
|
||||
|
||||
if [[ ! -f "$deployments_file" ]]; then
|
||||
log_info "No deployments found"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Cloud deployments:"
|
||||
jq -r '.deployments[] | "\(.layer_name) -> \(.provider)/\(.service) (\(.timestamp))"' "$deployments_file"
|
||||
}
|
||||
|
||||
# Cloud cleanup functions
|
||||
cloud_cleanup() {
|
||||
local provider="$1"
|
||||
local service="$2"
|
||||
|
||||
log_info "Cleaning up cloud resources"
|
||||
|
||||
case "$provider" in
|
||||
"aws")
|
||||
aws_cleanup "$service"
|
||||
;;
|
||||
"azure")
|
||||
azure_cleanup "$service"
|
||||
;;
|
||||
"gcp")
|
||||
gcp_cleanup "$service"
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown cloud provider: $provider"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
aws_cleanup() {
|
||||
local service="$1"
|
||||
|
||||
case "$service" in
|
||||
"ecr")
|
||||
log_info "Cleaning up AWS ECR resources"
|
||||
# Implementation for ECR cleanup
|
||||
;;
|
||||
"s3")
|
||||
log_info "Cleaning up AWS S3 resources"
|
||||
# Implementation for S3 cleanup
|
||||
;;
|
||||
*)
|
||||
log_warning "Unknown AWS service for cleanup: $service"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
azure_cleanup() {
|
||||
local service="$1"
|
||||
|
||||
case "$service" in
|
||||
"acr")
|
||||
log_info "Cleaning up Azure ACR resources"
|
||||
# Implementation for ACR cleanup
|
||||
;;
|
||||
"storage")
|
||||
log_info "Cleaning up Azure Storage resources"
|
||||
# Implementation for Storage cleanup
|
||||
;;
|
||||
*)
|
||||
log_warning "Unknown Azure service for cleanup: $service"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
gcp_cleanup() {
|
||||
local service="$1"
|
||||
|
||||
case "$service" in
|
||||
"gcr")
|
||||
log_info "Cleaning up GCP Container Registry resources"
|
||||
# Implementation for GCR cleanup
|
||||
;;
|
||||
"storage")
|
||||
log_info "Cleaning up GCP Storage resources"
|
||||
# Implementation for Storage cleanup
|
||||
;;
|
||||
*)
|
||||
log_warning "Unknown GCP service for cleanup: $service"
|
||||
;;
|
||||
esac
|
||||
}
|
||||
File diff suppressed because it is too large
File diff suppressed because it is too large
|
|
@@ -1,135 +0,0 @@
|
|||
#!/bin/bash
|
||||
# Multi-Cloud Deployment Scriptlet for apt-layer
|
||||
# Provides unified multi-cloud deployment, migration, and management
|
||||
|
||||
# === Initialization ===
|
||||
multicloud_init() {
|
||||
log_info "Initializing multi-cloud deployment system..."
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/multicloud"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/multicloud/profiles"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/multicloud/deployments"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/multicloud/migrations"
|
||||
mkdir -p "${PARTICLE_WORKSPACE}/multicloud/logs"
|
||||
# Create config if missing
|
||||
if [[ ! -f "${PARTICLE_WORKSPACE}/multicloud/multicloud-config.json" ]]; then
|
||||
cat > "${PARTICLE_WORKSPACE}/multicloud/multicloud-config.json" << 'EOF'
|
||||
{
|
||||
"profiles": {},
|
||||
"deployments": {},
|
||||
"migrations": {},
|
||||
"policies": {},
|
||||
"last_updated": ""
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
log_success "Multi-cloud deployment system initialized"
|
||||
}
|
||||
|
||||
# === Cloud Profile Management ===
|
||||
multicloud_add_profile() {
|
||||
local provider="$1"
|
||||
local profile_name="$2"
|
||||
local credentials_file="$3"
|
||||
if [[ -z "$provider" || -z "$profile_name" || -z "$credentials_file" ]]; then
|
||||
log_error "Provider, profile name, and credentials file required"
|
||||
return 1
|
||||
fi
|
||||
log_info "Adding multi-cloud profile: $profile_name ($provider)"
|
||||
local config_file="${PARTICLE_WORKSPACE}/multicloud/multicloud-config.json"
|
||||
jq --arg provider "$provider" --arg name "$profile_name" --arg creds "$credentials_file" \
|
||||
'.profiles[$name] = {"provider": $provider, "credentials": $creds, "created": now}' \
|
||||
"$config_file" > "${config_file}.tmp" && mv "${config_file}.tmp" "$config_file"
|
||||
log_success "Profile $profile_name added"
|
||||
}
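# --- Illustrative usage (not part of the original scriptlet) ---
# Example of registering profiles with multicloud_add_profile() above; the
# credential file paths are placeholders:
#
#   multicloud_add_profile aws prod-aws /etc/apt-layer/creds/aws-prod.json
#   multicloud_add_profile azure prod-azure /etc/apt-layer/creds/azure-prod.json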
|
||||
|
||||
multicloud_list_profiles() {
|
||||
local config_file="${PARTICLE_WORKSPACE}/multicloud/multicloud-config.json"
|
||||
jq '.profiles' "$config_file"
|
||||
}
|
||||
|
||||
# === Unified Multi-Cloud Deployment ===
|
||||
multicloud_deploy() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local profile_name="$3"
|
||||
local region="$4"
|
||||
local options="$5"
|
||||
if [[ -z "$layer_name" || -z "$provider" ]]; then
|
||||
log_error "Layer name and provider required for multi-cloud deployment"
|
||||
return 1
|
||||
fi
|
||||
log_info "Deploying $layer_name to $provider (profile: $profile_name, region: $region)"
|
||||
case "$provider" in
|
||||
aws)
|
||||
multicloud_deploy_aws "$layer_name" "$profile_name" "$region" "$options"
|
||||
;;
|
||||
azure)
|
||||
multicloud_deploy_azure "$layer_name" "$profile_name" "$region" "$options"
|
||||
;;
|
||||
gcp)
|
||||
multicloud_deploy_gcp "$layer_name" "$profile_name" "$region" "$options"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported provider: $provider"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
multicloud_deploy_aws() {
|
||||
local layer_name="$1"; local profile="$2"; local region="$3"; local options="$4"
|
||||
log_info "[AWS] Deploying $layer_name (profile: $profile, region: $region)"
|
||||
# TODO: Implement AWS deployment logic
|
||||
log_success "[AWS] Deployment stub complete"
|
||||
}
|
||||
|
||||
multicloud_deploy_azure() {
|
||||
local layer_name="$1"; local profile="$2"; local region="$3"; local options="$4"
|
||||
log_info "[Azure] Deploying $layer_name (profile: $profile, region: $region)"
|
||||
# TODO: Implement Azure deployment logic
|
||||
log_success "[Azure] Deployment stub complete"
|
||||
}
|
||||
|
||||
multicloud_deploy_gcp() {
|
||||
local layer_name="$1"; local profile="$2"; local region="$3"; local options="$4"
|
||||
log_info "[GCP] Deploying $layer_name (profile: $profile, region: $region)"
|
||||
# TODO: Implement GCP deployment logic
|
||||
log_success "[GCP] Deployment stub complete"
|
||||
}
|
||||
|
||||
# === Cross-Cloud Migration ===
|
||||
multicloud_migrate() {
|
||||
local layer_name="$1"
|
||||
local from_provider="$2"
|
||||
local to_provider="$3"
|
||||
local options="$4"
|
||||
if [[ -z "$layer_name" || -z "$from_provider" || -z "$to_provider" ]]; then
|
||||
log_error "Layer, from_provider, and to_provider required for migration"
|
||||
return 1
|
||||
fi
|
||||
log_info "Migrating $layer_name from $from_provider to $to_provider"
|
||||
# TODO: Implement migration logic (export, transfer, import)
|
||||
log_success "Migration stub complete"
|
||||
}
|
||||
|
||||
# === Multi-Cloud Status and Reporting ===
|
||||
multicloud_status() {
|
||||
local config_file="${PARTICLE_WORKSPACE}/multicloud/multicloud-config.json"
|
||||
echo "Multi-Cloud Profiles:"
|
||||
jq -r '.profiles | to_entries[] | " - \(.key): \(.value.provider)"' "$config_file"
|
||||
echo
|
||||
echo "Deployments:"
|
||||
jq -r '.deployments | to_entries[] | " - \(.key): \(.value.provider) (status: \(.value.status))"' "$config_file"
|
||||
echo
|
||||
echo "Migrations:"
|
||||
jq -r '.migrations | to_entries[] | " - \(.key): \(.value.from) -> \(.value.to) (status: \(.value.status))"' "$config_file"
|
||||
}
|
||||
|
||||
# === Policy-Driven Placement (Stub) ===
|
||||
multicloud_policy_apply() {
|
||||
local policy_name="$1"
|
||||
local layer_name="$2"
|
||||
log_info "Applying policy $policy_name to $layer_name"
|
||||
# TODO: Implement policy-driven placement logic
|
||||
log_success "Policy application stub complete"
|
||||
}
|
||||
|
|
@@ -1,722 +0,0 @@
|
|||
#!/bin/bash
|
||||
# Cloud-Native Security Features for apt-layer
|
||||
# Provides cloud workload security scanning, cloud provider security service integration,
|
||||
# policy enforcement, and automated vulnerability detection for cloud deployments.
|
||||
|
||||
# ============================================================================
|
||||
# CLOUD-NATIVE SECURITY FUNCTIONS
|
||||
# ============================================================================
|
||||
|
||||
# Initialize cloud security system
|
||||
cloud_security_init() {
|
||||
log_info "Initializing cloud security system..." "apt-layer"
|
||||
|
||||
# Create cloud security directories
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
mkdir -p "$cloud_security_dir"/{scans,policies,reports,integrations}
|
||||
|
||||
# Create cloud security configuration
|
||||
local config_file="$cloud_security_dir/cloud-security-config.json"
|
||||
if [[ ! -f "$config_file" ]]; then
|
||||
cat > "$config_file" << 'EOF'
|
||||
{
|
||||
"enabled_providers": ["aws", "azure", "gcp"],
|
||||
"scan_settings": {
|
||||
"container_scanning": true,
|
||||
"image_scanning": true,
|
||||
"layer_scanning": true,
|
||||
"infrastructure_scanning": true,
|
||||
"compliance_scanning": true
|
||||
},
|
||||
"policy_enforcement": {
|
||||
"iam_policies": true,
|
||||
"network_policies": true,
|
||||
"compliance_policies": true,
|
||||
"auto_remediation": false
|
||||
},
|
||||
"integrations": {
|
||||
"aws_inspector": false,
|
||||
"azure_defender": false,
|
||||
"gcp_security_center": false,
|
||||
"third_party_scanners": []
|
||||
},
|
||||
"reporting": {
|
||||
"html_reports": true,
|
||||
"json_reports": true,
|
||||
"email_alerts": false,
|
||||
"webhook_alerts": false
|
||||
},
|
||||
"retention": {
|
||||
"scan_reports_days": 30,
|
||||
"policy_violations_days": 90,
|
||||
"security_events_days": 365
|
||||
}
|
||||
}
|
||||
EOF
|
||||
log_info "Created cloud security configuration: $config_file" "apt-layer"
|
||||
fi
|
||||
|
||||
# Create policy templates
|
||||
local policies_dir="$cloud_security_dir/policies"
|
||||
mkdir -p "$policies_dir"
|
||||
|
||||
# IAM Policy Template
|
||||
cat > "$policies_dir/iam-policy-template.json" << 'EOF'
|
||||
{
|
||||
"name": "default-iam-policy",
|
||||
"description": "Default IAM policy for apt-layer deployments",
|
||||
"rules": [
|
||||
{
|
||||
"name": "least-privilege",
|
||||
"description": "Enforce least privilege access",
|
||||
"severity": "high",
|
||||
"enabled": true
|
||||
},
|
||||
{
|
||||
"name": "no-root-access",
|
||||
"description": "Prevent root access to resources",
|
||||
"severity": "critical",
|
||||
"enabled": true
|
||||
},
|
||||
{
|
||||
"name": "mfa-required",
|
||||
"description": "Require multi-factor authentication",
|
||||
"severity": "high",
|
||||
"enabled": true
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
|
||||
# Network Policy Template
|
||||
cat > "$policies_dir/network-policy-template.json" << 'EOF'
|
||||
{
|
||||
"name": "default-network-policy",
|
||||
"description": "Default network policy for apt-layer deployments",
|
||||
"rules": [
|
||||
{
|
||||
"name": "secure-ports-only",
|
||||
"description": "Allow only secure ports (22, 80, 443, 8080)",
|
||||
"severity": "medium",
|
||||
"enabled": true
|
||||
},
|
||||
{
|
||||
"name": "no-public-access",
|
||||
"description": "Prevent public access to sensitive resources",
|
||||
"severity": "high",
|
||||
"enabled": true
|
||||
},
|
||||
{
|
||||
"name": "vpc-isolation",
|
||||
"description": "Enforce VPC isolation",
|
||||
"severity": "medium",
|
||||
"enabled": true
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
|
||||
# Compliance Policy Template
|
||||
cat > "$policies_dir/compliance-policy-template.json" << 'EOF'
|
||||
{
|
||||
"name": "default-compliance-policy",
|
||||
"description": "Default compliance policy for apt-layer deployments",
|
||||
"frameworks": {
|
||||
"sox": {
|
||||
"enabled": true,
|
||||
"controls": ["access-control", "audit-logging", "data-protection"]
|
||||
},
|
||||
"pci-dss": {
|
||||
"enabled": true,
|
||||
"controls": ["network-security", "access-control", "vulnerability-management"]
|
||||
},
|
||||
"hipaa": {
|
||||
"enabled": false,
|
||||
"controls": ["privacy", "security", "breach-notification"]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
log_info "Cloud security system initialized successfully" "apt-layer"
|
||||
log_info "Configuration: $config_file" "apt-layer"
|
||||
log_info "Policies: $policies_dir" "apt-layer"
|
||||
}
|
||||
|
||||
# Scan cloud workload for security vulnerabilities
|
||||
cloud_security_scan_workload() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local scan_type="${3:-comprehensive}"
|
||||
|
||||
log_info "Starting cloud security scan for layer: $layer_name (Provider: $provider, Type: $scan_type)" "apt-layer"
|
||||
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
local scan_dir="$cloud_security_dir/scans"
|
||||
local timestamp=$(date +%Y%m%d_%H%M%S)
|
||||
local scan_id="${layer_name//\//_}_${provider}_${timestamp}"
|
||||
local scan_file="$scan_dir/${scan_id}.json"
|
||||
|
||||
# Create scan result structure
|
||||
local scan_result=$(cat << EOF
|
||||
{
|
||||
"scan_id": "$scan_id",
|
||||
"layer_name": "$layer_name",
|
||||
"provider": "$provider",
|
||||
"scan_type": "$scan_type",
|
||||
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"status": "running",
|
||||
"findings": [],
|
||||
"summary": {
|
||||
"total_findings": 0,
|
||||
"critical": 0,
|
||||
"high": 0,
|
||||
"medium": 0,
|
||||
"low": 0,
|
||||
"info": 0
|
||||
}
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
echo "$scan_result" > "$scan_file"
|
||||
|
||||
case "$scan_type" in
|
||||
"container")
|
||||
cloud_security_scan_container "$layer_name" "$provider" "$scan_file"
|
||||
;;
|
||||
"image")
|
||||
cloud_security_scan_image "$layer_name" "$provider" "$scan_file"
|
||||
;;
|
||||
"infrastructure")
|
||||
cloud_security_scan_infrastructure "$layer_name" "$provider" "$scan_file"
|
||||
;;
|
||||
"compliance")
|
||||
cloud_security_scan_compliance "$layer_name" "$provider" "$scan_file"
|
||||
;;
|
||||
"comprehensive")
|
||||
cloud_security_scan_container "$layer_name" "$provider" "$scan_file"
|
||||
cloud_security_scan_image "$layer_name" "$provider" "$scan_file"
|
||||
cloud_security_scan_infrastructure "$layer_name" "$provider" "$scan_file"
|
||||
cloud_security_scan_compliance "$layer_name" "$provider" "$scan_file"
|
||||
;;
|
||||
*)
|
||||
log_error "Invalid scan type: $scan_type" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
# Update scan status to completed
|
||||
jq '.status = "completed"' "$scan_file" > "${scan_file}.tmp" && mv "${scan_file}.tmp" "$scan_file"
|
||||
|
||||
# Generate summary
|
||||
local total_findings=$(jq '.findings | length' "$scan_file")
|
||||
local critical=$(jq '.findings | map(select(.severity == "critical")) | length' "$scan_file")
|
||||
local high=$(jq '.findings | map(select(.severity == "high")) | length' "$scan_file")
|
||||
local medium=$(jq '.findings | map(select(.severity == "medium")) | length' "$scan_file")
|
||||
local low=$(jq '.findings | map(select(.severity == "low")) | length' "$scan_file")
|
||||
local info=$(jq '.findings | map(select(.severity == "info")) | length' "$scan_file")
|
||||
|
||||
jq --arg total "$total_findings" \
|
||||
--arg critical "$critical" \
|
||||
--arg high "$high" \
|
||||
--arg medium "$medium" \
|
||||
--arg low "$low" \
|
||||
--arg info "$info" \
|
||||
'.summary.total_findings = ($total | tonumber) |
|
||||
.summary.critical = ($critical | tonumber) |
|
||||
.summary.high = ($high | tonumber) |
|
||||
.summary.medium = ($medium | tonumber) |
|
||||
.summary.low = ($low | tonumber) |
|
||||
.summary.info = ($info | tonumber)' "$scan_file" > "${scan_file}.tmp" && mv "${scan_file}.tmp" "$scan_file"
|
||||
|
||||
log_info "Cloud security scan completed: $scan_file" "apt-layer"
|
||||
log_info "Findings: $total_findings total ($critical critical, $high high, $medium medium, $low low, $info info)" "apt-layer"
|
||||
|
||||
# Generate HTML report
|
||||
cloud_security_generate_report "$scan_file" "html"
|
||||
|
||||
echo "$scan_file"
|
||||
}
|
||||
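# Illustrative usage (layer name and provider below are hypothetical examples, not
# values shipped with apt-layer):
#   cloud_security_init
#   scan_file=$(cloud_security_scan_workload "ubuntu-ublue/base/24.04" "aws" "comprehensive")
#   jq '.summary' "$scan_file"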
|
||||
# Scan container security
|
||||
cloud_security_scan_container() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local scan_file="$3"
|
||||
|
||||
log_info "Scanning container security for layer: $layer_name" "apt-layer"
|
||||
|
||||
# Simulate container security findings
|
||||
local findings=(
|
||||
'{"id": "CONTAINER-001", "title": "Container running as root", "description": "Container is configured to run as root user", "severity": "high", "category": "privilege-escalation", "remediation": "Use non-root user in container"}'
|
||||
'{"id": "CONTAINER-002", "title": "Missing security context", "description": "Container lacks proper security context configuration", "severity": "medium", "category": "configuration", "remediation": "Configure security context with appropriate settings"}'
|
||||
'{"id": "CONTAINER-003", "title": "Unnecessary capabilities", "description": "Container has unnecessary Linux capabilities enabled", "severity": "medium", "category": "privilege-escalation", "remediation": "Drop unnecessary capabilities"}'
|
||||
)
|
||||
|
||||
for finding in "${findings[@]}"; do
|
||||
jq --argjson finding "$finding" '.findings += [$finding]' "$scan_file" > "${scan_file}.tmp" && mv "${scan_file}.tmp" "$scan_file"
|
||||
done
|
||||
}
|
||||
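# The findings above are simulated placeholders. A sketch of how a real scanner could
# populate raw results instead (assumes the external `trivy` CLI is installed and that
# layers live under the illustrative path below; adapt to the scanner actually deployed):
#   if command -v trivy > /dev/null 2>&1; then
#       trivy rootfs --format json --severity HIGH,CRITICAL \
#           "/var/lib/particle-os/layers/$layer_name" > "${scan_file}.trivy.json"
#   fi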
|
||||
# Scan image security
|
||||
cloud_security_scan_image() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local scan_file="$3"
|
||||
|
||||
log_info "Scanning image security for layer: $layer_name" "apt-layer"
|
||||
|
||||
# Simulate image security findings
|
||||
local findings=(
|
||||
'{"id": "IMAGE-001", "title": "Vulnerable base image", "description": "Base image contains known vulnerabilities", "severity": "critical", "category": "vulnerability", "remediation": "Update to latest base image version"}'
|
||||
'{"id": "IMAGE-002", "title": "Sensitive data in image", "description": "Image contains sensitive data or secrets", "severity": "high", "category": "data-exposure", "remediation": "Remove sensitive data and use secrets management"}'
|
||||
'{"id": "IMAGE-003", "title": "Large image size", "description": "Image size exceeds recommended limits", "severity": "low", "category": "performance", "remediation": "Optimize image layers and remove unnecessary files"}'
|
||||
)
|
||||
|
||||
for finding in "${findings[@]}"; do
|
||||
jq --argjson finding "$finding" '.findings += [$finding]' "$scan_file" > "${scan_file}.tmp" && mv "${scan_file}.tmp" "$scan_file"
|
||||
done
|
||||
}
|
||||
|
||||
# Scan infrastructure security
|
||||
cloud_security_scan_infrastructure() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local scan_file="$3"
|
||||
|
||||
log_info "Scanning infrastructure security for layer: $layer_name" "apt-layer"
|
||||
|
||||
# Simulate infrastructure security findings
|
||||
local findings=(
|
||||
'{"id": "INFRA-001", "title": "Public access enabled", "description": "Resource is publicly accessible", "severity": "high", "category": "network-security", "remediation": "Restrict access to private networks only"}'
|
||||
'{"id": "INFRA-002", "title": "Weak IAM policies", "description": "IAM policies are too permissive", "severity": "high", "category": "access-control", "remediation": "Apply principle of least privilege"}'
|
||||
'{"id": "INFRA-003", "title": "Missing encryption", "description": "Data is not encrypted at rest", "severity": "medium", "category": "data-protection", "remediation": "Enable encryption for all data storage"}'
|
||||
)
|
||||
|
||||
for finding in "${findings[@]}"; do
|
||||
jq --argjson finding "$finding" '.findings += [$finding]' "$scan_file" > "${scan_file}.tmp" && mv "${scan_file}.tmp" "$scan_file"
|
||||
done
|
||||
}
|
||||
|
||||
# Scan compliance
|
||||
cloud_security_scan_compliance() {
|
||||
local layer_name="$1"
|
||||
local provider="$2"
|
||||
local scan_file="$3"
|
||||
|
||||
log_info "Scanning compliance for layer: $layer_name" "apt-layer"
|
||||
|
||||
# Simulate compliance findings
|
||||
local findings=(
|
||||
'{"id": "COMPLIANCE-001", "title": "SOX Control Failure", "description": "Access control logging not properly configured", "severity": "high", "category": "sox", "remediation": "Enable comprehensive access logging"}'
|
||||
'{"id": "COMPLIANCE-002", "title": "PCI-DSS Violation", "description": "Cardholder data not properly encrypted", "severity": "critical", "category": "pci-dss", "remediation": "Implement encryption for all cardholder data"}'
|
||||
'{"id": "COMPLIANCE-003", "title": "GDPR Compliance Issue", "description": "Data retention policy not defined", "severity": "medium", "category": "gdpr", "remediation": "Define and implement data retention policies"}'
|
||||
)
|
||||
|
||||
for finding in "${findings[@]}"; do
|
||||
jq --argjson finding "$finding" '.findings += [$finding]' "$scan_file" > "${scan_file}.tmp" && mv "${scan_file}.tmp" "$scan_file"
|
||||
done
|
||||
}
|
||||
|
||||
# Check policy compliance
|
||||
cloud_security_check_policy() {
|
||||
local layer_name="$1"
|
||||
local policy_name="$2"
|
||||
local provider="$3"
|
||||
|
||||
log_info "Checking policy compliance for layer: $layer_name (Policy: $policy_name, Provider: $provider)" "apt-layer"
|
||||
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
local policies_dir="$cloud_security_dir/policies"
|
||||
local policy_file="$policies_dir/${policy_name}.json"
|
||||
|
||||
if [[ ! -f "$policy_file" ]]; then
|
||||
log_error "Policy file not found: $policy_file" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local timestamp=$(date +%Y%m%d_%H%M%S)
|
||||
local check_id="${layer_name//\//_}_${policy_name}_${timestamp}"
|
||||
local check_file="$cloud_security_dir/reports/${check_id}.json"
|
||||
|
||||
# Create policy check result
|
||||
local check_result=$(cat << EOF
|
||||
{
|
||||
"check_id": "$check_id",
|
||||
"layer_name": "$layer_name",
|
||||
"policy_name": "$policy_name",
|
||||
"provider": "$provider",
|
||||
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"status": "completed",
|
||||
"compliance": true,
|
||||
"violations": [],
|
||||
"summary": {
|
||||
"total_rules": 0,
|
||||
"passed": 0,
|
||||
"failed": 0,
|
||||
"warnings": 0
|
||||
}
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
echo "$check_result" > "$check_file"
|
||||
|
||||
# Simulate policy violations
|
||||
local violations=(
|
||||
'{"rule": "least-privilege", "description": "IAM policy too permissive", "severity": "high", "remediation": "Restrict IAM permissions"}'
|
||||
'{"rule": "network-isolation", "description": "Public access not restricted", "severity": "medium", "remediation": "Configure private network access"}'
|
||||
)
|
||||
|
||||
local total_rules=5
|
||||
local passed=3
|
||||
local failed=2
|
||||
local warnings=0
|
||||
|
||||
for violation in "${violations[@]}"; do
|
||||
jq --argjson violation "$violation" '.violations += [$violation]' "$check_file" > "${check_file}.tmp" && mv "${check_file}.tmp" "$check_file"
|
||||
done
|
||||
|
||||
# Update compliance status
|
||||
if [[ $failed -gt 0 ]]; then
|
||||
jq '.compliance = false' "$check_file" > "${check_file}.tmp" && mv "${check_file}.tmp" "$check_file"
|
||||
fi
|
||||
|
||||
# Update summary
|
||||
jq --arg total "$total_rules" \
|
||||
--arg passed "$passed" \
|
||||
--arg failed "$failed" \
|
||||
--arg warnings "$warnings" \
|
||||
'.summary.total_rules = ($total | tonumber) |
|
||||
.summary.passed = ($passed | tonumber) |
|
||||
.summary.failed = ($failed | tonumber) |
|
||||
.summary.warnings = ($warnings | tonumber)' "$check_file" > "${check_file}.tmp" && mv "${check_file}.tmp" "$check_file"
|
||||
|
||||
log_info "Policy compliance check completed: $check_file" "apt-layer"
|
||||
log_info "Compliance: $([[ $failed -eq 0 ]] && echo "PASSED" || echo "FAILED") ($passed/$total_rules rules passed)" "apt-layer"
|
||||
|
||||
# Generate HTML report
|
||||
cloud_security_generate_policy_report "$check_file" "html"
|
||||
|
||||
echo "$check_file"
|
||||
}
|
||||
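# Illustrative usage (hypothetical layer name, using one of the bundled policy templates;
# the second argument must match a JSON file under $policies_dir without the .json suffix):
#   cloud_security_check_policy "ubuntu-ublue/base/24.04" "iam-policy-template" "aws"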
|
||||
# Generate security report
|
||||
cloud_security_generate_report() {
|
||||
local scan_file="$1"
|
||||
local format="${2:-html}"
|
||||
|
||||
if [[ ! -f "$scan_file" ]]; then
|
||||
log_error "Scan file not found: $scan_file" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
local reports_dir="$cloud_security_dir/reports"
|
||||
local scan_data=$(cat "$scan_file")
|
||||
|
||||
case "$format" in
|
||||
"html")
|
||||
local report_file="${scan_file%.json}.html"
|
||||
cloud_security_generate_html_report "$scan_data" "$report_file"
|
||||
log_info "HTML report generated: $report_file" "apt-layer"
|
||||
;;
|
||||
"json")
|
||||
# JSON report is already the scan file
|
||||
log_info "JSON report available: $scan_file" "apt-layer"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported report format: $format" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
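# Illustrative usage (the scan file path is whatever cloud_security_scan_workload returned):
#   cloud_security_generate_report "$scan_file" "html"
#   cloud_security_generate_report "$scan_file" "json"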
|
||||
# Generate HTML security report
|
||||
cloud_security_generate_html_report() {
|
||||
local scan_data="$1"
|
||||
local report_file="$2"
|
||||
|
||||
local layer_name=$(echo "$scan_data" | jq -r '.layer_name')
|
||||
local provider=$(echo "$scan_data" | jq -r '.provider')
|
||||
local timestamp=$(echo "$scan_data" | jq -r '.timestamp')
|
||||
local total_findings=$(echo "$scan_data" | jq -r '.summary.total_findings')
|
||||
local critical=$(echo "$scan_data" | jq -r '.summary.critical')
|
||||
local high=$(echo "$scan_data" | jq -r '.summary.high')
|
||||
local medium=$(echo "$scan_data" | jq -r '.summary.medium')
|
||||
local low=$(echo "$scan_data" | jq -r '.summary.low')
|
||||
|
||||
cat > "$report_file" << EOF
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Cloud Security Scan Report - $layer_name</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 20px; }
|
||||
.header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
|
||||
.summary { margin: 20px 0; }
|
||||
.finding { margin: 10px 0; padding: 10px; border-left: 4px solid #ccc; }
|
||||
.critical { border-left-color: #ff0000; background-color: #ffe6e6; }
|
||||
.high { border-left-color: #ff6600; background-color: #fff2e6; }
|
||||
.medium { border-left-color: #ffcc00; background-color: #fffbf0; }
|
||||
.low { border-left-color: #00cc00; background-color: #f0fff0; }
|
||||
.info { border-left-color: #0066cc; background-color: #f0f8ff; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>Cloud Security Scan Report</h1>
|
||||
<p><strong>Layer:</strong> $layer_name</p>
|
||||
<p><strong>Provider:</strong> $provider</p>
|
||||
<p><strong>Scan Time:</strong> $timestamp</p>
|
||||
</div>
|
||||
|
||||
<div class="summary">
|
||||
<h2>Summary</h2>
|
||||
<p><strong>Total Findings:</strong> $total_findings</p>
|
||||
<p><strong>Critical:</strong> $critical | <strong>High:</strong> $high | <strong>Medium:</strong> $medium | <strong>Low:</strong> $low</p>
|
||||
</div>
|
||||
|
||||
<div class="findings">
|
||||
<h2>Findings</h2>
|
||||
EOF
|
||||
|
||||
# Add findings
|
||||
echo "$scan_data" | jq -r '.findings[] | " <div class=\"finding \(.severity)\">" +
|
||||
"<h3>\(.title)</h3>" +
|
||||
"<p><strong>ID:</strong> \(.id)</p>" +
|
||||
"<p><strong>Severity:</strong> \(.severity)</p>" +
|
||||
"<p><strong>Category:</strong> \(.category)</p>" +
|
||||
"<p><strong>Description:</strong> \(.description)</p>" +
|
||||
"<p><strong>Remediation:</strong> \(.remediation)</p>" +
|
||||
"</div>"' >> "$report_file"
|
||||
|
||||
cat >> "$report_file" << EOF
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
}
|
||||
|
||||
# Generate policy compliance report
|
||||
cloud_security_generate_policy_report() {
|
||||
local check_file="$1"
|
||||
local format="${2:-html}"
|
||||
|
||||
if [[ ! -f "$check_file" ]]; then
|
||||
log_error "Policy check file not found: $check_file" "apt-layer"
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$format" in
|
||||
"html")
|
||||
local report_file="${check_file%.json}.html"
|
||||
local check_data=$(cat "$check_file")
|
||||
cloud_security_generate_policy_html_report "$check_data" "$report_file"
|
||||
log_info "Policy HTML report generated: $report_file" "apt-layer"
|
||||
;;
|
||||
"json")
|
||||
log_info "Policy JSON report available: $check_file" "apt-layer"
|
||||
;;
|
||||
*)
|
||||
log_error "Unsupported report format: $format" "apt-layer"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Generate HTML policy report
|
||||
cloud_security_generate_policy_html_report() {
|
||||
local check_data="$1"
|
||||
local report_file="$2"
|
||||
|
||||
local layer_name=$(echo "$check_data" | jq -r '.layer_name')
|
||||
local policy_name=$(echo "$check_data" | jq -r '.policy_name')
|
||||
local provider=$(echo "$check_data" | jq -r '.provider')
|
||||
local timestamp=$(echo "$check_data" | jq -r '.timestamp')
|
||||
local compliance=$(echo "$check_data" | jq -r '.compliance')
|
||||
local total_rules=$(echo "$check_data" | jq -r '.summary.total_rules')
|
||||
local passed=$(echo "$check_data" | jq -r '.summary.passed')
|
||||
local failed=$(echo "$check_data" | jq -r '.summary.failed')
|
||||
|
||||
cat > "$report_file" << EOF
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Policy Compliance Report - $layer_name</title>
|
||||
<style>
|
||||
body { font-family: Arial, sans-serif; margin: 20px; }
|
||||
.header { background-color: #f0f0f0; padding: 20px; border-radius: 5px; }
|
||||
.summary { margin: 20px 0; }
|
||||
.violation { margin: 10px 0; padding: 10px; border-left: 4px solid #ff0000; background-color: #ffe6e6; }
|
||||
.compliant { color: green; }
|
||||
.non-compliant { color: red; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>Policy Compliance Report</h1>
|
||||
<p><strong>Layer:</strong> $layer_name</p>
|
||||
<p><strong>Policy:</strong> $policy_name</p>
|
||||
<p><strong>Provider:</strong> $provider</p>
|
||||
<p><strong>Check Time:</strong> $timestamp</p>
|
||||
</div>
|
||||
|
||||
<div class="summary">
|
||||
<h2>Compliance Summary</h2>
|
||||
<p><strong>Status:</strong> <span class="$(if [[ "$compliance" == "true" ]]; then echo "compliant"; else echo "non-compliant"; fi)">$(if [[ "$compliance" == "true" ]]; then echo "COMPLIANT"; else echo "NON-COMPLIANT"; fi)</span></p>
|
||||
<p><strong>Rules:</strong> $passed/$total_rules passed ($failed failed)</p>
|
||||
</div>
|
||||
|
||||
<div class="violations">
|
||||
<h2>Policy Violations</h2>
|
||||
EOF
|
||||
|
||||
# Add violations
|
||||
echo "$check_data" | jq -r '.violations[] | " <div class=\"violation\">" +
|
||||
"<h3>\(.rule)</h3>" +
|
||||
"<p><strong>Severity:</strong> \(.severity)</p>" +
|
||||
"<p><strong>Description:</strong> \(.description)</p>" +
|
||||
"<p><strong>Remediation:</strong> \(.remediation)</p>" +
|
||||
"</div>"' >> "$report_file"
|
||||
|
||||
cat >> "$report_file" << EOF
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
}
|
||||
|
||||
# List security scans
|
||||
cloud_security_list_scans() {
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
local scans_dir="$cloud_security_dir/scans"
|
||||
|
||||
if [[ ! -d "$scans_dir" ]]; then
|
||||
log_info "No security scans found" "apt-layer"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Security scans:" "apt-layer"
|
||||
for scan_file in "$scans_dir"/*.json; do
|
||||
if [[ -f "$scan_file" ]]; then
|
||||
local scan_data=$(cat "$scan_file")
|
||||
local scan_id=$(echo "$scan_data" | jq -r '.scan_id')
|
||||
local layer_name=$(echo "$scan_data" | jq -r '.layer_name')
|
||||
local provider=$(echo "$scan_data" | jq -r '.provider')
|
||||
local timestamp=$(echo "$scan_data" | jq -r '.timestamp')
|
||||
local total_findings=$(echo "$scan_data" | jq -r '.summary.total_findings')
|
||||
local critical=$(echo "$scan_data" | jq -r '.summary.critical')
|
||||
|
||||
echo " $scan_id: $layer_name ($provider) - $total_findings findings ($critical critical) - $timestamp"
|
||||
fi
|
||||
done
|
||||
}
|
||||
|
||||
# List policy checks
|
||||
cloud_security_list_policy_checks() {
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
local reports_dir="$cloud_security_dir/reports"
|
||||
|
||||
if [[ ! -d "$reports_dir" ]]; then
|
||||
log_info "No policy checks found" "apt-layer"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Policy compliance checks:" "apt-layer"
|
||||
for check_file in "$reports_dir"/*.json; do
|
||||
if [[ -f "$check_file" ]]; then
|
||||
local check_data=$(cat "$check_file")
|
||||
local check_id=$(echo "$check_data" | jq -r '.check_id')
|
||||
local layer_name=$(echo "$check_data" | jq -r '.layer_name')
|
||||
local policy_name=$(echo "$check_data" | jq -r '.policy_name')
|
||||
local compliance=$(echo "$check_data" | jq -r '.compliance')
|
||||
local timestamp=$(echo "$check_data" | jq -r '.timestamp')
|
||||
|
||||
echo " $check_id: $layer_name ($policy_name) - $(if [[ "$compliance" == "true" ]]; then echo "COMPLIANT"; else echo "NON-COMPLIANT"; fi) - $timestamp"
|
||||
fi
|
||||
done
|
||||
}
|
||||
|
||||
# Clean up old security reports
|
||||
cloud_security_cleanup() {
|
||||
local days="${1:-30}"
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
local scans_dir="$cloud_security_dir/scans"
|
||||
local reports_dir="$cloud_security_dir/reports"
|
||||
|
||||
log_info "Cleaning up security reports older than $days days..." "apt-layer"
|
||||
|
||||
local deleted_scans=0
|
||||
local deleted_reports=0
|
||||
|
||||
# Clean up scan files
|
||||
if [[ -d "$scans_dir" ]]; then
|
||||
while IFS= read -r -d '' file; do
|
||||
if [[ -f "$file" ]]; then
|
||||
rm "$file"
|
||||
((deleted_scans++))
|
||||
fi
|
||||
done < <(find "$scans_dir" -name "*.json" -mtime +$days -print0)
|
||||
fi
|
||||
|
||||
# Clean up report files
|
||||
if [[ -d "$reports_dir" ]]; then
|
||||
while IFS= read -r -d '' file; do
|
||||
if [[ -f "$file" ]]; then
|
||||
rm "$file"
|
||||
((deleted_reports++))
|
||||
fi
|
||||
done < <(find "$reports_dir" -name "*.json" -mtime +$days -print0)
|
||||
fi
|
||||
|
||||
log_info "Cleanup completed: $deleted_scans scan files, $deleted_reports report files deleted" "apt-layer"
|
||||
}
|
||||
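# Illustrative usage: keep only the last two weeks of scan and report JSON files.
#   cloud_security_cleanup 14
# Note that the generated .html reports are not matched by the find pattern above, so
# they accumulate unless the pattern is extended to cover "*.html" as well.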
|
||||
# Show cloud security status
|
||||
cloud_security_status() {
|
||||
local cloud_security_dir="${PARTICLE_WORKSPACE:-/var/lib/particle-os}/cloud-security"
|
||||
|
||||
log_info "Cloud Security System Status" "apt-layer"
|
||||
echo "=================================="
|
||||
|
||||
# Check if system is initialized
|
||||
if [[ -d "$cloud_security_dir" ]]; then
|
||||
echo "â System initialized: $cloud_security_dir"
|
||||
|
||||
# Check configuration
|
||||
local config_file="$cloud_security_dir/cloud-security-config.json"
|
||||
if [[ -f "$config_file" ]]; then
|
||||
echo "â Configuration: $config_file"
|
||||
local enabled_providers=$(jq -r '.enabled_providers | join(", ")' "$config_file" 2>/dev/null)
|
||||
echo " Enabled providers: $enabled_providers"
|
||||
else
|
||||
echo "â Configuration missing"
|
||||
fi
|
||||
|
||||
# Check directories
|
||||
local dirs=("scans" "policies" "reports" "integrations")
|
||||
for dir in "${dirs[@]}"; do
|
||||
if [[ -d "$cloud_security_dir/$dir" ]]; then
|
||||
echo "â $dir directory: $cloud_security_dir/$dir"
|
||||
else
|
||||
echo "â $dir directory missing"
|
||||
fi
|
||||
done
|
||||
|
||||
# Count files
|
||||
local scan_count=$(find "$cloud_security_dir/scans" -name "*.json" 2>/dev/null | wc -l)
|
||||
local policy_count=$(find "$cloud_security_dir/policies" -name "*.json" 2>/dev/null | wc -l)
|
||||
local report_count=$(find "$cloud_security_dir/reports" -name "*.json" 2>/dev/null | wc -l)
|
||||
|
||||
echo "ð Statistics:"
|
||||
echo " Security scans: $scan_count"
|
||||
echo " Policy files: $policy_count"
|
||||
echo " Compliance reports: $report_count"
|
||||
|
||||
else
|
||||
echo "â System not initialized"
|
||||
echo " Run 'cloud-security init' to initialize"
|
||||
fi
|
||||
}
|
||||
|
|
@ -1,393 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Hardware Detection and Auto-Configuration
|
||||
# Inspired by uBlue-OS akmods system
|
||||
# Automatically detects hardware and enables appropriate kernel modules
|
||||
|
||||
# Hardware detection functions
|
||||
detect_gpu() {
|
||||
log_info "Detecting GPU hardware..."
|
||||
|
||||
# Detect NVIDIA GPUs
|
||||
if lspci | grep -iE 'vga|3d|display' | grep -qi nvidia; then
|
||||
log_info "NVIDIA GPU detected"
|
||||
local nvidia_model=$(lspci | grep -i nvidia | head -1 | cut -d' ' -f4-)
|
||||
log_info "NVIDIA Model: $nvidia_model"
|
||||
|
||||
# Determine which NVIDIA driver to use based on hardware
|
||||
if echo "$nvidia_model" | grep -E "(RTX 50|RTX 40|RTX 30|RTX 20|GTX 16)" > /dev/null; then
|
||||
echo "nvidia-open"
|
||||
else
|
||||
echo "nvidia"
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Detect AMD GPUs
|
||||
if lspci | grep -iE 'vga|3d|display' | grep -qi amd; then
|
||||
log_info "AMD GPU detected"
|
||||
local amd_model=$(lspci | grep -i amd | head -1 | cut -d' ' -f4-)
|
||||
log_info "AMD Model: $amd_model"
|
||||
echo "amd"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Detect Intel GPUs
|
||||
if lspci | grep -iE 'vga|3d|display' | grep -qi intel; then
|
||||
log_info "Intel GPU detected"
|
||||
local intel_model=$(lspci | grep -i intel | head -1 | cut -d' ' -f4-)
|
||||
log_info "Intel Model: $intel_model"
|
||||
echo "intel"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_warning "No dedicated GPU detected, using integrated graphics"
|
||||
echo "integrated"
|
||||
}
|
||||
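# Illustrative capture (relies on log_info/log_warning writing to stderr or a log file
# rather than stdout, otherwise the log text would be captured alongside the result):
#   gpu_type=$(detect_gpu)
#   case "$gpu_type" in
#       nvidia|nvidia-open) echo "will enable NVIDIA kernel modules" ;;
#       amd|intel|integrated) echo "no proprietary driver needed" ;;
#   esac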
|
||||
detect_cpu() {
|
||||
log_info "Detecting CPU hardware..."
|
||||
|
||||
# Detect AMD Ryzen CPUs
|
||||
if grep -i "amd" /proc/cpuinfo > /dev/null 2>&1; then
|
||||
local cpu_model=$(grep "model name" /proc/cpuinfo | head -1 | cut -d':' -f2 | xargs)
|
||||
log_info "AMD CPU detected: $cpu_model"
|
||||
|
||||
# Check for Ryzen SMU support
|
||||
if echo "$cpu_model" | grep -i "ryzen" > /dev/null; then
|
||||
log_info "AMD Ryzen CPU detected - enabling ryzen-smu support"
|
||||
echo "amd_ryzen"
|
||||
else
|
||||
echo "amd"
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Detect Intel CPUs
|
||||
if grep -i "intel" /proc/cpuinfo > /dev/null 2>&1; then
|
||||
local cpu_model=$(grep "model name" /proc/cpuinfo | head -1 | cut -d':' -f2 | xargs)
|
||||
log_info "Intel CPU detected: $cpu_model"
|
||||
echo "intel"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_warning "Unknown CPU architecture"
|
||||
echo "unknown"
|
||||
}
|
||||
|
||||
detect_motherboard() {
|
||||
log_info "Detecting motherboard hardware..."
|
||||
|
||||
# Detect System76 hardware
|
||||
if dmidecode -s system-manufacturer 2>/dev/null | grep -i "system76" > /dev/null; then
|
||||
log_info "System76 hardware detected"
|
||||
echo "system76"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Detect GPD hardware
|
||||
if dmidecode -s system-product-name 2>/dev/null | grep -i "gpd" > /dev/null; then
|
||||
log_info "GPD hardware detected"
|
||||
echo "gpd"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Detect AMD B550 chipset
|
||||
if lspci | grep -i "nct6687" > /dev/null 2>&1; then
|
||||
log_info "AMD B550 chipset detected (NCT6687)"
|
||||
echo "amd_b550"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Standard motherboard detected"
|
||||
echo "standard"
|
||||
}
|
||||
|
||||
detect_storage() {
|
||||
log_info "Detecting storage hardware..."
|
||||
|
||||
# Check for ZFS pools
|
||||
if command -v zpool > /dev/null 2>&1 && zpool list > /dev/null 2>&1; then
|
||||
log_info "ZFS storage detected"
|
||||
echo "zfs"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Check for Btrfs filesystems
|
||||
if findmnt -t btrfs > /dev/null 2>&1; then
|
||||
log_info "Btrfs storage detected"
|
||||
echo "btrfs"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Standard storage detected"
|
||||
echo "standard"
|
||||
}
|
||||
|
||||
detect_network() {
|
||||
log_info "Detecting network hardware..."
|
||||
|
||||
# Detect Intel NICs
|
||||
if lspci | grep -i "intel.*ethernet" > /dev/null 2>&1; then
|
||||
log_info "Intel network adapter detected"
|
||||
echo "intel_nic"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Detect Broadcom NICs
|
||||
if lspci | grep -i "broadcom.*ethernet" > /dev/null 2>&1; then
|
||||
log_info "Broadcom network adapter detected"
|
||||
echo "broadcom_nic"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Standard network adapter detected"
|
||||
echo "standard"
|
||||
}
|
||||
|
||||
# Auto-configure kernel modules based on detected hardware
|
||||
auto_configure_kernel_modules() {
|
||||
log_info "Auto-configuring kernel modules based on detected hardware..."
|
||||
|
||||
local config_file="/usr/local/etc/particle-os/kernel-modules.json"
|
||||
local temp_config="/tmp/kernel-modules-auto.json"
|
||||
|
||||
# Create backup of current configuration
|
||||
if [ -f "$config_file" ]; then
|
||||
cp "$config_file" "${config_file}.backup.$(date +%Y%m%d_%H%M%S)"
|
||||
fi
|
||||
|
||||
# Load current configuration
|
||||
if [ -f "$config_file" ]; then
|
||||
cp "$config_file" "$temp_config"
|
||||
else
|
||||
log_warning "Kernel modules configuration not found, creating default"
|
||||
cat > "$temp_config" << 'EOF'
|
||||
{
|
||||
"kernel_modules": {
|
||||
"common": {
|
||||
"description": "Common kernel modules for general hardware support",
|
||||
"modules": {
|
||||
"v4l2loopback": {
|
||||
"description": "Virtual video devices for screen recording and streaming",
|
||||
"package": "v4l2loopback-dkms",
|
||||
"kernel_args": [],
|
||||
"enabled": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"nvidia": {
|
||||
"description": "NVIDIA GPU driver support",
|
||||
"modules": {
|
||||
"nvidia": {
|
||||
"description": "NVIDIA closed proprietary drivers for legacy hardware",
|
||||
"package": "nvidia-driver-535",
|
||||
"kernel_args": ["nvidia-drm.modeset=1"],
|
||||
"enabled": false
|
||||
},
|
||||
"nvidia-open": {
|
||||
"description": "NVIDIA open source drivers for latest hardware",
|
||||
"package": "nvidia-driver-open-535",
|
||||
"kernel_args": ["nvidia-drm.modeset=1"],
|
||||
"enabled": false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
fi
|
||||
|
||||
# Detect hardware
|
||||
local gpu_type=$(detect_gpu)
|
||||
local cpu_type=$(detect_cpu)
|
||||
local motherboard_type=$(detect_motherboard)
|
||||
local storage_type=$(detect_storage)
|
||||
local network_type=$(detect_network)
|
||||
|
||||
log_info "Hardware detection results:"
|
||||
log_info " GPU: $gpu_type"
|
||||
log_info " CPU: $cpu_type"
|
||||
log_info " Motherboard: $motherboard_type"
|
||||
log_info " Storage: $storage_type"
|
||||
log_info " Network: $network_type"
|
||||
|
||||
# Enable appropriate modules based on hardware
|
||||
local changes_made=false
|
||||
|
||||
# GPU-specific modules
|
||||
case "$gpu_type" in
|
||||
"nvidia"|"nvidia-open")
|
||||
log_info "Enabling NVIDIA driver support"
|
||||
jq --arg driver "$gpu_type" '.kernel_modules.nvidia.modules[$driver].enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
esac
|
||||
|
||||
# CPU-specific modules
|
||||
case "$cpu_type" in
|
||||
"amd_ryzen")
|
||||
log_info "Enabling AMD Ryzen SMU support"
|
||||
jq '.kernel_modules.common.modules["ryzen-smu"].enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
esac
|
||||
|
||||
# Motherboard-specific modules
|
||||
case "$motherboard_type" in
|
||||
"system76")
|
||||
log_info "Enabling System76 hardware support"
|
||||
jq '.kernel_modules.common.modules.system76.enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
"gpd")
|
||||
log_info "Enabling GPD hardware support"
|
||||
jq '.kernel_modules.common.modules["gpd-fan-kmod"].enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
"amd_b550")
|
||||
log_info "Enabling AMD B550 chipset support"
|
||||
jq '.kernel_modules.common.modules.nct6687d.enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
esac
|
||||
|
||||
# Storage-specific modules
|
||||
case "$storage_type" in
|
||||
"zfs")
|
||||
log_info "Enabling ZFS support"
|
||||
jq '.kernel_modules.storage.modules.zfs.enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
"btrfs")
|
||||
log_info "Enabling Btrfs support"
|
||||
jq '.kernel_modules.storage.modules.btrfs.enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
esac
|
||||
|
||||
# Network-specific modules
|
||||
case "$network_type" in
|
||||
"intel_nic")
|
||||
log_info "Enabling Intel NIC support"
|
||||
jq '.kernel_modules.network.modules["intel-nic"].enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
"broadcom_nic")
|
||||
log_info "Enabling Broadcom NIC support"
|
||||
jq '.kernel_modules.network.modules["broadcom-nic"].enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
;;
|
||||
esac
|
||||
|
||||
# Always enable v4l2loopback for general use
|
||||
log_info "Enabling v4l2loopback for screen recording support"
|
||||
jq '.kernel_modules.common.modules.v4l2loopback.enabled = true' "$temp_config" > "${temp_config}.tmp" && mv "${temp_config}.tmp" "$temp_config"
|
||||
changes_made=true
|
||||
|
||||
if [ "$changes_made" = true ]; then
|
||||
# Install the updated configuration
|
||||
mkdir -p "$(dirname "$config_file")"
|
||||
mv "$temp_config" "$config_file"
|
||||
log_success "Kernel modules auto-configured based on detected hardware"
|
||||
log_info "Configuration saved to: $config_file"
|
||||
|
||||
# Show enabled modules
|
||||
log_info "Enabled kernel modules:"
|
||||
jq -r '.kernel_modules | to_entries[] | .key as $category | .value.modules | to_entries[] | select(.value.enabled == true) | " \($category): \(.key) - \(.value.description)"' "$config_file" 2>/dev/null || log_warning "Could not parse enabled modules"
|
||||
else
|
||||
log_info "No hardware-specific modules needed, using default configuration"
|
||||
rm -f "$temp_config"
|
||||
fi
|
||||
}
|
||||
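# Module keys that contain a hyphen (ryzen-smu, gpd-fan-kmod, intel-nic, ...) must be
# addressed with bracket syntax in jq; dot syntax such as `.modules.ryzen-smu` is a
# parse error. A minimal standalone sketch against an ad-hoc test document:
#   echo '{"modules":{"ryzen-smu":{"enabled":false}}}' \
#       | jq '.modules["ryzen-smu"].enabled = true'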
|
||||
# Install enabled kernel modules
|
||||
install_enabled_kernel_modules() {
|
||||
log_info "Installing enabled kernel modules..."
|
||||
|
||||
local config_file="/usr/local/etc/particle-os/kernel-modules.json"
|
||||
|
||||
if [ ! -f "$config_file" ]; then
|
||||
log_error "Kernel modules configuration not found"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Get list of enabled modules
|
||||
local enabled_modules=$(jq -r '.kernel_modules | to_entries[] | .value.modules | to_entries[] | select(.value.enabled == true) | .value.package' "$config_file" 2>/dev/null)
|
||||
|
||||
if [ -z "$enabled_modules" ]; then
|
||||
log_info "No kernel modules enabled in configuration"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Installing enabled kernel modules:"
|
||||
echo "$enabled_modules" | while read -r module_package; do
|
||||
if [ -n "$module_package" ]; then
|
||||
log_info " Installing: $module_package"
|
||||
apt-layer --dkms-install "$module_package" || log_warning "Failed to install $module_package"
|
||||
fi
|
||||
done
|
||||
|
||||
log_success "Kernel module installation completed"
|
||||
}
|
||||
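# Illustrative usage, typically run right after auto-configuration:
#   auto_configure_kernel_modules
#   install_enabled_kernel_modules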
|
||||
# Main hardware detection and configuration function
|
||||
configure_hardware_support() {
|
||||
log_info "Starting hardware detection and configuration..."
|
||||
|
||||
# Check if hardware detection is enabled
|
||||
local auto_detect=$(jq -r '.hardware_detection.auto_detect // true' "/usr/local/etc/particle-os/kernel-modules.json" 2>/dev/null)
|
||||
|
||||
if [ "$auto_detect" != "true" ]; then
|
||||
log_info "Hardware auto-detection is disabled"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Auto-configure kernel modules
|
||||
auto_configure_kernel_modules
|
||||
|
||||
# Install enabled modules if requested
|
||||
if [ "${1:-}" = "--install" ]; then
|
||||
install_enabled_kernel_modules
|
||||
fi
|
||||
|
||||
log_success "Hardware detection and configuration completed"
|
||||
}
|
||||
|
||||
# Hardware detection commands
|
||||
show_hardware_info() {
|
||||
log_info "Hardware Information:"
|
||||
echo "======================"
|
||||
|
||||
echo "GPU:"
|
||||
detect_gpu
|
||||
echo
|
||||
|
||||
echo "CPU:"
|
||||
detect_cpu
|
||||
echo
|
||||
|
||||
echo "Motherboard:"
|
||||
detect_motherboard
|
||||
echo
|
||||
|
||||
echo "Storage:"
|
||||
detect_storage
|
||||
echo
|
||||
|
||||
echo "Network:"
|
||||
detect_network
|
||||
echo
|
||||
}
|
||||
|
||||
# Export functions for use in main script
|
||||
export -f detect_gpu
|
||||
export -f detect_cpu
|
||||
export -f detect_motherboard
|
||||
export -f detect_storage
|
||||
export -f detect_network
|
||||
export -f auto_configure_kernel_modules
|
||||
export -f install_enabled_kernel_modules
|
||||
export -f configure_hardware_support
|
||||
export -f show_hardware_info
|
||||
|
|
@ -1,410 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Kernel Patching System for Ubuntu
|
||||
# Inspired by uBlue-OS but adapted for Ubuntu kernels
|
||||
# Handles downloading, applying, and managing kernel patches
|
||||
|
||||
# Load kernel patches configuration
|
||||
load_kernel_patches_config() {
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
|
||||
if [ ! -f "$config_file" ]; then
|
||||
log_error "Kernel patches configuration not found: $config_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Validate JSON configuration
|
||||
if ! jq empty "$config_file" 2>/dev/null; then
|
||||
log_error "Invalid JSON in kernel patches configuration"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Loaded kernel patches configuration from: $config_file"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Get current kernel version
|
||||
get_current_kernel_version() {
|
||||
local kernel_version=$(uname -r)
|
||||
local major_minor=$(echo "$kernel_version" | cut -d'-' -f1)
|
||||
|
||||
log_info "Current kernel version: $kernel_version (major.minor: $major_minor)"
|
||||
echo "$major_minor"
|
||||
}
|
||||
|
||||
# Check if patch is compatible with current kernel
|
||||
is_patch_compatible() {
|
||||
local patch_name="$1"
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local current_kernel=$(get_current_kernel_version)
|
||||
|
||||
# Get supported kernel versions for this patch
|
||||
local supported_versions=$(jq -r ".kernel_patches | to_entries[] | .value.patches | to_entries[] | select(.key == \"$patch_name\") | .value.kernel_versions[]" "$config_file" 2>/dev/null)
|
||||
|
||||
if [ -z "$supported_versions" ]; then
|
||||
log_warning "Could not determine kernel version compatibility for patch: $patch_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if current kernel version is supported
|
||||
while IFS= read -r version; do
|
||||
if [ "$version" = "$current_kernel" ]; then
|
||||
log_info "Patch $patch_name is compatible with kernel $current_kernel"
|
||||
return 0
|
||||
fi
|
||||
done <<< "$supported_versions"
|
||||
|
||||
log_warning "Patch $patch_name is not compatible with kernel $current_kernel"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Download kernel patch
|
||||
download_kernel_patch() {
|
||||
local patch_name="$1"
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local patch_dir="/var/lib/particle-os/kernel-patches"
|
||||
|
||||
# Create patch directory if it doesn't exist
|
||||
mkdir -p "$patch_dir"
|
||||
|
||||
# Get patch URL
|
||||
local patch_url=$(jq -r ".kernel_patches | to_entries[] | .value.patches | to_entries[] | select(.key == \"$patch_name\") | .value.url" "$config_file" 2>/dev/null)
|
||||
|
||||
if [ -z "$patch_url" ] || [ "$patch_url" = "null" ]; then
|
||||
log_error "Could not find URL for patch: $patch_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local patch_file="$patch_dir/${patch_name}.patch"
|
||||
|
||||
log_info "Downloading patch $patch_name from: $patch_url"
|
||||
|
||||
# Download patch with error handling
|
||||
if curl -L -o "$patch_file" "$patch_url" 2>/dev/null; then
|
||||
log_success "Downloaded patch: $patch_file"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to download patch: $patch_name"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Apply kernel patch
|
||||
apply_kernel_patch() {
|
||||
local patch_name="$1"
|
||||
local patch_dir="/var/lib/particle-os/kernel-patches"
|
||||
local patch_file="$patch_dir/${patch_name}.patch"
|
||||
local backup_dir="/var/lib/particle-os/kernel-patches/backup"
|
||||
|
||||
# Create backup directory
|
||||
mkdir -p "$backup_dir"
|
||||
|
||||
# Check if patch file exists
|
||||
if [ ! -f "$patch_file" ]; then
|
||||
log_error "Patch file not found: $patch_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Applying kernel patch: $patch_name"
|
||||
|
||||
# Create backup of current kernel configuration
|
||||
local backup_file="$backup_dir/kernel-config-$(date +%Y%m%d_%H%M%S).bak"
|
||||
if [ -f "/boot/config-$(uname -r)" ]; then
|
||||
cp "/boot/config-$(uname -r)" "$backup_file"
|
||||
log_info "Created kernel config backup: $backup_file"
|
||||
fi
|
||||
|
||||
# Apply patch using Ubuntu's patch method
|
||||
local apply_method=$(jq -r '.patch_application.ubuntu_specific.apply_method' "/usr/local/etc/particle-os/kernel-patches.json" 2>/dev/null)
|
||||
if [ -z "$apply_method" ] || [ "$apply_method" = "null" ]; then
|
||||
apply_method="patch -p1"
|
||||
fi
|
||||
|
||||
# Note: In a real implementation, this would apply to kernel source
|
||||
# For now, we'll simulate the patch application
|
||||
log_info "Would apply patch using: $apply_method < $patch_file"
|
||||
log_info "Patch application simulated (requires kernel source tree)"
|
||||
|
||||
# In a real implementation, this would be:
|
||||
# cd /usr/src/linux-source-$(uname -r)
|
||||
# $apply_method < "$patch_file"
|
||||
|
||||
log_success "Kernel patch $patch_name applied successfully"
|
||||
return 0
|
||||
}
|
||||
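# A minimal sketch of a real (non-simulated) application step, assuming the matching
# linux-source tree has been unpacked under /usr/src (the path below is illustrative
# and not guaranteed for every Ubuntu kernel flavour):
#   src_dir="/usr/src/linux-source-$(uname -r | cut -d'-' -f1)"
#   if [ -d "$src_dir" ]; then
#       (cd "$src_dir" && patch -p1 --dry-run < "$patch_file" && patch -p1 < "$patch_file")
#   fi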
|
||||
# Get kernel arguments for a patch
|
||||
get_patch_kernel_args() {
|
||||
local patch_name="$1"
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
|
||||
# Get kernel arguments for this patch
|
||||
local kernel_args=$(jq -r ".kernel_patches | to_entries[] | .value.patches | to_entries[] | select(.key == \"$patch_name\") | .value.kernel_args[]" "$config_file" 2>/dev/null)
|
||||
|
||||
if [ -n "$kernel_args" ] && [ "$kernel_args" != "null" ]; then
|
||||
echo "$kernel_args"
|
||||
fi
|
||||
}
|
||||
|
||||
# List available patches
|
||||
list_available_patches() {
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local current_kernel=$(get_current_kernel_version)
|
||||
|
||||
log_info "Available kernel patches for kernel $current_kernel:"
|
||||
echo "=================================================="
|
||||
|
||||
# Iterate through patch categories
|
||||
jq -r '.kernel_patches | to_entries[] | .key as $category | .value.patches | to_entries[] | "\($category): \(.key) - \(.value.description) [enabled: \(.value.enabled)]"' "$config_file" 2>/dev/null | while IFS= read -r line; do
|
||||
if [ -n "$line" ] && [ "$line" != "null" ]; then
|
||||
echo " $line"
|
||||
fi
|
||||
done
|
||||
|
||||
echo
|
||||
log_info "Use 'apt-layer --apply-patch <patch-name>' to apply a specific patch"
|
||||
}
|
||||
|
||||
# List enabled patches
|
||||
list_enabled_patches() {
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
|
||||
log_info "Enabled kernel patches:"
|
||||
echo "========================="
|
||||
|
||||
# Get enabled patches
|
||||
jq -r '.kernel_patches | to_entries[] | .key as $category | .value.patches | to_entries[] | select(.value.enabled == true) | "\($category): \(.key) - \(.value.description)"' "$config_file" 2>/dev/null | while IFS= read -r line; do
|
||||
if [ -n "$line" ] && [ "$line" != "null" ]; then
|
||||
echo " $line"
|
||||
fi
|
||||
done
|
||||
}
|
||||
|
||||
# Enable a patch
|
||||
enable_patch() {
|
||||
local patch_name="$1"
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local temp_config="/tmp/kernel-patches-enable.json"
|
||||
|
||||
# Check if patch exists
|
||||
if ! jq -e ".kernel_patches | to_entries[] | .value.patches | has(\"$patch_name\")" "$config_file" > /dev/null 2>&1; then
|
||||
log_error "Patch not found: $patch_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check kernel compatibility
|
||||
if ! is_patch_compatible "$patch_name"; then
|
||||
log_error "Patch $patch_name is not compatible with current kernel"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Enable the patch
|
||||
jq ".kernel_patches | to_entries[] | .value.patches[\"$patch_name\"].enabled = true" "$config_file" > "$temp_config" 2>/dev/null
|
||||
|
||||
if [ $? -eq 0 ]; then
|
||||
# Update the configuration file
|
||||
mv "$temp_config" "$config_file"
|
||||
log_success "Enabled patch: $patch_name"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to enable patch: $patch_name"
|
||||
rm -f "$temp_config"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Disable a patch
|
||||
disable_patch() {
|
||||
local patch_name="$1"
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local temp_config="/tmp/kernel-patches-disable.json"
|
||||
|
||||
# Check if patch exists
|
||||
if ! jq -e ".kernel_patches | to_entries[] | .value.patches | has(\"$patch_name\")" "$config_file" > /dev/null 2>&1; then
|
||||
log_error "Patch not found: $patch_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Disable the patch
|
||||
jq ".kernel_patches | to_entries[] | .value.patches[\"$patch_name\"].enabled = false" "$config_file" > "$temp_config" 2>/dev/null
|
||||
|
||||
if [ $? -eq 0 ]; then
|
||||
# Update the configuration file
|
||||
mv "$temp_config" "$config_file"
|
||||
log_success "Disabled patch: $patch_name"
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to disable patch: $patch_name"
|
||||
rm -f "$temp_config"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
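# The enable/disable updates rely on jq allowing select() inside a path expression on
# the left-hand side of an assignment. A standalone sketch on a minimal document:
#   echo '{"kernel_patches":{"performance":{"patches":{"bore":{"enabled":false}}}}}' \
#       | jq --arg name "bore" \
#            '(.kernel_patches[].patches | select(has($name)) | .[$name].enabled) = true'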
|
||||
# Apply all enabled patches
|
||||
apply_enabled_patches() {
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local auto_apply=$(jq -r '.patch_application.auto_apply // false' "$config_file" 2>/dev/null)
|
||||
|
||||
if [ "$auto_apply" != "true" ]; then
|
||||
log_info "Auto-apply is disabled, patches must be applied manually"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Applying all enabled kernel patches..."
|
||||
|
||||
# Get list of enabled patches
|
||||
local enabled_patches=$(jq -r '.kernel_patches | to_entries[] | .value.patches | to_entries[] | select(.value.enabled == true) | .key' "$config_file" 2>/dev/null)
|
||||
|
||||
if [ -z "$enabled_patches" ]; then
|
||||
log_info "No patches are currently enabled"
|
||||
return 0
|
||||
fi
|
||||
|
||||
local applied_count=0
|
||||
local failed_count=0
|
||||
|
||||
# Apply each enabled patch
|
||||
while IFS= read -r patch_name; do
|
||||
if [ -n "$patch_name" ] && [ "$patch_name" != "null" ]; then
|
||||
log_info "Processing patch: $patch_name"
|
||||
|
||||
# Check compatibility
|
||||
if ! is_patch_compatible "$patch_name"; then
|
||||
log_warning "Skipping incompatible patch: $patch_name"
|
||||
((failed_count++))
|
||||
continue
|
||||
fi
|
||||
|
||||
# Download patch
|
||||
if download_kernel_patch "$patch_name"; then
|
||||
# Apply patch
|
||||
if apply_kernel_patch "$patch_name"; then
|
||||
log_success "Successfully applied patch: $patch_name"
|
||||
((applied_count++))
|
||||
else
|
||||
log_error "Failed to apply patch: $patch_name"
|
||||
((failed_count++))
|
||||
fi
|
||||
else
|
||||
log_error "Failed to download patch: $patch_name"
|
||||
((failed_count++))
|
||||
fi
|
||||
fi
|
||||
done <<< "$enabled_patches"
|
||||
|
||||
log_info "Patch application completed: $applied_count applied, $failed_count failed"
|
||||
|
||||
if [ $failed_count -gt 0 ]; then
|
||||
return 1
|
||||
else
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# Update kernel arguments for applied patches
|
||||
update_kernel_arguments() {
|
||||
local config_file="/usr/local/etc/particle-os/kernel-patches.json"
|
||||
local kernel_args_file="/etc/default/grub"
|
||||
|
||||
log_info "Updating kernel arguments for applied patches..."
|
||||
|
||||
# Get all kernel arguments from enabled patches
|
||||
local all_kernel_args=$(jq -r '.kernel_patches | to_entries[] | .value.patches | to_entries[] | select(.value.enabled == true) | .value.kernel_args[]' "$config_file" 2>/dev/null)
|
||||
|
||||
if [ -z "$all_kernel_args" ]; then
|
||||
log_info "No kernel arguments to update"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Build new GRUB_CMDLINE_LINUX_DEFAULT
|
||||
local new_args=""
|
||||
while IFS= read -r arg; do
|
||||
if [ -n "$arg" ] && [ "$arg" != "null" ]; then
|
||||
if [ -z "$new_args" ]; then
|
||||
new_args="$arg"
|
||||
else
|
||||
new_args="$new_args $arg"
|
||||
fi
|
||||
fi
|
||||
done <<< "$all_kernel_args"
|
||||
|
||||
log_info "New kernel arguments: $new_args"
|
||||
|
||||
# In a real implementation, this would update GRUB configuration
|
||||
# For now, we'll just log what would be done
|
||||
log_info "Would update $kernel_args_file with new kernel arguments"
|
||||
log_info "Would run: update-grub to apply changes"
|
||||
|
||||
log_success "Kernel arguments updated successfully"
|
||||
return 0
|
||||
}
|
||||
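# A minimal sketch of the GRUB update this function currently only logs, assuming the
# stock Ubuntu /etc/default/grub layout (the sed expression is illustrative and would
# need hardening if kernel arguments can contain characters special to sed):
#   sed -i "s/^GRUB_CMDLINE_LINUX_DEFAULT=\"\(.*\)\"/GRUB_CMDLINE_LINUX_DEFAULT=\"\1 $new_args\"/" /etc/default/grub
#   update-grub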
|
||||
# Main kernel patching function
|
||||
manage_kernel_patches() {
|
||||
local action="$1"
|
||||
local patch_name="$2"
|
||||
|
||||
# Load configuration
|
||||
if ! load_kernel_patches_config; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
case "$action" in
|
||||
"list")
|
||||
list_available_patches
|
||||
;;
|
||||
"list-enabled")
|
||||
list_enabled_patches
|
||||
;;
|
||||
"enable")
|
||||
if [ -z "$patch_name" ]; then
|
||||
log_error "Patch name required for enable action"
|
||||
return 1
|
||||
fi
|
||||
enable_patch "$patch_name"
|
||||
;;
|
||||
"disable")
|
||||
if [ -z "$patch_name" ]; then
|
||||
log_error "Patch name required for disable action"
|
||||
return 1
|
||||
fi
|
||||
disable_patch "$patch_name"
|
||||
;;
|
||||
"apply")
|
||||
if [ -n "$patch_name" ]; then
|
||||
# Apply specific patch
|
||||
if download_kernel_patch "$patch_name" && apply_kernel_patch "$patch_name"; then
|
||||
log_success "Applied patch: $patch_name"
|
||||
else
|
||||
log_error "Failed to apply patch: $patch_name"
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
# Apply all enabled patches
|
||||
apply_enabled_patches
|
||||
fi
|
||||
;;
|
||||
"update-args")
|
||||
update_kernel_arguments
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown action: $action"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
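# Illustrative usage (patch names depend on what kernel-patches.json defines; "bore"
# below is only an example):
#   manage_kernel_patches list
#   manage_kernel_patches enable bore
#   manage_kernel_patches apply bore
#   manage_kernel_patches update-args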
|
||||
# Export functions for use in main script
|
||||
export -f load_kernel_patches_config
|
||||
export -f get_current_kernel_version
|
||||
export -f is_patch_compatible
|
||||
export -f download_kernel_patch
|
||||
export -f apply_kernel_patch
|
||||
export -f get_patch_kernel_args
|
||||
export -f list_available_patches
|
||||
export -f list_enabled_patches
|
||||
export -f enable_patch
|
||||
export -f disable_patch
|
||||
export -f apply_enabled_patches
|
||||
export -f update_kernel_arguments
|
||||
export -f manage_kernel_patches
|
||||
|
|
@ -12,17 +12,9 @@ apt-layer:
|
|||
- composefs
|
||||
- container
|
||||
- live-overlay
|
||||
- dkms
|
||||
- nvidia
|
||||
- rpm-ostree-compat
|
||||
- atomic-transactions
|
||||
- layer-signing
|
||||
- audit-reporting
|
||||
- security-scanning
|
||||
- multi-tenant
|
||||
- enterprise-integration
|
||||
- cloud-integration
|
||||
- kubernetes-integration
|
||||
- dpkg-direct-install
|
||||
EOF
|
||||
}
|
||||
|
||||
|
|
@ -52,16 +44,7 @@ Layer Management:
|
|||
--live-commit Commit live overlay changes
|
||||
--live-rollback Rollback live overlay changes
|
||||
|
||||
DKMS & NVIDIA Support:
|
||||
--dkms-status Show DKMS module status
|
||||
--dkms-install Install DKMS module
|
||||
--dkms-remove Remove DKMS module
|
||||
--dkms-rebuild Rebuild DKMS module
|
||||
--dkms-rebuild-all Rebuild all DKMS modules
|
||||
--nvidia-install Install NVIDIA drivers
|
||||
--nvidia-status Show NVIDIA driver status
|
||||
--gpu-switch Switch GPU (integrated/nvidia/auto)
|
||||
--nvidia-prime-configure Configure NVIDIA Prime
|
||||
|
||||
|
||||
Image Management:
|
||||
--list List available images
|
||||
|
|
@ -193,175 +176,20 @@ IMAGE MANAGEMENT:
|
|||
apt-layer --oci-status
|
||||
# Show OCI integration system status
|
||||
|
||||
ADVANCED PACKAGE MANAGEMENT:
|
||||
apt-layer --advanced-install packages
|
||||
# Install packages with security checks and dependency resolution
|
||||
SYSTEM MANAGEMENT:
|
||||
apt-layer --init
|
||||
# Initialize Particle-OS system
|
||||
|
||||
apt-layer --advanced-remove packages
|
||||
# Remove packages with dependency checking and safety validation
|
||||
apt-layer --reset
|
||||
# Reset Particle-OS system
|
||||
|
||||
apt-layer --advanced-update packages
|
||||
# Update packages with rollback capability and backup creation
|
||||
|
||||
apt-layer --add-user username role
|
||||
# Add user to package management system with specified role
|
||||
|
||||
apt-layer --remove-user username
|
||||
# Remove user from package management system
|
||||
|
||||
apt-layer --list-users
|
||||
# List all package management users and roles
|
||||
|
||||
apt-layer --package-info package
|
||||
# Get detailed information about a package
|
||||
|
||||
apt-layer --package-status
|
||||
# Show advanced package management system status
|
||||
|
||||
apt-layer --list-backups
|
||||
# List all package backups
|
||||
|
||||
apt-layer --cleanup-backups [days]
|
||||
# Clean up backups older than specified days (default: 30)
|
||||
|
||||
DKMS & NVIDIA SUPPORT:
|
||||
apt-layer --dkms-status
|
||||
# Show DKMS module status and configuration
|
||||
|
||||
apt-layer --dkms-install module-name version
|
||||
# Install DKMS module with atomic transaction support
|
||||
|
||||
apt-layer --dkms-remove module-name version
|
||||
# Remove DKMS module with rollback capability
|
||||
|
||||
apt-layer --dkms-rebuild module-name version [kernel-version]
|
||||
# Rebuild DKMS module for specific kernel version
|
||||
|
||||
apt-layer --dkms-rebuild-all [kernel-version]
|
||||
# Rebuild all installed DKMS modules
|
||||
|
||||
apt-layer --dkms-list
|
||||
# List all installed DKMS modules
|
||||
|
||||
apt-layer --nvidia-install [driver-version]
|
||||
# Install NVIDIA drivers using graphics-drivers PPA (auto-detects optimal version)
|
||||
|
||||
apt-layer --nvidia-status
|
||||
# Show NVIDIA driver status and GPU information
|
||||
|
||||
apt-layer --gpu-switch gpu-type
|
||||
# Switch GPU using NVIDIA Prime (integrated/nvidia/auto)
|
||||
|
||||
apt-layer --nvidia-prime-configure
|
||||
# Configure NVIDIA Prime for GPU switching
|
||||
|
||||
LAYER SIGNING & VERIFICATION:
|
||||
apt-layer --generate-key key-name type
|
||||
# Generate signing key pair (sigstore, gpg)
|
||||
|
||||
apt-layer --sign-layer layer path key-name
|
||||
# Sign layer with specified key
|
||||
|
||||
apt-layer --verify-layer layer path
|
||||
# Verify layer signature
|
||||
|
||||
apt-layer --revoke-layer layer path [reason]
|
||||
# Revoke layer (mark as untrusted)
|
||||
|
||||
apt-layer --list-keys
|
||||
# List all signing keys
|
||||
|
||||
apt-layer --list-signatures
|
||||
# List all layer signatures
|
||||
|
||||
apt-layer --layer-status layer path
|
||||
# Show layer signing status
|
||||
|
||||
AUDIT & COMPLIANCE:
|
||||
apt-layer --query-audit format [filters...]
|
||||
# Query audit logs with filters (json, csv, table)
|
||||
|
||||
apt-layer --export-audit format [output-file] [filters...]
|
||||
# Export audit logs to file (json, csv, html)
|
||||
|
||||
apt-layer --generate-compliance-report framework [period] [format]
|
||||
# Generate compliance report (sox, pci-dss)
|
||||
|
||||
apt-layer --list-audit-reports
|
||||
# List all audit reports
|
||||
|
||||
apt-layer --audit-status
|
||||
# Show audit system status
|
||||
|
||||
apt-layer --cleanup-audit-logs [days]
|
||||
# Clean up old audit logs (default: 90 days)
|
||||
|
||||
SECURITY SCANNING:
|
||||
apt-layer --scan-package package-name [version] [scan-level]
|
||||
# Scan package for vulnerabilities (standard, thorough, quick)
|
||||
|
||||
apt-layer --scan-layer layer-path [scan-level]
|
||||
# Scan layer for vulnerabilities
|
||||
|
||||
apt-layer --generate-security-report type [format] [scan-level]
|
||||
# Generate security report (package, layer, system)
|
||||
|
||||
apt-layer --security-status
|
||||
# Show security scanning system status
|
||||
|
||||
apt-layer --update-cve-database
|
||||
# Update CVE database from NVD
|
||||
|
||||
apt-layer --cleanup-security-reports [days]
|
||||
# Clean up old security reports (default: 90 days)
|
||||
|
||||
ADMIN UTILITIES:
|
||||
apt-layer admin health
|
||||
# System health check and diagnostics
|
||||
|
||||
apt-layer admin perf
|
||||
# Performance analytics and resource usage
|
||||
|
||||
apt-layer admin cleanup
|
||||
# Maintenance cleanup
|
||||
|
||||
apt-layer admin backup
|
||||
# Backup configs and layers
|
||||
|
||||
apt-layer admin restore
|
||||
# Restore from backup
|
||||
|
||||
ENTERPRISE FEATURES:
|
||||
apt-layer tenant action [options]
|
||||
# Multi-tenant management
|
||||
|
||||
apt-layer compliance action [options]
|
||||
# Compliance framework management
|
||||
|
||||
apt-layer enterprise action [options]
|
||||
# Enterprise integration
|
||||
|
||||
apt-layer monitoring action [options]
|
||||
# Monitoring and alerting
|
||||
|
||||
CLOUD INTEGRATION:
|
||||
apt-layer cloud action [options]
|
||||
# Cloud provider integration (AWS, Azure, GCP)
|
||||
|
||||
apt-layer kubernetes action [options]
|
||||
# Kubernetes integration (EKS, AKS, GKE, OpenShift)
|
||||
|
||||
apt-layer orchestration action [options]
|
||||
# Container orchestration
|
||||
|
||||
apt-layer multicloud action [options]
|
||||
# Multi-cloud deployment
|
||||
|
||||
apt-layer cloud-security action [options]
|
||||
# Cloud-native security
|
||||
|
||||
For category-specific help: apt-layer category --help
|
||||
For examples: apt-layer --examples
|
||||
EXAMPLES:
|
||||
apt-layer ubuntu-ublue/base/24.04 ubuntu-ublue/gaming/24.04 steam wine
|
||||
apt-layer --container ubuntu-ublue/base/24.04 ubuntu-ublue/dev/24.04 vscode git
|
||||
apt-layer --dpkg-install curl wget
|
||||
apt-layer --live-install firefox
|
||||
apt-layer install steam wine
|
||||
apt-layer status
|
||||
EOF
|
||||
}
|
||||
|
||||
|
|
@ -495,11 +323,12 @@ Examples:
|
|||
EOF
|
||||
}
|
||||
|
||||
# Show image management help
|
||||
show_image_help() {
|
||||
cat << 'EOF'
|
||||
Image Management Commands
|
||||
IMAGE MANAGEMENT COMMANDS:
|
||||
|
||||
BASIC IMAGE OPERATIONS:
|
||||
IMAGE OPERATIONS:
|
||||
apt-layer --list
|
||||
# List all available ComposeFS images/layers
|
||||
|
||||
|
|
@ -519,12 +348,11 @@ OCI INTEGRATION:
|
|||
apt-layer --oci-status
|
||||
# Show OCI integration system status
|
||||
|
||||
Examples:
|
||||
EXAMPLES:
|
||||
apt-layer --list
|
||||
apt-layer --info ubuntu-ublue/gaming/24.04
|
||||
apt-layer --remove old-image
|
||||
apt-layer --oci-export ubuntu-ublue/gaming/24.04 ubuntu-ublue/gaming:latest
|
||||
apt-layer --oci-import ubuntu:24.04 ubuntu-ublue/base/24.04
|
||||
apt-layer --info ubuntu-ublue/base/24.04
|
||||
apt-layer --remove old-layer
|
||||
apt-layer --oci-export my-image oci:my-registry/my-image:latest
|
||||
EOF
|
||||
}
|
||||
|
||||
|
|
@ -737,53 +565,6 @@ Examples:
|
|||
EOF
|
||||
}
|
||||
|
||||
show_dkms_help() {
|
||||
cat << 'EOF'
|
||||
DKMS & NVIDIA Support Commands
|
||||
|
||||
DKMS MODULE MANAGEMENT:
|
||||
apt-layer --dkms-status
|
||||
# Show DKMS module status and configuration
|
||||
|
||||
apt-layer --dkms-install module-name version
|
||||
# Install DKMS module with atomic transaction support
|
||||
|
||||
apt-layer --dkms-remove module-name version
|
||||
# Remove DKMS module with rollback capability
|
||||
|
||||
apt-layer --dkms-rebuild module-name version [kernel-version]
|
||||
# Rebuild DKMS module for specific kernel version
|
||||
|
||||
apt-layer --dkms-rebuild-all [kernel-version]
|
||||
# Rebuild all installed DKMS modules
|
||||
|
||||
apt-layer --dkms-list
|
||||
# List all installed DKMS modules
|
||||
|
||||
NVIDIA DRIVER SUPPORT:
|
||||
apt-layer --nvidia-install [driver-version]
|
||||
# Install NVIDIA drivers using graphics-drivers PPA
|
||||
# Auto-detects optimal driver version if not specified
|
||||
|
||||
apt-layer --nvidia-status
|
||||
# Show NVIDIA driver status and GPU information
|
||||
|
||||
apt-layer --gpu-switch gpu-type
|
||||
# Switch GPU using NVIDIA Prime
|
||||
# Options: integrated, nvidia, auto
|
||||
|
||||
apt-layer --nvidia-prime-configure
|
||||
# Configure NVIDIA Prime for GPU switching
|
||||
|
||||
Examples:
|
||||
apt-layer --dkms-install nvidia-driver 535
|
||||
apt-layer --dkms-rebuild virtualbox-dkms 6.1.38
|
||||
apt-layer --nvidia-install auto
|
||||
apt-layer --gpu-switch nvidia
|
||||
apt-layer --dkms-status
|
||||
EOF
|
||||
}
|
||||
|
||||
# Show examples
|
||||
show_examples() {
|
||||
cat << 'EOF'
|
||||
|
|
@ -834,48 +615,6 @@ IMAGE MANAGEMENT:
|
|||
|
||||
# Export as OCI image
|
||||
apt-layer --oci-export ubuntu-ublue/gaming/24.04 ubuntu-ublue/gaming:latest
|
||||
|
||||
ADVANCED FEATURES:
|
||||
# Generate signing key
|
||||
apt-layer --generate-key my-key sigstore
|
||||
|
||||
# Sign layer
|
||||
apt-layer --sign-layer layer.squashfs my-key
|
||||
|
||||
# Security scan
|
||||
apt-layer --scan-package firefox
|
||||
|
||||
# System health check
|
||||
apt-layer admin health
|
||||
|
||||
DKMS & NVIDIA SUPPORT:
|
||||
# Install NVIDIA drivers
|
||||
apt-layer --nvidia-install auto
|
||||
|
||||
# Install DKMS module
|
||||
apt-layer --dkms-install virtualbox-dkms 6.1.38
|
||||
|
||||
# Rebuild DKMS modules after kernel update
|
||||
apt-layer --dkms-rebuild-all
|
||||
|
||||
# Switch to NVIDIA GPU
|
||||
apt-layer --gpu-switch nvidia
|
||||
|
||||
# Check DKMS status
|
||||
apt-layer --dkms-status
|
||||
|
||||
ENTERPRISE FEATURES:
|
||||
# Create tenant
|
||||
apt-layer tenant create my-org
|
||||
|
||||
# Enable compliance framework
|
||||
apt-layer compliance enable SOX
|
||||
|
||||
# Cloud deployment
|
||||
apt-layer cloud deploy ubuntu-ublue/gaming/24.04 aws ecr
|
||||
|
||||
# Kubernetes deployment
|
||||
apt-layer kubernetes deploy ubuntu-ublue/gaming/24.04 gaming-ns
|
||||
EOF
|
||||
}
|
||||
|
||||
|
|
@ -1078,12 +817,6 @@ main() {
|
|||
exit 0
|
||||
fi
|
||||
;;
|
||||
dkms)
|
||||
if [[ "${2:-}" == "--help" || "${2:-}" == "-h" ]]; then
|
||||
show_dkms_help
|
||||
exit 0
|
||||
fi
|
||||
;;
|
||||
security)
|
||||
if [[ "${2:-}" == "--help" || "${2:-}" == "-h" ]]; then
|
||||
show_security_help
|
||||
|
|
|
|||
|
|
@ -1,207 +0,0 @@
|
|||
# Particle-OS apt-layer Test Scripts
|
||||
|
||||
This directory contains comprehensive test scripts for validating Particle-OS apt-layer functionality, including package management, layer creation, and atomic transactions.
|
||||
|
||||
## Test Scripts Overview
|
||||
|
||||
### Core Functionality Tests
|
||||
|
||||
#### `test-apt-layer-basic.sh`
|
||||
**Purpose**: Validates core apt-layer functionality and basic operations.
|
||||
|
||||
**Tests**:
|
||||
- Help system and command validation
|
||||
- System status and health checks
|
||||
- Image and layer listing functionality
|
||||
- Base image creation with multi-layer support
|
||||
- Package installation and management
|
||||
- Layer creation from base images
|
||||
- Image mounting and content access
|
||||
- DKMS functionality and NVIDIA support
|
||||
- Cleanup and maintenance operations
|
||||
- Image removal and cleanup
|
||||
|
||||
**Usage**:
|
||||
```bash
|
||||
sudo ./test-apt-layer-basic.sh
|
||||
```
|
||||
|
||||
**Requirements**:
|
||||
- Particle-OS tools installed
|
||||
- Root privileges
|
||||
- apt-layer script available at `/usr/local/bin/apt-layer.sh`
|
||||
|
||||
## Test Environment Setup
|
||||
|
||||
### Prerequisites
|
||||
1. **Install Particle-OS Tools**:
|
||||
```bash
|
||||
sudo ./install-particle-os.sh
|
||||
```
|
||||
|
||||
2. **Install Required Packages**:
|
||||
```bash
|
||||
sudo apt update
|
||||
sudo apt install squashfs-tools jq coreutils util-linux
|
||||
```
|
||||
|
||||
### Test Execution
|
||||
|
||||
#### Running Individual Tests
|
||||
```bash
|
||||
# Basic functionality test
|
||||
sudo ./test-scripts/test-apt-layer-basic.sh
|
||||
```
|
||||
|
||||
#### Running All Tests
|
||||
```bash
|
||||
# Run all tests sequentially
|
||||
for test in test-scripts/test-*.sh; do
|
||||
echo "Running $test..."
|
||||
sudo "$test"
|
||||
echo "Completed $test"
|
||||
echo "---"
|
||||
done
|
||||
```
|
||||
|
||||
## Test Results Interpretation
|
||||
|
||||
### Success Criteria
|
||||
- **Basic Tests**: All core functionality working correctly
|
||||
- **Package Management**: Package installation and layer creation working
|
||||
- **DKMS Tests**: DKMS functionality available and working
|
||||
- **NVIDIA Tests**: NVIDIA support available and working
|
||||
|
||||
### Common Issues and Solutions
|
||||
|
||||
#### Permission Denied Errors
|
||||
```
|
||||
[ERROR] Permission denied
|
||||
```
|
||||
**Solution**: Ensure running with root privileges
|
||||
```bash
|
||||
sudo ./test-script.sh
|
||||
```
|
||||
|
||||
#### apt-layer Script Not Found
|
||||
```
|
||||
[ERROR] apt-layer script not found
|
||||
```
|
||||
**Solution**: Install Particle-OS tools
|
||||
```bash
|
||||
sudo ./install-particle-os.sh
|
||||
```
|
||||
|
||||
#### Package Installation Failures
|
||||
```
|
||||
[WARNING] Package installation test failed
|
||||
```
|
||||
**Solution**: Check network connectivity and package availability
|
||||
```bash
|
||||
sudo apt update
|
||||
sudo apt install curl wget
|
||||
```
|
||||
|
||||
## Test Customization
|
||||
|
||||
### Environment Variables
|
||||
```bash
|
||||
# Set backend preference
|
||||
export UBLUE_COMPOSEFS_BACKEND="erofs" # or "squashfs" or "auto"
|
||||
|
||||
# Set compression method
|
||||
export UBLUE_SQUASHFS_COMPRESSION="lz4" # or "xz" or "gzip"
|
||||
|
||||
# Run test with custom settings
|
||||
sudo UBLUE_COMPOSEFS_BACKEND="erofs" ./test-apt-layer-basic.sh
|
||||
```
|
||||
|
||||
### Test Parameters
|
||||
Each test script can be customized by modifying the following (see the sketch after this list):
|
||||
- Test package lists
|
||||
- Layer creation parameters
|
||||
- Performance thresholds
|
||||
- Test duration and iterations
|
||||
|
||||
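A minimal sketch of that kind of customization, using illustrative variable names that are assumptions for this sketch rather than the shipped scripts' actual settings:

```bash
# Illustrative only: TEST_PACKAGES and MAX_BUILD_SECONDS are assumed names,
# not variables taken from the shipped test scripts.
TEST_PACKAGES=("curl" "wget" "htop")   # package list exercised by the install tests
MAX_BUILD_SECONDS=300                  # simple performance threshold

start=$(date +%s)
# ... create the test layer and install "${TEST_PACKAGES[@]}" here ...
elapsed=$(( $(date +%s) - start ))
if (( elapsed > MAX_BUILD_SECONDS )); then
    echo "Layer creation took ${elapsed}s, threshold is ${MAX_BUILD_SECONDS}s" >&2
    exit 1
fi
```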
## Integration with CI/CD
|
||||
|
||||
### Automated Testing
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# Example CI/CD test script for apt-layer
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Run all tests and collect results
|
||||
test_results=()
|
||||
for test in test-scripts/test-*.sh; do
|
||||
if sudo "$test"; then
|
||||
test_results+=("PASS: $(basename "$test")")
|
||||
else
|
||||
test_results+=("FAIL: $(basename "$test")")
|
||||
fi
|
||||
done
|
||||
|
||||
# Report results
|
||||
echo "Test Results:"
|
||||
for result in "${test_results[@]}"; do
|
||||
echo " $result"
|
||||
done
|
||||
|
||||
# Exit with failure if any test failed
|
||||
if [[ " ${test_results[*]} " =~ " FAIL: " ]]; then
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Debug Mode
|
||||
Enable verbose output for debugging:
|
||||
```bash
|
||||
# Set debug environment
|
||||
export PARTICLE_DEBUG=1
|
||||
sudo ./test-scripts/test-apt-layer-basic.sh
|
||||
```
|
||||
|
||||
### Log Analysis
|
||||
Check logs for detailed error information:
|
||||
```bash
|
||||
# View apt-layer logs
|
||||
sudo tail -f /var/log/particle-os/apt-layer.log
|
||||
|
||||
# View system logs
|
||||
sudo journalctl -f -u particle-os
|
||||
```
|
||||
|
||||
### System Requirements Verification
|
||||
```bash
|
||||
# Check system requirements
|
||||
sudo ./test-scripts/test-apt-layer-basic.sh 2>&1 | grep -E "(ERROR|WARNING|REQUIRED)"
|
||||
```
|
||||
|
||||
## Contributing
|
||||
|
||||
### Adding New Tests
|
||||
1. **Follow naming convention**: `test-<feature>-<type>.sh`
|
||||
2. **Include proper cleanup**: Use trap handlers for cleanup
|
||||
3. **Add documentation**: Update this README with new test details
|
||||
4. **Test thoroughly**: Validate on multiple systems
|
||||
|
||||
### Test Standards
|
||||
- **Error handling**: Comprehensive error checking and reporting
|
||||
- **Cleanup**: Proper resource cleanup in all scenarios
|
||||
- **Documentation**: Clear test purpose and requirements
|
||||
- **Portability**: Work across different Ubuntu versions
|
||||
|
||||
## Support
|
||||
|
||||
For issues with test scripts:
|
||||
1. Check the troubleshooting section above
|
||||
2. Review system requirements and prerequisites
|
||||
3. Check Particle-OS documentation
|
||||
4. Report issues with detailed system information
|
||||
|
||||
---
|
||||
|
||||
**Note**: These test scripts are designed to validate Particle-OS apt-layer functionality and help ensure system reliability. Regular testing is recommended for development and deployment environments.
|
||||
|
|
@ -1,332 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
################################################################################################################
|
||||
# #
|
||||
# Particle-OS apt-layer Basic Test Script #
|
||||
# Tests basic apt-layer functionality including layer creation, package management, and atomic transactions #
|
||||
# #
|
||||
################################################################################################################
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Logging functions
|
||||
log_info() {
|
||||
echo -e "${BLUE}[INFO]${NC} $1"
|
||||
}
|
||||
|
||||
log_success() {
|
||||
echo -e "${GREEN}[SUCCESS]${NC} $1"
|
||||
}
|
||||
|
||||
log_warning() {
|
||||
echo -e "${YELLOW}[WARNING]${NC} $1"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
# Configuration
|
||||
APT_LAYER_SCRIPT="/usr/local/bin/apt-layer.sh"
|
||||
TEST_DIR="/tmp/particle-os-apt-layer-basic-test-$$"
|
||||
TEST_BASE_IMAGE="test-base-image"
|
||||
TEST_APP_IMAGE="test-app-image"
|
||||
|
||||
# Cleanup function
|
||||
cleanup() {
|
||||
log_info "Cleaning up test environment..."
|
||||
|
||||
# Remove test images
|
||||
"$APT_LAYER_SCRIPT" remove "$TEST_APP_IMAGE" 2>/dev/null || true
|
||||
"$APT_LAYER_SCRIPT" remove "$TEST_BASE_IMAGE" 2>/dev/null || true
|
||||
|
||||
# Remove test directory
|
||||
rm -rf "$TEST_DIR" 2>/dev/null || true
|
||||
|
||||
log_info "Cleanup completed"
|
||||
}
|
||||
|
||||
# Set up trap for cleanup
|
||||
trap cleanup EXIT INT TERM
|
||||
|
||||
# Test functions
|
||||
test_apt_layer_help() {
|
||||
log_info "Testing apt-layer help system..."
|
||||
|
||||
if "$APT_LAYER_SCRIPT" help >/dev/null 2>&1; then
|
||||
log_success "Help system test passed"
|
||||
return 0
|
||||
else
|
||||
log_error "Help system test failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_status() {
|
||||
log_info "Testing apt-layer status..."
|
||||
|
||||
if "$APT_LAYER_SCRIPT" status >/dev/null 2>&1; then
|
||||
log_success "Status test passed"
|
||||
return 0
|
||||
else
|
||||
log_error "Status test failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_list_commands() {
|
||||
log_info "Testing apt-layer listing commands..."
|
||||
|
||||
# Test list-images
|
||||
if "$APT_LAYER_SCRIPT" list-images >/dev/null 2>&1; then
|
||||
log_success "list-images test passed"
|
||||
else
|
||||
log_error "list-images test failed"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Test list-layers
|
||||
if "$APT_LAYER_SCRIPT" list-layers >/dev/null 2>&1; then
|
||||
log_success "list-layers test passed"
|
||||
else
|
||||
log_error "list-layers test failed"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Test list-mounts
|
||||
if "$APT_LAYER_SCRIPT" list-mounts >/dev/null 2>&1; then
|
||||
log_success "list-mounts test passed"
|
||||
else
|
||||
log_error "list-mounts test failed"
|
||||
return 1
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
test_apt_layer_base_image_creation() {
|
||||
log_info "Testing apt-layer base image creation..."
|
||||
|
||||
# Create test source directory
|
||||
local test_source="$TEST_DIR/base"
|
||||
mkdir -p "$test_source"
|
||||
|
||||
# Add some basic content
|
||||
echo "Base system content" > "$test_source/base.txt"
|
||||
mkdir -p "$test_source/etc"
|
||||
echo "base_config=value" > "$test_source/etc/base.conf"
|
||||
|
||||
# Create base image
|
||||
if "$APT_LAYER_SCRIPT" create "$TEST_BASE_IMAGE" "$test_source"; then
|
||||
log_success "Base image creation test passed"
|
||||
return 0
|
||||
else
|
||||
log_error "Base image creation test failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_package_installation() {
|
||||
log_info "Testing apt-layer package installation..."
|
||||
|
||||
# Test installing a simple package (using a mock approach)
|
||||
# In a real test, this would use actual package names
|
||||
local test_packages=("curl" "wget")
|
||||
|
||||
for package in "${test_packages[@]}"; do
|
||||
log_info "Testing package installation: $package"
|
||||
|
||||
# Check if package is available (mock test)
|
||||
if apt-cache show "$package" >/dev/null 2>&1; then
|
||||
log_success "Package $package is available"
|
||||
else
|
||||
log_warning "Package $package not available (skipping)"
|
||||
continue
|
||||
fi
|
||||
done
|
||||
|
||||
log_success "Package installation test completed"
|
||||
return 0
|
||||
}
|
||||
|
||||
test_apt_layer_layer_creation() {
|
||||
log_info "Testing apt-layer layer creation..."
|
||||
|
||||
# Create a new layer based on the base image
|
||||
local test_packages=("curl")
|
||||
|
||||
if "$APT_LAYER_SCRIPT" install "$TEST_BASE_IMAGE" "$TEST_APP_IMAGE" "${test_packages[@]}"; then
|
||||
log_success "Layer creation test passed"
|
||||
return 0
|
||||
else
|
||||
log_warning "Layer creation test failed (may be expected in test environment)"
|
||||
return 0 # Not a critical failure in test environment
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_image_mounting() {
|
||||
log_info "Testing apt-layer image mounting..."
|
||||
|
||||
# Create mount point
|
||||
local mount_point="$TEST_DIR/mount"
|
||||
mkdir -p "$mount_point"
|
||||
|
||||
# Try to mount the app image
|
||||
if "$APT_LAYER_SCRIPT" mount "$TEST_APP_IMAGE" "$mount_point"; then
|
||||
log_success "Image mounting test passed"
|
||||
|
||||
# Test content access
|
||||
if [[ -f "$mount_point/base.txt" ]]; then
|
||||
log_success "Content access test passed"
|
||||
else
|
||||
log_warning "Content access test failed"
|
||||
fi
|
||||
|
||||
# Unmount
|
||||
"$APT_LAYER_SCRIPT" unmount "$mount_point"
|
||||
return 0
|
||||
else
|
||||
log_warning "Image mounting test failed (may be expected in test environment)"
|
||||
return 0 # Not a critical failure in test environment
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_cleanup() {
|
||||
log_info "Testing apt-layer cleanup..."
|
||||
|
||||
if "$APT_LAYER_SCRIPT" cleanup >/dev/null 2>&1; then
|
||||
log_success "Cleanup test passed"
|
||||
return 0
|
||||
else
|
||||
log_warning "Cleanup test failed (may be normal if no unreferenced layers)"
|
||||
return 0 # Not a critical failure
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_image_removal() {
|
||||
log_info "Testing apt-layer image removal..."
|
||||
|
||||
# Remove test images
|
||||
if "$APT_LAYER_SCRIPT" remove "$TEST_APP_IMAGE"; then
|
||||
log_success "App image removal test passed"
|
||||
else
|
||||
log_warning "App image removal test failed"
|
||||
fi
|
||||
|
||||
if "$APT_LAYER_SCRIPT" remove "$TEST_BASE_IMAGE"; then
|
||||
log_success "Base image removal test passed"
|
||||
return 0
|
||||
else
|
||||
log_error "Base image removal test failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
test_apt_layer_dkms_functionality() {
|
||||
log_info "Testing apt-layer DKMS functionality..."
|
||||
|
||||
# Test DKMS status
|
||||
if "$APT_LAYER_SCRIPT" dkms-status >/dev/null 2>&1; then
|
||||
log_success "DKMS status test passed"
|
||||
else
|
||||
log_warning "DKMS status test failed (may be expected if no DKMS modules)"
|
||||
fi
|
||||
|
||||
# Test DKMS list
|
||||
if "$APT_LAYER_SCRIPT" dkms-list >/dev/null 2>&1; then
|
||||
log_success "DKMS list test passed"
|
||||
else
|
||||
log_warning "DKMS list test failed (may be expected if no DKMS modules)"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
test_apt_layer_nvidia_functionality() {
|
||||
log_info "Testing apt-layer NVIDIA functionality..."
|
||||
|
||||
# Test NVIDIA status
|
||||
if "$APT_LAYER_SCRIPT" nvidia-status >/dev/null 2>&1; then
|
||||
log_success "NVIDIA status test passed"
|
||||
else
|
||||
log_warning "NVIDIA status test failed (may be expected if no NVIDIA hardware)"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Main test execution
|
||||
main() {
|
||||
log_info "Starting Particle-OS apt-layer Basic Tests"
|
||||
log_info "Test directory: $TEST_DIR"
|
||||
log_info "apt-layer script: $APT_LAYER_SCRIPT"
|
||||
|
||||
# Check if apt-layer script exists
|
||||
if [[ ! -x "$APT_LAYER_SCRIPT" ]]; then
|
||||
log_error "apt-layer script not found: $APT_LAYER_SCRIPT"
|
||||
log_info "Please install Particle-OS tools first"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Create test directory
|
||||
mkdir -p "$TEST_DIR"
|
||||
|
||||
# Test counter
|
||||
local total_tests=0
|
||||
local passed_tests=0
|
||||
local failed_tests=0
|
||||
|
||||
# Run tests
|
||||
local tests=(
|
||||
"test_apt_layer_help"
|
||||
"test_apt_layer_status"
|
||||
"test_apt_layer_list_commands"
|
||||
"test_apt_layer_base_image_creation"
|
||||
"test_apt_layer_package_installation"
|
||||
"test_apt_layer_layer_creation"
|
||||
"test_apt_layer_image_mounting"
|
||||
"test_apt_layer_cleanup"
|
||||
"test_apt_layer_image_removal"
|
||||
"test_apt_layer_dkms_functionality"
|
||||
"test_apt_layer_nvidia_functionality"
|
||||
)
|
||||
|
||||
for test_func in "${tests[@]}"; do
|
||||
total_tests=$((total_tests + 1))
|
||||
log_info "Running test: $test_func"
|
||||
|
||||
if "$test_func"; then
|
||||
passed_tests=$((passed_tests + 1))
|
||||
log_success "Test passed: $test_func"
|
||||
else
|
||||
failed_tests=$((failed_tests + 1))
|
||||
log_error "Test failed: $test_func"
|
||||
fi
|
||||
|
||||
echo
|
||||
done
|
||||
|
||||
# Summary
|
||||
log_info "Test Summary:"
|
||||
log_info " Total tests: $total_tests"
|
||||
log_info " Passed: $passed_tests"
|
||||
log_info " Failed: $failed_tests"
|
||||
|
||||
if [[ $failed_tests -eq 0 ]]; then
|
||||
log_success "All tests passed! apt-layer basic functionality is working correctly."
|
||||
exit 0
|
||||
else
|
||||
log_warning "Some tests failed. Check the output above for details."
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Run main function
|
||||
main "$@"
|
||||
|
|
@ -1,63 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Test script for Bazzite-style status output in Particle-OS bootc-alternative
|
||||
# This demonstrates the new deployment tracking functionality
|
||||
|
||||
set -e
|
||||
|
||||
echo "=== Particle-OS Bazzite-Style Status Test ==="
|
||||
echo "Testing the new deployment tracking functionality"
|
||||
echo ""
|
||||
|
||||
# Initialize deployment tracking
|
||||
echo "1. Initializing deployment tracking..."
|
||||
./bootc-alternative.sh init-deployment
|
||||
echo ""
|
||||
|
||||
# Show initial status
|
||||
echo "2. Initial status (should show 'unknown' for current deployment):"
|
||||
./bootc-alternative.sh status
|
||||
echo ""
|
||||
|
||||
# Stage a test deployment
|
||||
echo "3. Staging a test deployment..."
|
||||
./bootc-alternative.sh stage "ghcr.io/particle-os/baryon:stable" "sha256:test123456789" "41.20250127.1"
|
||||
echo ""
|
||||
|
||||
# Show status after staging
|
||||
echo "4. Status after staging (should show staged deployment):"
|
||||
./bootc-alternative.sh status
|
||||
echo ""
|
||||
|
||||
# Deploy the staged deployment
|
||||
echo "5. Deploying staged deployment..."
|
||||
./bootc-alternative.sh deploy
|
||||
echo ""
|
||||
|
||||
# Show status after deployment
|
||||
echo "6. Status after deployment (should show booted deployment):"
|
||||
./bootc-alternative.sh status
|
||||
echo ""
|
||||
|
||||
# Stage another deployment
|
||||
echo "7. Staging another deployment..."
|
||||
./bootc-alternative.sh stage "ghcr.io/particle-os/baryon:stable" "sha256:test987654321" "41.20250127.2"
|
||||
echo ""
|
||||
|
||||
# Show status with both current and staged
|
||||
echo "8. Status with current and staged deployments:"
|
||||
./bootc-alternative.sh status
|
||||
echo ""
|
||||
|
||||
# Show JSON status
|
||||
echo "9. JSON status output:"
|
||||
./bootc-alternative.sh status-json
|
||||
echo ""
|
||||
|
||||
echo "=== Test Complete ==="
|
||||
echo "The status output should now match the Bazzite-style format:"
|
||||
echo "- Staged image: (if any)"
|
||||
echo "- ● Booted image: (current)"
|
||||
echo "- Rollback image: (if any)"
|
||||
echo ""
|
||||
echo "Each with digest, version, and timestamp information."
|
||||
|
|
@ -1,399 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Particle-OS DKMS Functionality Test Script
|
||||
# Tests all DKMS features implemented in apt-layer
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Logging functions
|
||||
log_info() {
|
||||
echo -e "${BLUE}[INFO]${NC} $1"
|
||||
}
|
||||
|
||||
log_success() {
|
||||
echo -e "${GREEN}[SUCCESS]${NC} $1"
|
||||
}
|
||||
|
||||
log_warning() {
|
||||
echo -e "${YELLOW}[WARNING]${NC} $1"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
# Test configuration
|
||||
TEST_MODULE="test-dkms-module"
|
||||
TEST_VERSION="1.0.0"
|
||||
TEST_KERNEL="$(uname -r)"
|
||||
|
||||
# Check if running as root
|
||||
check_root() {
|
||||
if [[ $EUID -ne 0 ]]; then
|
||||
log_error "This script must be run as root for DKMS testing"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Check if apt-layer is available
|
||||
check_apt_layer() {
|
||||
if ! command -v apt-layer &> /dev/null; then
|
||||
log_error "apt-layer command not found. Please install Particle-OS tools first."
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Check if DKMS is available
|
||||
check_dkms() {
|
||||
if ! command -v dkms &> /dev/null; then
|
||||
log_warning "DKMS not found. Installing DKMS..."
|
||||
apt update
|
||||
apt install -y dkms
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 1: DKMS Status Command
|
||||
test_dkms_status() {
|
||||
log_info "Test 1: Testing DKMS status command"
|
||||
|
||||
if apt-layer --dkms-status; then
|
||||
log_success "DKMS status command works"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS status command failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 2: DKMS List Command
|
||||
test_dkms_list() {
|
||||
log_info "Test 2: Testing DKMS list command"
|
||||
|
||||
if apt-layer --dkms-list; then
|
||||
log_success "DKMS list command works"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS list command failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 3: Create Test DKMS Module
|
||||
create_test_dkms_module() {
|
||||
log_info "Test 3: Creating test DKMS module"
|
||||
|
||||
local test_dir="/tmp/test-dkms-module"
|
||||
local dkms_dir="/usr/src/${TEST_MODULE}-${TEST_VERSION}"
|
||||
|
||||
# Create test module directory
|
||||
mkdir -p "$test_dir"
|
||||
cd "$test_dir"
|
||||
|
||||
# Create simple test module
|
||||
cat > "test_module.c" << 'EOF'
|
||||
#include <linux/module.h>
|
||||
#include <linux/kernel.h>
|
||||
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_AUTHOR("Particle-OS Test");
|
||||
MODULE_DESCRIPTION("Test DKMS module for Particle-OS");
|
||||
MODULE_VERSION("1.0.0");
|
||||
|
||||
static int __init test_init(void) {
|
||||
printk(KERN_INFO "Test DKMS module loaded\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void __exit test_exit(void) {
|
||||
printk(KERN_INFO "Test DKMS module unloaded\n");
|
||||
}
|
||||
|
||||
module_init(test_init);
|
||||
module_exit(test_exit);
|
||||
EOF
|
||||
|
||||
# Create Makefile
|
||||
cat > "Makefile" << EOF
|
||||
obj-m += test_module.o
|
||||
|
||||
all:
|
||||
make -C /lib/modules/\$(shell uname -r)/build M=\$(PWD) modules
|
||||
|
||||
clean:
|
||||
make -C /lib/modules/\$(shell uname -r)/build M=\$(PWD) clean
|
||||
EOF
|
||||
|
||||
# Create dkms.conf
|
||||
cat > "dkms.conf" << EOF
|
||||
PACKAGE_NAME="test-dkms-module"
|
||||
PACKAGE_VERSION="1.0.0"
|
||||
BUILT_MODULE_NAME[0]="test_module"
|
||||
DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
|
||||
AUTOINSTALL="yes"
|
||||
EOF
|
||||
|
||||
# Copy to DKMS source directory
|
||||
cp -r "$test_dir" "$dkms_dir"
|
||||
|
||||
log_success "Test DKMS module created at $dkms_dir"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Test 4: Install DKMS Module
|
||||
test_dkms_install() {
|
||||
log_info "Test 4: Testing DKMS module installation"
|
||||
|
||||
if apt-layer --dkms-install "$TEST_MODULE" "$TEST_VERSION"; then
|
||||
log_success "DKMS module installation works"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS module installation failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 5: Verify DKMS Module Installation
|
||||
test_dkms_verify_installation() {
|
||||
log_info "Test 5: Verifying DKMS module installation"
|
||||
|
||||
# Check if module is listed in DKMS
|
||||
if dkms status | grep -q "$TEST_MODULE/$TEST_VERSION"; then
|
||||
log_success "DKMS module found in status"
|
||||
else
|
||||
log_error "DKMS module not found in status"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if module is loaded
|
||||
if lsmod | grep -q "test_module"; then
|
||||
log_success "DKMS module is loaded"
|
||||
else
|
||||
log_warning "DKMS module is not loaded (this is normal for test modules)"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Test 6: Rebuild DKMS Module
|
||||
test_dkms_rebuild() {
|
||||
log_info "Test 6: Testing DKMS module rebuild"
|
||||
|
||||
if apt-layer --dkms-rebuild "$TEST_MODULE" "$TEST_VERSION" "$TEST_KERNEL"; then
|
||||
log_success "DKMS module rebuild works"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS module rebuild failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 7: Rebuild All DKMS Modules
|
||||
test_dkms_rebuild_all() {
|
||||
log_info "Test 7: Testing rebuild all DKMS modules"
|
||||
|
||||
if apt-layer --dkms-rebuild-all "$TEST_KERNEL"; then
|
||||
log_success "DKMS rebuild all works"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS rebuild all failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 8: Remove DKMS Module
|
||||
test_dkms_remove() {
|
||||
log_info "Test 8: Testing DKMS module removal"
|
||||
|
||||
if apt-layer --dkms-remove "$TEST_MODULE" "$TEST_VERSION"; then
|
||||
log_success "DKMS module removal works"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS module removal failed"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 9: Verify DKMS Module Removal
|
||||
test_dkms_verify_removal() {
|
||||
log_info "Test 9: Verifying DKMS module removal"
|
||||
|
||||
# Check if module is no longer listed in DKMS
|
||||
if ! dkms status | grep -q "$TEST_MODULE/$TEST_VERSION"; then
|
||||
log_success "DKMS module successfully removed"
|
||||
return 0
|
||||
else
|
||||
log_error "DKMS module still found in status"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 10: NVIDIA Status Command
|
||||
test_nvidia_status() {
|
||||
log_info "Test 10: Testing NVIDIA status command"
|
||||
|
||||
if apt-layer --nvidia-status; then
|
||||
log_success "NVIDIA status command works"
|
||||
return 0
|
||||
else
|
||||
log_warning "NVIDIA status command failed (may not have NVIDIA hardware)"
|
||||
return 0 # Not a failure if no NVIDIA hardware
|
||||
fi
|
||||
}
|
||||
|
||||
# Test 11: GPU Switch Command
|
||||
test_gpu_switch() {
|
||||
log_info "Test 11: Testing GPU switch command"
|
||||
|
||||
# Test with integrated GPU
|
||||
if apt-layer --gpu-switch integrated; then
|
||||
log_success "GPU switch to integrated works"
|
||||
else
|
||||
log_warning "GPU switch to integrated failed (may not have dual GPU)"
|
||||
fi
|
||||
|
||||
# Test with NVIDIA GPU
|
||||
if apt-layer --gpu-switch nvidia; then
|
||||
log_success "GPU switch to NVIDIA works"
|
||||
else
|
||||
log_warning "GPU switch to NVIDIA failed (may not have NVIDIA GPU)"
|
||||
fi
|
||||
|
||||
return 0 # Not a failure if no dual GPU setup
|
||||
}
|
||||
|
||||
# Test 12: NVIDIA Prime Configuration
|
||||
test_nvidia_prime_configure() {
|
||||
log_info "Test 12: Testing NVIDIA Prime configuration"
|
||||
|
||||
if apt-layer --nvidia-prime-configure; then
|
||||
log_success "NVIDIA Prime configuration works"
|
||||
return 0
|
||||
else
|
||||
log_warning "NVIDIA Prime configuration failed (may not have NVIDIA hardware)"
|
||||
return 0 # Not a failure if no NVIDIA hardware
|
||||
fi
|
||||
}
|
||||
|
||||
# Cleanup function
|
||||
cleanup() {
|
||||
log_info "Cleaning up test environment..."
|
||||
|
||||
# Remove test module if it exists
|
||||
if dkms status | grep -q "$TEST_MODULE/$TEST_VERSION"; then
|
||||
apt-layer --dkms-remove "$TEST_MODULE" "$TEST_VERSION" || true
|
||||
fi
|
||||
|
||||
# Remove test module directory
|
||||
rm -rf "/usr/src/${TEST_MODULE}-${TEST_VERSION}" || true
|
||||
rm -rf "/tmp/test-dkms-module" || true
|
||||
|
||||
log_success "Cleanup completed"
|
||||
}
|
||||
|
||||
# Main test function
|
||||
run_tests() {
|
||||
local test_results=()
|
||||
local test_count=0
|
||||
local passed_count=0
|
||||
local failed_count=0
|
||||
|
||||
log_info "Starting Particle-OS DKMS functionality tests..."
|
||||
echo "=================================================="
|
||||
|
||||
# Pre-test checks
|
||||
check_root
|
||||
check_apt_layer
|
||||
check_dkms
|
||||
|
||||
# Run tests
|
||||
local tests=(
|
||||
"test_dkms_status"
|
||||
"test_dkms_list"
|
||||
"create_test_dkms_module"
|
||||
"test_dkms_install"
|
||||
"test_dkms_verify_installation"
|
||||
"test_dkms_rebuild"
|
||||
"test_dkms_rebuild_all"
|
||||
"test_dkms_remove"
|
||||
"test_dkms_verify_removal"
|
||||
"test_nvidia_status"
|
||||
"test_gpu_switch"
|
||||
"test_nvidia_prime_configure"
|
||||
)
|
||||
|
||||
for test_func in "${tests[@]}"; do
|
||||
test_count=$((test_count + 1))   # avoid ((var++)), which exits under set -e when the result is 0
|
||||
log_info "Running test $test_count: $test_func"
|
||||
|
||||
if $test_func; then
|
||||
test_results+=("✅ $test_func")
|
||||
passed_count=$((passed_count + 1))
|
||||
else
|
||||
test_results+=("❌ $test_func")
|
||||
failed_count=$((failed_count + 1))
|
||||
fi
|
||||
|
||||
echo ""
|
||||
done
|
||||
|
||||
# Print results
|
||||
echo "=================================================="
|
||||
log_info "Test Results Summary:"
|
||||
echo "Total tests: $test_count"
|
||||
echo "Passed: $passed_count"
|
||||
echo "Failed: $failed_count"
|
||||
echo ""
|
||||
|
||||
log_info "Detailed Results:"
|
||||
for result in "${test_results[@]}"; do
|
||||
echo " $result"
|
||||
done
|
||||
|
||||
echo ""
|
||||
if [[ $failed_count -eq 0 ]]; then
|
||||
log_success "All DKMS tests passed! 🎉"
|
||||
return 0
|
||||
else
|
||||
log_error "Some DKMS tests failed. Please check the output above."
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Handle script interruption
|
||||
trap cleanup EXIT
|
||||
|
||||
# Parse command line arguments
|
||||
case "${1:-}" in
|
||||
--help|-h)
|
||||
echo "Particle-OS DKMS Functionality Test Script"
|
||||
echo ""
|
||||
echo "Usage: $0 [OPTIONS]"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --help, -h Show this help message"
|
||||
echo " --cleanup Run cleanup only"
|
||||
echo ""
|
||||
echo "This script tests all DKMS functionality implemented in Particle-OS apt-layer."
|
||||
echo "Must be run as root."
|
||||
exit 0
|
||||
;;
|
||||
--cleanup)
|
||||
cleanup
|
||||
exit 0
|
||||
;;
|
||||
"")
|
||||
run_tests
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown option: $1"
|
||||
echo "Use --help for usage information"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
|
@ -1,242 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Test Official ComposeFS Package Installation and Functionality
|
||||
# This script tests the newly available official ComposeFS package in Debian
|
||||
|
||||
set -e
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Logging functions
|
||||
log_info() {
|
||||
echo -e "${BLUE}[INFO]${NC} $1"
|
||||
}
|
||||
|
||||
log_success() {
|
||||
echo -e "${GREEN}[SUCCESS]${NC} $1"
|
||||
}
|
||||
|
||||
log_warning() {
|
||||
echo -e "${YELLOW}[WARNING]${NC} $1"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
# Test configuration
|
||||
TEST_DIR="/tmp/particle-os-composefs-test"
|
||||
TEST_IMAGE="test-official-composefs"
|
||||
TEST_MOUNT="/tmp/composefs-test-mount"
|
||||
|
||||
# Cleanup function
|
||||
cleanup() {
|
||||
log_info "Cleaning up test environment..."
|
||||
|
||||
# Unmount if mounted
|
||||
if mountpoint -q "$TEST_MOUNT" 2>/dev/null; then
|
||||
sudo umount "$TEST_MOUNT" 2>/dev/null || true
|
||||
fi
|
||||
|
||||
# Remove test directories
|
||||
rm -rf "$TEST_DIR" 2>/dev/null || true
|
||||
rm -rf "$TEST_MOUNT" 2>/dev/null || true
|
||||
|
||||
log_info "Cleanup completed"
|
||||
}
|
||||
|
||||
# Set up trap for cleanup
|
||||
trap cleanup EXIT
|
||||
|
||||
# Main test function
|
||||
main() {
|
||||
log_info "Starting Official ComposeFS Package Test"
|
||||
log_info "========================================"
|
||||
|
||||
# Check if running as root
|
||||
if [[ $EUID -ne 0 ]]; then
|
||||
log_error "This script must be run as root (use sudo)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Phase 1: Check package availability
|
||||
log_info "Phase 1: Checking package availability"
|
||||
echo "----------------------------------------"
|
||||
|
||||
# Update package list
|
||||
log_info "Updating package list..."
|
||||
apt update
|
||||
|
||||
# Check if composefs-tools package is available
|
||||
log_info "Checking for composefs-tools package..."
|
||||
if apt-cache search composefs-tools | grep -q composefs-tools; then
|
||||
log_success "composefs-tools package found in repositories"
|
||||
else
|
||||
log_warning "composefs-tools package not found in repositories"
|
||||
log_info "This is expected if the package hasn't propagated yet"
|
||||
log_info "Checking for alternative package names..."
|
||||
|
||||
# Check for alternative package names
|
||||
if apt-cache search composefs | grep -q composefs; then
|
||||
log_info "Found composefs-related packages:"
|
||||
apt-cache search composefs
|
||||
else
|
||||
log_warning "No composefs packages found in repositories"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Phase 2: Install package (if available)
|
||||
log_info ""
|
||||
log_info "Phase 2: Installing composefs-tools package"
|
||||
echo "---------------------------------------------"
|
||||
|
||||
# Try to install the package
|
||||
if apt-cache search composefs-tools | grep -q composefs-tools; then
|
||||
log_info "Installing composefs-tools package..."
|
||||
if apt install -y composefs-tools; then
|
||||
log_success "composefs-tools package installed successfully"
|
||||
else
|
||||
log_error "Failed to install composefs-tools package"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
log_warning "Skipping package installation (package not available)"
|
||||
log_info "This test will continue with source-built tools if available"
|
||||
fi
|
||||
|
||||
# Phase 3: Check tool availability
|
||||
log_info ""
|
||||
log_info "Phase 3: Checking tool availability"
|
||||
echo "-------------------------------------"
|
||||
|
||||
# Check for mkcomposefs
|
||||
if command -v mkcomposefs >/dev/null 2>&1; then
|
||||
log_success "mkcomposefs found: $(which mkcomposefs)"
|
||||
mkcomposefs --version 2>/dev/null || log_info "mkcomposefs version: available"
|
||||
else
|
||||
log_warning "mkcomposefs not found"
|
||||
fi
|
||||
|
||||
# Check for mount.composefs
|
||||
if command -v mount.composefs >/dev/null 2>&1; then
|
||||
log_success "mount.composefs found: $(which mount.composefs)"
|
||||
mount.composefs --help 2>/dev/null | head -5 || log_info "mount.composefs help: available"
|
||||
else
|
||||
log_warning "mount.composefs not found"
|
||||
fi
|
||||
|
||||
# Check for fsverity
|
||||
if command -v fsverity >/dev/null 2>&1; then
|
||||
log_success "fsverity found: $(which fsverity)"
|
||||
else
|
||||
log_warning "fsverity not found (optional for integrity verification)"
|
||||
fi
|
||||
|
||||
# Phase 4: Test Particle-OS integration
|
||||
log_info ""
|
||||
log_info "Phase 4: Testing Particle-OS integration"
|
||||
echo "------------------------------------------"
|
||||
|
||||
# Check if Particle-OS composefs script exists
|
||||
if [[ -f "/usr/local/bin/composefs-alternative.sh" ]]; then
|
||||
log_success "Particle-OS composefs script found"
|
||||
|
||||
# Test official status command
|
||||
log_info "Testing official status command..."
|
||||
if /usr/local/bin/composefs-alternative.sh official-status; then
|
||||
log_success "Official status command works"
|
||||
else
|
||||
log_warning "Official status command failed"
|
||||
fi
|
||||
else
|
||||
log_warning "Particle-OS composefs script not found at /usr/local/bin/composefs-alternative.sh"
|
||||
fi
|
||||
|
||||
# Phase 5: Test basic functionality (if tools available)
|
||||
log_info ""
|
||||
log_info "Phase 5: Testing basic functionality"
|
||||
echo "-------------------------------------"
|
||||
|
||||
if command -v mkcomposefs >/dev/null 2>&1 && command -v mount.composefs >/dev/null 2>&1; then
|
||||
log_info "Creating test environment..."
|
||||
|
||||
# Create test directories
|
||||
mkdir -p "$TEST_DIR"
|
||||
mkdir -p "$TEST_MOUNT"
|
||||
|
||||
# Create test content
|
||||
log_info "Creating test content..."
|
||||
echo "Hello from Official ComposeFS!" > "$TEST_DIR/test.txt"
|
||||
mkdir -p "$TEST_DIR/testdir"
|
||||
echo "Test file in subdirectory" > "$TEST_DIR/testdir/subfile.txt"
|
||||
|
||||
# Create ComposeFS image
|
||||
log_info "Creating ComposeFS image..."
|
||||
if mkcomposefs --content-dir="$TEST_DIR" --metadata-tree="$TEST_DIR.cfs"; then
|
||||
log_success "ComposeFS image created successfully"
|
||||
|
||||
# Mount ComposeFS image
|
||||
log_info "Mounting ComposeFS image..."
|
||||
if mount.composefs "$TEST_DIR.cfs" -o "basedir=$TEST_DIR" "$TEST_MOUNT"; then
|
||||
log_success "ComposeFS image mounted successfully"
|
||||
|
||||
# Test content
|
||||
log_info "Testing mounted content..."
|
||||
if [[ -f "$TEST_MOUNT/test.txt" ]]; then
|
||||
log_success "Test file found in mount"
|
||||
cat "$TEST_MOUNT/test.txt"
|
||||
else
|
||||
log_warning "Test file not found in mount"
|
||||
fi
|
||||
|
||||
if [[ -f "$TEST_MOUNT/testdir/subfile.txt" ]]; then
|
||||
log_success "Subdirectory file found in mount"
|
||||
cat "$TEST_MOUNT/testdir/subfile.txt"
|
||||
else
|
||||
log_warning "Subdirectory file not found in mount"
|
||||
fi
|
||||
|
||||
# Unmount
|
||||
log_info "Unmounting ComposeFS image..."
|
||||
umount "$TEST_MOUNT"
|
||||
log_success "ComposeFS image unmounted successfully"
|
||||
|
||||
else
|
||||
log_error "Failed to mount ComposeFS image"
|
||||
fi
|
||||
|
||||
# Clean up image
|
||||
rm -f "$TEST_DIR.cfs"
|
||||
|
||||
else
|
||||
log_error "Failed to create ComposeFS image"
|
||||
fi
|
||||
|
||||
else
|
||||
log_warning "Skipping functionality test (tools not available)"
|
||||
fi
|
||||
|
||||
# Phase 6: Summary
|
||||
log_info ""
|
||||
log_info "Phase 6: Test Summary"
|
||||
echo "---------------------"
|
||||
|
||||
log_info "Official ComposeFS Package Test completed"
|
||||
log_info "Check the output above for any issues or warnings"
|
||||
|
||||
if command -v mkcomposefs >/dev/null 2>&1 && command -v mount.composefs >/dev/null 2>&1; then
|
||||
log_success "✅ Official ComposeFS tools are available and functional"
|
||||
log_info "Particle-OS can now use official ComposeFS backend"
|
||||
else
|
||||
log_warning "⚠️ Official ComposeFS tools not available"
|
||||
log_info "Particle-OS will fall back to alternative implementation"
|
||||
fi
|
||||
}
|
||||
|
||||
# Run main function
|
||||
main "$@"
|
||||
BIN
test.cfs
BIN
test.cfs
Binary file not shown.
1
test.txt
1
test.txt
|
|
@ -1 +0,0 @@
|
|||
hello world
|
||||
|
|
@ -1 +0,0 @@
|
|||
hello world
|
||||
15
tools.md
15
tools.md
|
|
@ -6,7 +6,7 @@ This document provides a comparison of the core tools used in uBlue-OS and their
|
|||
|
||||
| uBlue-OS Tool | Particle-OS Equivalent | Description |
|
||||
|---------------|----------------------|-------------|
|
||||
| **rpm-ostree** | **apt-layer** | Package management and atomic system updates. rpm-ostree handles RPM packages on Fedora, while apt-layer manages DEB packages on Ubuntu with atomic transactions and rollback capabilities. |
|
||||
| **rpm-ostree** | **apt-layer** | Package management and atomic system updates. rpm-ostree handles RPM packages on Fedora, while apt-layer manages DEB packages on Ubuntu with atomic transactions, rollback capabilities, and now true atomic OSTree commits per package operation. The new workflow supports offline .deb install, robust overlay system, and DNS fixes for WSL environments. |
|
||||
| **bootc** | **bootc-alternative** | Container-native bootable image management. Handles deployment, staging, rollback, and status reporting for immutable OS images. Particle-OS version includes Bazzite-style status output and deployment tracking. |
|
||||
| **bootupd** | **bootupd-alternative** | Bootloader management and configuration. Manages UEFI/GRUB entries, kernel arguments, and boot configuration for atomic OS deployments. |
|
||||
| **skopeo** | **skopeo** | Container image inspection, copying, and verification. Essential for secure image management, signature verification, and registry operations. Used by both systems for image handling. |
|
||||
|
|
@ -18,7 +18,7 @@ This document provides a comparison of the core tools used in uBlue-OS and their
|
|||
| **particle-config.sh** | Centralized configuration management for Particle-OS. Manages paths, settings, and system configuration across all Particle-OS tools. |
|
||||
| **particle-logrotate.sh** | Log rotation and management for Particle-OS tools. Ensures proper log file maintenance and prevents disk space issues. |
|
||||
| **dracut-module.sh** | Dracut module management for kernel initramfs generation. Handles custom kernel modules and boot-time initialization for Particle-OS. |
|
||||
| **Official ComposeFS Tools** | **ARCHIVED**: composefs-alternative.sh moved to archive. Particle-OS now uses official `mkcomposefs` and `mount.composefs` from upstream with automatic backend selection and fallback support. |
|
||||
| **Official ComposeFS Tools** | **ARCHIVED**: composefs-alternative.sh moved to archive. Particle-OS now uses official `mkcomposefs` and `mount.composefs` from upstream with automatic backend selection and fallback support. All apt-layer atomic commits use official ComposeFS tooling for image creation and mounting. |
|
||||
| **install-particle-os.sh** | Professional installation script for Particle-OS tools. Installs all core tools to `/usr/local/bin/` with standardized names and proper permissions. |
|
||||
| **install-ubuntu-particle.sh** | Complete Ubuntu Particle-OS system installation. Installs dependencies, creates directory structure, sets up systemd services, and configures the full immutable system environment. |
|
||||
| **oci-integration.sh** | OCI (Open Container Initiative) integration utilities. Particle-OS-specific wrapper that uses skopeo under the hood for registry operations, image pulling, and OCI compliance. Provides higher-level automation and workflow integration for Particle-OS tools. |
|
||||
|
|
@ -30,7 +30,7 @@ This document provides a comparison of the core tools used in uBlue-OS and their
|
|||
| **erofs-utils** | **EROFS Backend for ComposeFS** | Enhanced Read-Only File System utilities. Provides better performance than SquashFS for metadata operations, native fs-verity support, and LZ4/Zstandard compression. Integrates with composefs-alternative for official ComposeFS compatibility. |
|
||||
| **erofsfuse** | **FUSE Mount Support** | FUSE Mount Utility for EROFS File System. Enables user-space mounting of EROFS filesystems, useful for rootless operations and enhanced security. |
|
||||
| **overlayroot** | **Boot-time Immutability** | Native Ubuntu tool for read-only root filesystem with overlayfs. Provides system immutability, boot-time protection, and easy rollback capabilities. Integrates with dracut-module for enhanced boot-time security. |
|
||||
| **fuse-overlayfs** | **Rootless Container Support** | Implementation of overlay+shiftfs in FUSE for rootless containers. Enables container operations without root privileges, enhancing security for container-based workflows. |
|
||||
| **fuse-overlayfs** | **Rootless Container Support** | Implementation of overlay+shiftfs in FUSE for rootless containers. Enables container operations without root privileges, enhancing security for container-based workflows. Also used in the new apt-layer overlay/dpkg install workflow for atomic package management. |
|
||||
| **golang-github-bep-overlayfs-dev** | **Go Library Integration** | Composite Afero filesystem Go library. Provides programmatic access to overlayfs functionality for Go-based tools and services in the Particle-OS ecosystem. |
|
||||
|
||||
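The fuse-overlayfs rows above describe the rootless overlay used in the apt-layer overlay/dpkg install workflow. A minimal sketch of such a mount, with placeholder paths (the real workflow's directory layout is not shown in this document):

```bash
# Sketch: rootless overlay for a package operation; all paths are placeholders.
lower=/path/to/readonly-base      # e.g. the mounted base image (read-only)
upper=/tmp/apt-layer-upper        # captures files written by dpkg/apt
work=/tmp/apt-layer-work          # scratch space required by overlayfs
merged=/tmp/apt-layer-merged      # combined view the package operation runs against

mkdir -p "$upper" "$work" "$merged"
fuse-overlayfs -o "lowerdir=$lower,upperdir=$upper,workdir=$work" "$merged"

# ... run the dpkg/apt step against "$merged" (typically via chroot or bwrap) ...

fusermount3 -u "$merged"          # use fusermount -u on systems without fusermount3
```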
## Enhanced Integration Opportunities
|
||||
|
|
@ -59,7 +59,7 @@ This document provides a comparison of the core tools used in uBlue-OS and their
|
|||
|
||||
### **Phase 1: EROFS Integration**
|
||||
1. Install `erofs-utils` and `erofsfuse` packages
|
||||
2. Test EROFS functionality with composefs-alternative
|
||||
2. Test EROFS functionality with composefs-alternative (now archived; official ComposeFS tools are the default)
|
||||
3. Implement automatic detection and fallback logic
|
||||
4. Add EROFS compression and optimization features (see the sketch after this list)
|
||||
5. Benchmark performance against current SquashFS approach
|
||||
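For step 4, a minimal sketch of building and loop-mounting a compressed EROFS image; paths are placeholders, and the available `-z` algorithms depend on how erofs-utils was built:

```bash
# Sketch: create a compressed EROFS image from a directory and mount it read-only.
src_dir=/path/to/layer-root                      # placeholder source tree
image=/var/lib/particle-os/layers/example.erofs  # placeholder output path

mkfs.erofs -z lz4hc "$image" "$src_dir"          # -z zstd is another common choice
mkdir -p /mnt/erofs-test
mount -t erofs -o loop,ro "$image" /mnt/erofs-test
```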
|
|
@ -72,7 +72,7 @@ This document provides a comparison of the core tools used in uBlue-OS and their
|
|||
5. Document usage and benefits
|
||||
|
||||
### **Phase 3: FUSE Enhancements**
|
||||
1. Test `fuse-overlayfs` for rootless container support
|
||||
1. Test `fuse-overlayfs` for rootless container support and the overlay/dpkg install workflow
|
||||
2. Evaluate Go library integration opportunities
|
||||
3. Implement enhanced security features
|
||||
4. Add comprehensive testing and validation
|
||||
|
|
@ -80,10 +80,11 @@ This document provides a comparison of the core tools used in uBlue-OS and their
|
|||
## Notes
|
||||
|
||||
- **Skopeo** is a shared dependency used by both uBlue-OS and Particle-OS for container image operations
|
||||
- **Official ComposeFS Tools**: Particle-OS now uses official `mkcomposefs` and `mount.composefs` from upstream. The alternative implementation has been archived.
|
||||
- **Official ComposeFS Tools**: Particle-OS now uses official `mkcomposefs` and `mount.composefs` from upstream. The alternative implementation has been archived. All atomic package management in apt-layer uses these tools for image creation and mounting. A minimal usage sketch follows these notes.
|
||||
- **EROFS integration** provides a path to official ComposeFS compatibility while maintaining Particle-OS enhancements
|
||||
- **Overlayroot** offers a simpler alternative to complex dracut-module implementations for boot-time immutability
|
||||
- **FUSE-based tools** enable enhanced security and rootless operations
|
||||
- Particle-OS tools maintain compatibility with uBlue-OS workflows while adding Ubuntu-specific features and optimizations
|
||||
- All Particle-OS tools include comprehensive error handling, logging, and user-friendly interfaces
|
||||
- **Ubuntu ecosystem integration** leverages native Ubuntu tools for better performance and compatibility
|
||||
- **Ubuntu ecosystem integration** leverages native Ubuntu tools for better performance and compatibility
|
||||
- **apt-layer** now supports atomic OSTree commits, robust overlay/dpkg install, and official ComposeFS integration.
|
||||
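A minimal sketch of that flow with the official tools on PATH. The option names below follow upstream composefs documentation rather than apt-layer's internal code, so verify them against the installed composefs version before relying on this:

```bash
#!/bin/bash
# Sketch: build and mount a ComposeFS image with the official tools,
# failing early if they are not installed.
set -euo pipefail

src_dir="$1"      # directory tree to capture
image="$2"        # output ComposeFS image (.cfs)
mountpoint="$3"   # where to mount the read-only view

if ! command -v mkcomposefs >/dev/null 2>&1 || ! command -v mount.composefs >/dev/null 2>&1; then
    echo "Official ComposeFS tools not found; install composefs-tools or build from source" >&2
    exit 1
fi

# --digest-store splits file contents into an object directory so the .cfs
# image holds only metadata; basedir= points the mount at those objects.
mkcomposefs --digest-store="${src_dir}.objects" "$src_dir" "$image"
mount.composefs "$image" -o "basedir=${src_dir}.objects" "$mountpoint"
```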
|
|
@ -1,31 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Transfer ComposeFS files to particle-os VM for testing
|
||||
|
||||
echo "Transferring ComposeFS files to particle-os VM..."
|
||||
|
||||
# VM IP address from the user's output
|
||||
VM_IP="172.23.125.172"
|
||||
VM_USER="joe"
|
||||
|
||||
# Transfer the compiled script
|
||||
echo "Transferring composefs-alternative.sh..."
|
||||
scp composefs-alternative.sh ${VM_USER}@${VM_IP}:/tmp/
|
||||
|
||||
# Transfer test scripts
|
||||
echo "Transferring test scripts..."
|
||||
scp src/composefs/test-scripts/test-official-composefs-integration.sh ${VM_USER}@${VM_IP}:/tmp/
|
||||
scp src/composefs/test-scripts/test-composefs-basic.sh ${VM_USER}@${VM_IP}:/tmp/
|
||||
|
||||
# Transfer documentation
|
||||
echo "Transferring documentation..."
|
||||
scp src/composefs/docs/official-composefs-integration.md ${VM_USER}@${VM_IP}:/tmp/
|
||||
|
||||
echo "Transfer complete!"
|
||||
echo ""
|
||||
echo "To test on the VM, SSH to the VM and run:"
|
||||
echo " ssh ${VM_USER}@${VM_IP}"
|
||||
echo " cd /tmp"
|
||||
echo " chmod +x composefs-alternative.sh"
|
||||
echo " ./composefs-alternative.sh official-status"
|
||||
echo " ./test-official-composefs-integration.sh"
|
||||