Deb-Mock Testing Guide
Overview
The deb-mock project includes a comprehensive test suite covering all major functionality: core operations, performance monitoring, the plugin system, and integration testing. This guide provides detailed information on running tests, understanding test coverage, and contributing to the test suite.
Test Structure
Test Organization
```
tests/
├── __init__.py            # Test package initialization
├── conftest.py            # Pytest configuration and fixtures
├── test_core.py           # Core functionality tests
├── test_performance.py    # Performance monitoring tests
├── test_plugin_system.py  # Plugin system tests
└── requirements.txt       # Test dependencies
```
Test Categories
- Unit Tests - Test individual components in isolation
- Integration Tests - Test component interactions
- Performance Tests - Test performance monitoring system
- Plugin Tests - Test plugin system functionality
- System Tests - Test end-to-end workflows
Running Tests
Prerequisites
- Python Virtual Environment: Ensure you have activated the virtual environment

  ```bash
  source venv/bin/activate
  ```

- Test Dependencies: Install required testing packages

  ```bash
  pip install -r tests/requirements.txt
  ```
Basic Test Execution
Run All Tests
```bash
python -m pytest tests/
```
Run Specific Test File
```bash
python -m pytest tests/test_core.py
```
Run Specific Test Class
```bash
python -m pytest tests/test_performance.py::TestPerformanceMonitor
```
Run Specific Test Method
```bash
python -m pytest tests/test_performance.py::TestPerformanceMonitor::test_initialization
```
Using the Test Runner Script
The project includes a comprehensive test runner script that provides additional functionality:
Run All Tests with Coverage
```bash
python run_tests.py --all --coverage-report
```
Run Specific Test Types
```bash
# Unit tests only
python run_tests.py --unit

# Integration tests only
python run_tests.py --integration

# Performance tests only
python run_tests.py --performance

# Plugin system tests only
python run_tests.py --plugin
```
Parallel Test Execution
```bash
python run_tests.py --all --parallel
```
Verbose Output
```bash
python run_tests.py --all --verbose
```
Additional Quality Checks
```bash
# Run linting
python run_tests.py --lint

# Run type checking
python run_tests.py --type-check

# Run security scanning
python run_tests.py --security
```
Test Runner Options
| Option | Description |
|---|---|
| `--unit` | Run unit tests only |
| `--integration` | Run integration tests only |
| `--performance` | Run performance tests only |
| `--plugin` | Run plugin system tests only |
| `--all` | Run all tests |
| `--parallel` | Run tests in parallel |
| `--no-coverage` | Disable coverage reporting |
| `--verbose`, `-v` | Verbose output |
| `--install-deps` | Install test dependencies |
| `--lint` | Run code linting |
| `--type-check` | Run type checking |
| `--security` | Run security scanning |
| `--coverage-report` | Generate coverage report |
Test Configuration
Pytest Configuration (pytest.ini)
```ini
[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    -v
    --tb=short
    --strict-markers
    --disable-warnings
    --cov=deb_mock
    --cov-report=term-missing
    --cov-report=html:htmlcov
    --cov-report=xml:coverage.xml
    --cov-fail-under=80
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    performance: marks tests as performance tests
    plugin: marks tests as plugin system tests
```
Coverage Configuration
- Minimum Coverage: 80%
- Coverage Reports: Terminal, HTML, XML
- Coverage Output: `htmlcov/` directory
Test Fixtures
Common Fixtures (conftest.py)
The test suite provides comprehensive fixtures for testing:
Configuration Fixtures
- `test_config` - Basic test configuration
- `performance_test_config` - Configuration with performance monitoring
- `plugin_test_config` - Configuration with plugin support
Mock Fixtures
- `mock_chroot_manager` - Mock chroot manager
- `mock_cache_manager` - Mock cache manager
- `mock_sbuild_wrapper` - Mock sbuild wrapper
- `mock_plugin_manager` - Mock plugin manager
- `mock_performance_monitor` - Mock performance monitor
Test Data Fixtures
- `sample_source_package` - Minimal Debian source package
- `test_package_data` - Package metadata for testing
- `test_build_result` - Build result data
- `test_performance_metrics` - Performance metrics data
Environment Fixtures
- `temp_dir` - Temporary directory for tests
- `test_environment` - Test environment variables
- `isolated_filesystem` - Isolated filesystem for testing
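For orientation, here is a minimal sketch of how two of these fixtures could be defined; the real definitions live in `tests/conftest.py` and may differ, and the attribute names set on the config mock are assumptions:

```python
# Illustrative sketch only; see tests/conftest.py for the actual fixtures.
from unittest.mock import Mock

import pytest


@pytest.fixture
def test_config():
    """Basic test configuration object (attribute names are assumptions)."""
    config = Mock()
    config.enable_performance_monitoring = False
    return config


@pytest.fixture
def mock_chroot_manager():
    """Mock chroot manager whose calls can be asserted in tests."""
    manager = Mock()
    manager.chroot_exists.return_value = False
    return manager
```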
Test Categories
1. Core Functionality Tests (test_core.py)
Tests the main DebMock class and its core operations:
- Initialization - Component initialization and configuration
- Build Operations - Package building with various scenarios
- Chroot Management - Chroot creation, restoration, and cleanup
- Cache Operations - Cache restoration and creation
- Plugin Integration - Hook execution and plugin lifecycle
- Performance Monitoring - Performance tracking integration
- Error Handling - Build failures and error scenarios
Example Test
```python
def test_build_with_existing_chroot(self, mock_deb_mock, sample_source_package,
                                    mock_chroot_manager, mock_cache_manager,
                                    mock_sbuild_wrapper, mock_plugin_manager,
                                    mock_performance_monitor):
    """Test building with an existing chroot"""
    # Mock the components
    mock_deb_mock.chroot_manager = mock_chroot_manager
    mock_deb_mock.cache_manager = mock_cache_manager
    mock_deb_mock.sbuild_wrapper = mock_sbuild_wrapper
    mock_deb_mock.plugin_manager = mock_plugin_manager
    mock_deb_mock.performance_monitor = mock_performance_monitor

    # Mock chroot exists
    mock_chroot_manager.chroot_exists.return_value = True

    # Run build
    result = mock_deb_mock.build(sample_source_package)

    # Verify result
    assert result["success"] is True
```
2. Performance Monitoring Tests (test_performance.py)
Tests the performance monitoring and optimization system:
- PerformanceMetrics - Metrics data structure validation
- BuildProfile - Build performance profile management
- PerformanceMonitor - Real-time monitoring and metrics collection
- PerformanceOptimizer - AI-driven optimization suggestions
- PerformanceReporter - Report generation and data export
Example Test
```python
def test_monitor_operation_context_manager(self, test_config):
    """Test monitor_operation context manager"""
    test_config.enable_performance_monitoring = True
    monitor = PerformanceMonitor(test_config)

    with monitor.monitor_operation("test_op") as op_id:
        assert op_id.startswith("test_op_")
        time.sleep(0.1)  # Small delay

    # Verify operation was tracked
    assert len(monitor._operation_history) == 1
    assert monitor._operation_history[0].operation == "test_op"
    assert monitor._operation_history[0].duration > 0
```
3. Plugin System Tests (test_plugin_system.py)
Tests the extensible plugin system:
- HookStages - Hook stage definitions and values
- BasePlugin - Base plugin class functionality
- PluginManager - Plugin discovery, loading, and management
- Plugin Lifecycle - Initialization, execution, and cleanup
- Hook System - Hook registration and execution
- Error Handling - Plugin error scenarios
Example Test
```python
def test_plugin_lifecycle(self, test_config):
    """Test complete plugin lifecycle"""
    manager = PluginManager(test_config)

    # Create a test plugin
    class TestPlugin(BasePlugin):
        def __init__(self):
            super().__init__(
                name="TestPlugin",
                version="1.0.0",
                description="Test plugin for integration testing"
            )
            self.init_called = False
            self.cleanup_called = False

        def init(self, deb_mock):
            self.init_called = True
            return None

        def cleanup(self):
            self.cleanup_called = True
            return None

    # Test plugin lifecycle
    plugin = TestPlugin()
    manager.plugins["test_plugin"] = plugin

    # Initialize
    mock_deb_mock = Mock()
    result = manager.init_plugins(mock_deb_mock)
    assert result is True
    assert plugin.init_called is True

    # Cleanup
    cleanup_result = manager.cleanup_plugins()
    assert cleanup_result is True
    assert plugin.cleanup_called is True
```
Test Markers
Available Markers
- `@pytest.mark.slow` - Marks tests as slow (can be deselected)
- `@pytest.mark.integration` - Marks tests as integration tests
- `@pytest.mark.unit` - Marks tests as unit tests
- `@pytest.mark.performance` - Marks tests as performance tests
- `@pytest.mark.plugin` - Marks tests as plugin system tests
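Markers are applied as decorators on test functions; the test below is a placeholder, not part of the real suite:

```python
import pytest


@pytest.mark.slow
@pytest.mark.integration
def test_full_build_workflow_placeholder():
    """Placeholder slow integration test; deselected by -m "not slow"."""
    assert True
```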
Using Markers
Run Only Fast Tests
```bash
python -m pytest -m "not slow"
```
Run Only Integration Tests
```bash
python -m pytest -m integration
```
Run Multiple Marker Types
```bash
python -m pytest -m "unit or performance"
```
Coverage Reporting
Coverage Types
- Terminal Coverage - Inline coverage information
- HTML Coverage - Detailed HTML report in the `htmlcov/` directory
- XML Coverage - Machine-readable coverage data
Coverage Thresholds
- Minimum Coverage: 80%
- Coverage Failure: Tests fail if coverage drops below threshold
Generating Coverage Reports
```bash
# Generate all coverage reports
python run_tests.py --coverage-report

# Generate specific coverage report
python -m coverage report
python -m coverage html
```
Test Data Management
Temporary Files
Tests use temporary directories that are automatically cleaned up:
```python
@pytest.fixture
def temp_dir():
    """Create a temporary directory for tests"""
    temp_dir = tempfile.mkdtemp(prefix="deb_mock_test_")
    yield temp_dir
    shutil.rmtree(temp_dir, ignore_errors=True)
```
Mock Data
Tests use realistic mock data for comprehensive testing:
```python
@pytest.fixture
def sample_source_package(temp_dir):
    """Create a minimal Debian source package for testing"""
    package_dir = os.path.join(temp_dir, "test-package")
    os.makedirs(package_dir)

    # Create debian/control
    debian_dir = os.path.join(package_dir, "debian")
    os.makedirs(debian_dir)

    # Add package files...
    return package_dir
```
Debugging Tests
Verbose Output
```bash
python -m pytest -v -s tests/
```
Debugging Specific Tests
```bash
# Run with debugger
python -m pytest --pdb tests/test_core.py::TestDebMock::test_build

# Run with trace
python -m pytest --trace tests/test_core.py::TestDebMock::test_build
```
Test Isolation
```bash
# Run single test in isolation
python -m pytest -x tests/test_core.py::TestDebMock::test_build

# Stop on first failure
python -m pytest -x tests/
```
Continuous Integration
CI/CD Integration
The test suite is designed for CI/CD environments:
```yaml
# GitHub Actions example
- name: Run Tests
  run: |
    source venv/bin/activate
    python run_tests.py --all --coverage-report --parallel

- name: Upload Coverage
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage.xml
```
Test Parallelization
Tests can be run in parallel for faster execution:
```bash
# Auto-detect CPU cores
python -m pytest -n auto tests/

# Specific number of workers
python -m pytest -n 4 tests/
```
Best Practices
Writing Tests
- Test Naming - Use descriptive test names that explain the scenario
- Test Isolation - Each test should be independent and not affect others
- Mock External Dependencies - Use mocks for system calls and external services
- Test Data - Use realistic test data that represents real scenarios
- Error Scenarios - Test both success and failure cases
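As a generic sketch of the mocking guideline above, the following example patches an external call and covers both the success and failure paths; `run_command` is a hypothetical helper, not part of the deb-mock API:

```python
# Generic sketch of mocking an external dependency; not deb-mock code.
import subprocess
from unittest.mock import patch


def run_command(cmd):
    """Hypothetical helper that shells out and reports success."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0


@patch("subprocess.run")
def test_run_command_success(mock_run):
    # Simulate the external command succeeding
    mock_run.return_value.returncode = 0
    assert run_command(["true"]) is True


@patch("subprocess.run")
def test_run_command_failure(mock_run):
    # Simulate the external command failing
    mock_run.return_value.returncode = 1
    assert run_command(["false"]) is False
```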
Test Organization
- Group Related Tests - Use test classes to group related functionality
- Use Fixtures - Leverage pytest fixtures for common setup
- Test Categories - Use markers to categorize tests
- Coverage - Aim for high test coverage (80% minimum)
Performance Testing
- Realistic Scenarios - Test with realistic data sizes and complexity
- Benchmarking - Use the performance monitoring system for benchmarks
- Resource Monitoring - Monitor CPU, memory, and I/O during tests
- Regression Detection - Detect performance regressions
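One simple way to express a regression check is a time budget inside a `performance`-marked test; the operation and the 2-second budget below are illustrative placeholders, not project values:

```python
import time

import pytest


def simulate_cache_restore():
    """Stand-in workload so this sketch runs on its own."""
    time.sleep(0.05)


@pytest.mark.performance
def test_cache_restore_stays_within_budget():
    """Fail when the illustrative operation exceeds its time budget."""
    start = time.monotonic()
    simulate_cache_restore()
    elapsed = time.monotonic() - start
    assert elapsed < 2.0, f"took {elapsed:.2f}s, budget is 2.0s"
```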
Troubleshooting
Common Issues
Import Errors
```bash
# Ensure virtual environment is activated
source venv/bin/activate

# Install test dependencies
pip install -r tests/requirements.txt
```
Coverage Issues
```bash
# Clear coverage data
python -m coverage erase

# Run tests with coverage
python -m pytest --cov=deb_mock tests/
```
Test Failures
```bash
# Run with verbose output
python -m pytest -v -s tests/

# Run specific failing test
python -m pytest tests/test_core.py::TestDebMock::test_build -v -s
```
Getting Help
- Check Test Output - Review test output for error details
- Review Fixtures - Ensure test fixtures are properly configured
- Check Dependencies - Verify all test dependencies are installed
- Review Configuration - Check pytest.ini and test configuration
Contributing to Tests
Adding New Tests
- Follow Naming Convention - Use `test_*.py` for test files (see the skeleton after this list)
- Use Existing Fixtures - Leverage existing fixtures when possible
- Add Markers - Use appropriate test markers
- Maintain Coverage - Ensure new code is covered by tests
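A minimal new test module following these conventions might look like the sketch below; the class, test, and use of the `temp_dir` fixture are illustrative only:

```python
# tests/test_example_feature.py -- illustrative skeleton, not part of the suite
import os

import pytest


@pytest.mark.unit
class TestExampleFeature:
    """Group related tests in a class and tag them with a category marker."""

    def test_creates_output_directory(self, temp_dir):
        """Reuse the shared temp_dir fixture rather than managing paths by hand."""
        output_dir = os.path.join(temp_dir, "output")
        os.makedirs(output_dir)
        assert os.path.isdir(output_dir)
```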
Test Review Process
- Test Coverage - Ensure new functionality has adequate test coverage
- Test Quality - Tests should be clear, maintainable, and reliable
- Performance Impact - Tests should not significantly impact build times
- Documentation - Document complex test scenarios and edge cases
This guide helps ensure that the deb-mock project maintains high quality and reliability through extensive test coverage.