# Deb-Mock Testing Guide
## Overview

The `deb-mock` project includes a comprehensive test suite that covers all major functionality, including core operations, performance monitoring, the plugin system, and integration testing. This guide provides detailed information on running tests, understanding test coverage, and contributing to the test suite.
## Test Structure

### Test Organization

```
tests/
├── __init__.py              # Test package initialization
├── conftest.py              # Pytest configuration and fixtures
├── test_core.py             # Core functionality tests
├── test_performance.py      # Performance monitoring tests
├── test_plugin_system.py    # Plugin system tests
└── requirements.txt         # Test dependencies
```

### Test Categories

1. **Unit Tests** - Test individual components in isolation
2. **Integration Tests** - Test component interactions
3. **Performance Tests** - Test the performance monitoring system
4. **Plugin Tests** - Test plugin system functionality
5. **System Tests** - Test end-to-end workflows
## Running Tests
### Prerequisites

1. **Python Virtual Environment**: Ensure you have activated the virtual environment

   ```bash
   source venv/bin/activate
   ```

2. **Test Dependencies**: Install the required testing packages

   ```bash
   pip install -r tests/requirements.txt
   ```
### Basic Test Execution

#### Run All Tests

```bash
python -m pytest tests/
```

#### Run a Specific Test File

```bash
python -m pytest tests/test_core.py
```

#### Run a Specific Test Class

```bash
python -m pytest tests/test_performance.py::TestPerformanceMonitor
```

#### Run a Specific Test Method

```bash
python -m pytest tests/test_performance.py::TestPerformanceMonitor::test_initialization
```
### Using the Test Runner Script

The project includes a comprehensive test runner script that provides additional functionality:

#### Run All Tests with Coverage

```bash
python run_tests.py --all --coverage-report
```

#### Run Specific Test Types

```bash
# Unit tests only
python run_tests.py --unit

# Integration tests only
python run_tests.py --integration

# Performance tests only
python run_tests.py --performance

# Plugin system tests only
python run_tests.py --plugin
```

#### Parallel Test Execution

```bash
python run_tests.py --all --parallel
```

#### Verbose Output

```bash
python run_tests.py --all --verbose
```

#### Additional Quality Checks

```bash
# Run linting
python run_tests.py --lint

# Run type checking
python run_tests.py --type-check

# Run security scanning
python run_tests.py --security
```
### Test Runner Options

| Option | Description |
|--------|-------------|
| `--unit` | Run unit tests only |
| `--integration` | Run integration tests only |
| `--performance` | Run performance tests only |
| `--plugin` | Run plugin system tests only |
| `--all` | Run all tests |
| `--parallel` | Run tests in parallel |
| `--no-coverage` | Disable coverage reporting |
| `--verbose`, `-v` | Verbose output |
| `--install-deps` | Install test dependencies |
| `--lint` | Run code linting |
| `--type-check` | Run type checking |
| `--security` | Run security scanning |
| `--coverage-report` | Generate coverage report |
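#### Combining Options

Options can be combined in a single invocation. The runs below are a rough illustration (assuming the runner accepts the flags together, which the table above does not explicitly state):

```bash
# Quick local run: unit and plugin tests with verbose output
python run_tests.py --unit --plugin --verbose

# CI-style run: everything in parallel with a coverage report, then quality checks
python run_tests.py --all --parallel --coverage-report
python run_tests.py --lint --type-check --security
```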
## Test Configuration
### Pytest Configuration (`pytest.ini`)

```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    -v
    --tb=short
    --strict-markers
    --disable-warnings
    --cov=deb_mock
    --cov-report=term-missing
    --cov-report=html:htmlcov
    --cov-report=xml:coverage.xml
    --cov-fail-under=80
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    performance: marks tests as performance tests
    plugin: marks tests as plugin system tests
```

### Coverage Configuration

- **Minimum Coverage**: 80%
- **Coverage Reports**: Terminal, HTML, XML
- **Coverage Output**: `htmlcov/` directory
## Test Fixtures
### Common Fixtures (`conftest.py`)

The test suite provides comprehensive fixtures for testing:

#### Configuration Fixtures

- `test_config` - Basic test configuration
- `performance_test_config` - Configuration with performance monitoring
- `plugin_test_config` - Configuration with plugin support

#### Mock Fixtures

- `mock_chroot_manager` - Mock chroot manager
- `mock_cache_manager` - Mock cache manager
- `mock_sbuild_wrapper` - Mock sbuild wrapper
- `mock_plugin_manager` - Mock plugin manager
- `mock_performance_monitor` - Mock performance monitor

#### Test Data Fixtures

- `sample_source_package` - Minimal Debian source package
- `test_package_data` - Package metadata for testing
- `test_build_result` - Build result data
- `test_performance_metrics` - Performance metrics data

#### Environment Fixtures

- `temp_dir` - Temporary directory for tests
- `test_environment` - Test environment variables
- `isolated_filesystem` - Isolated filesystem for testing
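#### Example Fixture Usage

Because these fixtures live in `conftest.py`, any test can request them by name as function arguments. The sketch below is illustrative only; the assertions are placeholders and rely on the fixture behaviour described later in this guide (`sample_source_package` creates a `debian/` directory inside `temp_dir`):

```python
import os

def test_sample_package_layout(sample_source_package, temp_dir):
    """Illustrative test combining shared fixtures from conftest.py."""
    # sample_source_package points at a minimal Debian source tree
    assert os.path.isdir(sample_source_package)
    assert os.path.isdir(os.path.join(sample_source_package, "debian"))
    # The package tree is created inside the shared temporary directory
    assert sample_source_package.startswith(temp_dir)
```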
## Test Categories
### 1. Core Functionality Tests (`test_core.py`)

Tests the main `DebMock` class and its core operations:

- **Initialization** - Component initialization and configuration
- **Build Operations** - Package building with various scenarios
- **Chroot Management** - Chroot creation, restoration, and cleanup
- **Cache Operations** - Cache restoration and creation
- **Plugin Integration** - Hook execution and plugin lifecycle
- **Performance Monitoring** - Performance tracking integration
- **Error Handling** - Build failures and error scenarios

#### Example Test

```python
def test_build_with_existing_chroot(self, mock_deb_mock, sample_source_package,
                                    mock_chroot_manager, mock_cache_manager,
                                    mock_sbuild_wrapper, mock_plugin_manager,
                                    mock_performance_monitor):
    """Test building with an existing chroot"""
    # Mock the components
    mock_deb_mock.chroot_manager = mock_chroot_manager
    mock_deb_mock.cache_manager = mock_cache_manager
    mock_deb_mock.sbuild_wrapper = mock_sbuild_wrapper
    mock_deb_mock.plugin_manager = mock_plugin_manager
    mock_deb_mock.performance_monitor = mock_performance_monitor

    # Mock chroot exists
    mock_chroot_manager.chroot_exists.return_value = True

    # Run build
    result = mock_deb_mock.build(sample_source_package)

    # Verify result
    assert result["success"] is True
```
### 2. Performance Monitoring Tests (`test_performance.py`)

Tests the performance monitoring and optimization system:

- **PerformanceMetrics** - Metrics data structure validation
- **BuildProfile** - Build performance profile management
- **PerformanceMonitor** - Real-time monitoring and metrics collection
- **PerformanceOptimizer** - AI-driven optimization suggestions
- **PerformanceReporter** - Report generation and data export

#### Example Test

```python
def test_monitor_operation_context_manager(self, test_config):
    """Test monitor_operation context manager"""
    test_config.enable_performance_monitoring = True
    monitor = PerformanceMonitor(test_config)

    with monitor.monitor_operation("test_op") as op_id:
        assert op_id.startswith("test_op_")
        time.sleep(0.1)  # Small delay

    # Verify operation was tracked
    assert len(monitor._operation_history) == 1
    assert monitor._operation_history[0].operation == "test_op"
    assert monitor._operation_history[0].duration > 0
```
### 3. Plugin System Tests (`test_plugin_system.py`)

Tests the extensible plugin system:

- **HookStages** - Hook stage definitions and values
- **BasePlugin** - Base plugin class functionality
- **PluginManager** - Plugin discovery, loading, and management
- **Plugin Lifecycle** - Initialization, execution, and cleanup
- **Hook System** - Hook registration and execution
- **Error Handling** - Plugin error scenarios

#### Example Test

```python
def test_plugin_lifecycle(self, test_config):
    """Test complete plugin lifecycle"""
    manager = PluginManager(test_config)

    # Create a test plugin
    class TestPlugin(BasePlugin):
        def __init__(self):
            super().__init__(
                name="TestPlugin",
                version="1.0.0",
                description="Test plugin for integration testing"
            )
            self.init_called = False
            self.cleanup_called = False

        def init(self, deb_mock):
            self.init_called = True
            return None

        def cleanup(self):
            self.cleanup_called = True
            return None

    # Test plugin lifecycle
    plugin = TestPlugin()
    manager.plugins["test_plugin"] = plugin

    # Initialize
    mock_deb_mock = Mock()
    result = manager.init_plugins(mock_deb_mock)
    assert result is True
    assert plugin.init_called is True

    # Cleanup
    cleanup_result = manager.cleanup_plugins()
    assert cleanup_result is True
    assert plugin.cleanup_called is True
```
## Test Markers
### Available Markers

- **`@pytest.mark.slow`** - Marks tests as slow (can be deselected)
- **`@pytest.mark.integration`** - Marks tests as integration tests
- **`@pytest.mark.unit`** - Marks tests as unit tests
- **`@pytest.mark.performance`** - Marks tests as performance tests
- **`@pytest.mark.plugin`** - Marks tests as plugin system tests
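#### Applying Markers

Markers are applied as decorators on test functions or classes. The test names and bodies below are illustrative, not taken from the actual test files:

```python
import pytest

@pytest.mark.unit
def test_config_is_available(test_config):
    # Fast, isolated check against the shared configuration fixture
    assert test_config is not None

@pytest.mark.integration
@pytest.mark.slow
def test_full_build_flow(mock_deb_mock, sample_source_package):
    # Broader workflow test; deselected by `-m "not slow"`
    result = mock_deb_mock.build(sample_source_package)
    assert result["success"] is True
```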
### Using Markers

#### Run Only Fast Tests

```bash
python -m pytest -m "not slow"
```

#### Run Only Integration Tests

```bash
python -m pytest -m integration
```

#### Run Multiple Marker Types

```bash
python -m pytest -m "unit or performance"
```
## Coverage Reporting
### Coverage Types

1. **Terminal Coverage** - Inline coverage information
2. **HTML Coverage** - Detailed HTML report in the `htmlcov/` directory
3. **XML Coverage** - Machine-readable coverage data

### Coverage Thresholds

- **Minimum Coverage**: 80%
- **Coverage Failure**: Tests fail if coverage drops below the threshold

### Generating Coverage Reports

```bash
# Generate all coverage reports
python run_tests.py --coverage-report

# Generate specific coverage reports
python -m coverage report
python -m coverage html
```
## Test Data Management
### Temporary Files

Tests use temporary directories that are automatically cleaned up:

```python
@pytest.fixture
def temp_dir():
    """Create a temporary directory for tests"""
    temp_dir = tempfile.mkdtemp(prefix="deb_mock_test_")
    yield temp_dir
    shutil.rmtree(temp_dir, ignore_errors=True)
```

### Mock Data

Tests use realistic mock data for comprehensive testing:

```python
@pytest.fixture
def sample_source_package(temp_dir):
    """Create a minimal Debian source package for testing"""
    package_dir = os.path.join(temp_dir, "test-package")
    os.makedirs(package_dir)

    # Create debian/control
    debian_dir = os.path.join(package_dir, "debian")
    os.makedirs(debian_dir)

    # Add package files...
    return package_dir
```
## Debugging Tests
### Verbose Output

```bash
python -m pytest -v -s tests/
```

### Debugging Specific Tests

```bash
# Run with the debugger
python -m pytest --pdb tests/test_core.py::TestDebMock::test_build

# Run with trace
python -m pytest --trace tests/test_core.py::TestDebMock::test_build
```

### Test Isolation

```bash
# Run a single test in isolation
python -m pytest -x tests/test_core.py::TestDebMock::test_build

# Stop on first failure
python -m pytest -x tests/
```
## Continuous Integration
### CI/CD Integration

The test suite is designed for CI/CD environments:

```yaml
# GitHub Actions example
- name: Run Tests
  run: |
    source venv/bin/activate
    python run_tests.py --all --coverage-report --parallel

- name: Upload Coverage
  uses: codecov/codecov-action@v3
  with:
    file: ./coverage.xml
```

### Test Parallelization

Tests can be run in parallel for faster execution:

```bash
# Auto-detect CPU cores
python -m pytest -n auto tests/

# Specific number of workers
python -m pytest -n 4 tests/
```
## Best Practices
### Writing Tests

1. **Test Naming** - Use descriptive test names that explain the scenario
2. **Test Isolation** - Each test should be independent and not affect others
3. **Mock External Dependencies** - Use mocks for system calls and external services
4. **Test Data** - Use realistic test data that represents real scenarios
5. **Error Scenarios** - Test both success and failure cases
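#### Example: Mocking an External Call

To make points 3 and 5 concrete, here is a hedged sketch of mocking a system call and exercising a failure path. `run_external_tool` is a stand-in for project code, not an actual `deb-mock` function:

```python
import subprocess
from unittest.mock import patch

import pytest


def run_external_tool():
    """Stand-in for project code that shells out to an external tool such as sbuild."""
    return subprocess.run(["sbuild", "--version"], check=True)


def test_external_call_is_mocked():
    """Success path: patch the system call so no real process is spawned."""
    with patch("subprocess.run") as mock_run:
        mock_run.return_value.returncode = 0
        assert run_external_tool().returncode == 0
        mock_run.assert_called_once()


def test_external_call_failure_is_surfaced():
    """Failure path: simulate the tool failing and assert the error propagates."""
    with patch("subprocess.run", side_effect=subprocess.CalledProcessError(1, "sbuild")):
        with pytest.raises(subprocess.CalledProcessError):
            run_external_tool()
```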
### Test Organization

1. **Group Related Tests** - Use test classes to group related functionality
2. **Use Fixtures** - Leverage pytest fixtures for common setup
3. **Test Categories** - Use markers to categorize tests
4. **Coverage** - Aim for high test coverage (80% minimum)
### Performance Testing

1. **Realistic Scenarios** - Test with realistic data sizes and complexity
2. **Benchmarking** - Use the performance monitoring system for benchmarks
3. **Resource Monitoring** - Monitor CPU, memory, and I/O during tests
4. **Regression Detection** - Detect performance regressions
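#### Example: Regression Check

One way to combine benchmarking with regression detection is to wrap a representative workload in `monitor_operation` and fail the test when it exceeds a time budget. The sketch below reuses the `PerformanceMonitor` API shown earlier; the import path, the workload, and the 5-second budget are assumptions to adapt:

```python
import time

import pytest

from deb_mock.performance import PerformanceMonitor  # import path is an assumption


def run_cache_restore_workload():
    """Placeholder workload; replace with a realistic cache-restore scenario."""
    time.sleep(0.05)


@pytest.mark.performance
def test_cache_restore_within_budget(test_config):
    test_config.enable_performance_monitoring = True
    monitor = PerformanceMonitor(test_config)

    with monitor.monitor_operation("cache_restore"):
        run_cache_restore_workload()

    duration = monitor._operation_history[-1].duration
    assert duration < 5.0, f"cache_restore regressed: took {duration:.2f}s"
```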
## Troubleshooting
### Common Issues

#### Import Errors

```bash
# Ensure the virtual environment is activated
source venv/bin/activate

# Install test dependencies
pip install -r tests/requirements.txt
```

#### Coverage Issues

```bash
# Clear coverage data
python -m coverage erase

# Run tests with coverage
python -m pytest --cov=deb_mock tests/
```

#### Test Failures

```bash
# Run with verbose output
python -m pytest -v -s tests/

# Run a specific failing test
python -m pytest tests/test_core.py::TestDebMock::test_build -v -s
```
### Getting Help

1. **Check Test Output** - Review test output for error details
2. **Review Fixtures** - Ensure test fixtures are properly configured
3. **Check Dependencies** - Verify all test dependencies are installed
4. **Review Configuration** - Check `pytest.ini` and the test configuration
## Contributing to Tests
### Adding New Tests

1. **Follow Naming Convention** - Use `test_*.py` for test files
2. **Use Existing Fixtures** - Leverage existing fixtures when possible
3. **Add Markers** - Use appropriate test markers
4. **Maintain Coverage** - Ensure new code is covered by tests
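#### Example Skeleton

Putting these conventions together, a new test file might start from a skeleton like the one below. The file and test names are illustrative; only the `test_*.py` naming pattern, the markers, and the fixtures come from this guide:

```python
# tests/test_new_feature.py  (hypothetical file name following the test_*.py convention)
import pytest


@pytest.mark.unit
class TestNewFeature:
    """Group related tests for the new feature in one class."""

    def test_feature_with_default_config(self, test_config, temp_dir):
        """Describe the exact scenario this test covers."""
        # Replace these placeholder assertions with checks against the feature
        assert test_config is not None
        assert temp_dir
```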
### Test Review Process

1. **Test Coverage** - Ensure new functionality has adequate test coverage
2. **Test Quality** - Tests should be clear, maintainable, and reliable
3. **Performance Impact** - Tests should not significantly increase build times
4. **Documentation** - Document complex test scenarios and edge cases

This testing guide helps ensure that the `deb-mock` project maintains high quality and reliability through extensive test coverage.