# Testing Strategy and Guide

The test suite is organized as follows:

```
tests/
├── __init__.py
├── rules/                    # Rule-specific tests
│   ├── __init__.py
│   ├── test_heading_rules.py
│   ├── test_block_rules.py
│   ├── test_image_rules.py
│   └── test_whitespace_rules.py
├── integration/              # Integration tests
│   ├── __init__.py
│   ├── test_parser_rules.py
│   └── test_cli_reporter.py
├── system/                   # System tests
│   ├── __init__.py
│   ├── test_large_docs.py
│   └── test_real_projects.py
├── test_cli.py               # CLI tests
├── test_parser.py            # Parser tests
├── test_reporter.py          # Reporter tests
└── test_linter.py            # Core linter tests
```
## 1. Test Strategy

### 1.1. Goals and Objectives

The testing strategy for the AsciiDoc Linter aims to:

- Ensure reliable detection of AsciiDoc formatting issues
- Prevent false positives that could frustrate users
- Maintain high code quality through comprehensive testing
- Enable safe refactoring through good test coverage
- Support rapid development through automated testing
### 1.2. Test Levels

**Unit Tests**

- Test individual rules in isolation
- Verify rule logic and error detection
- Cover edge cases and special scenarios
- Test configuration options

**Integration Tests**

- Test the interaction between parser and rules
- Verify correct document processing
- Test the CLI interface and options
- Test reporter output formats

**System Tests**

- End-to-end testing of the linter
- Test with real AsciiDoc documents
- Verify correct error reporting
- Test performance with large documents
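For the performance tests, one practical approach is to generate a large synthetic document rather than checking in a huge file. The helpers below are an illustrative sketch, not part of the linter's actual API (`generate_large_doc` and `time_check` are hypothetical names):

```python
import time

def generate_large_doc(sections=1000):
    """Build a synthetic AsciiDoc document with many sections."""
    parts = [
        f"== Section {i}\n\nParagraph text for section {i}.\n"
        for i in range(sections)
    ]
    return "\n".join(parts)

def time_check(check, content):
    """Time a single rule or linter callable against a document string."""
    start = time.perf_counter()
    check(content)
    return time.perf_counter() - start
```

A system test can then assert that a run over, say, 10,000 sections stays under an agreed time budget.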
### 1.3. Test Coverage Goals

| Component | Target Coverage | Current Coverage |
|---|---|---|
| Core (parser, linter) | 90% | 0% |
| Rules | 95% | 88% |
| CLI | 80% | 0% |
| Reporter | 85% | 0% |
| Overall | 90% | 61% |
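The overall target can be enforced automatically so that CI fails when coverage drops below it. For example, assuming coverage.py is configured via a `.coveragerc` file:

```ini
[report]
fail_under = 90
show_missing = True
```

With this in place, `coverage report` exits non-zero whenever total coverage falls below 90%.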
### 1.4. Quality Metrics

- Line coverage: minimum 90%
- Branch coverage: minimum 85%
- Mutation score: minimum 80%
- Test success rate: 100%
- No known bugs in production
## 2. Test Implementation

### 2.1. Test Organization

The test suite follows the directory layout shown at the top of this document: rule-specific unit tests under `tests/rules/`, integration tests under `tests/integration/`, system tests under `tests/system/`, and component-level tests (CLI, parser, reporter, linter core) directly under `tests/`.
### 2.2. Test Patterns

**Rule Tests**

```python
def test_rule_pattern(self):
    # Given: set up test data and context
    content = "test content"
    rule = TestRule(config)

    # When: execute the rule
    findings = rule.check(content)

    # Then: verify results
    assert_findings(findings)
```
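A concrete, self-contained version of this Given/When/Then pattern is sketched below. The `LineLengthRule` and `Finding` types are illustrative stand-ins, not the linter's actual classes:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    line: int
    message: str

class LineLengthRule:
    """Hypothetical rule: flag lines longer than max_length characters."""

    def __init__(self, max_length=80):
        self.max_length = max_length

    def check(self, content):
        return [
            Finding(line=i, message=f"line exceeds {self.max_length} characters")
            for i, line in enumerate(content.splitlines(), start=1)
            if len(line) > self.max_length
        ]

def test_long_line_is_reported():
    # Given: a rule with a tight limit and content with one long line
    rule = LineLengthRule(max_length=10)
    content = "short\n" + "x" * 20

    # When: the rule checks the content
    findings = rule.check(content)

    # Then: exactly the long line is flagged
    assert len(findings) == 1
    assert findings[0].line == 2
```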
**Integration Tests**

```python
def test_integration_pattern(self):
    # Given: set up the test environment
    doc = create_test_document()
    linter = setup_linter()

    # When: process the document
    results = linter.process(doc)

    # Then: verify the complete workflow
    verify_results(results)
```
### 2.3. Test Data Management

**Test Documents**

- Maintain a collection of test documents
- Include both valid and invalid examples
- Document the purpose of each test file
- Keep test data under version control
**Test Fixtures**

- Use pytest fixtures for common setup
- Share test data between related tests
- Clean up the test environment after each test
- Mock external dependencies
## 3. Running Tests

### 3.1. Local Development

```shell
# Run all tests
python run_tests.py

# Run with coverage
coverage run -m pytest
coverage report
coverage html

# Run specific test categories
pytest tests/rules/
pytest tests/integration/
pytest tests/system/
```
### 3.2. Continuous Integration

**GitHub Actions Workflow**

```yaml
name: Test Suite

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8, 3.9, "3.10"]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        run: |
          coverage run -m pytest
          coverage report
          coverage xml
      - name: Upload coverage
        uses: codecov/codecov-action@v2
```
## 4. Test Maintenance

### 4.1. Regular Activities

- Review test coverage reports weekly
- Update tests for new features
- Refactor tests when needed
- Review test performance
- Update test documentation

### 4.2. Quality Checks

- Run mutation testing monthly
- Review test maintainability
- Check for flaky tests
- Verify test isolation
## 5. Appendix

### 5.1. Test Templates

**Unit Test Template**

```python
class TestRuleName(unittest.TestCase):
    def setUp(self):
        """Set up the test environment."""
        self.rule = RuleUnderTest()

    def test_valid_case(self):
        """Test with valid input."""
        # Given
        content = "valid content"

        # When
        findings = self.rule.check(content)

        # Then
        self.assertEqual(len(findings), 0)

    def test_invalid_case(self):
        """Test with invalid input."""
        # Given
        content = "invalid content"

        # When
        findings = self.rule.check(content)

        # Then
        self.assertEqual(len(findings), 1)
        self.assertEqual(findings[0].severity, Severity.ERROR)
```
### 5.2. Test Checklists

**New Feature Checklist**

- ❏ Unit tests written
- ❏ Integration tests updated
- ❏ System tests verified
- ❏ Coverage goals met
- ❏ Documentation updated

**Test Review Checklist**

- ❏ Tests follow patterns
- ❏ Coverage adequate
- ❏ Edge cases covered
- ❏ Error cases tested
- ❏ Documentation clear