# Running Tests

This document describes how to run the test suite for the vgui-cicd project.

## Test Structure

The project includes unit tests for section rendering, diagram caching, and the cache management command:

```
abschnitte/
├── tests.py                              # Tests for rendering functions
└── management/
    └── commands/
        └── test_clear_diagram_cache.py  # Tests for management command

diagramm_proxy/
└── test_diagram_cache.py                # Tests for caching module
```

## Running All Tests

To run the entire test suite:

```bash
python manage.py test
```

## Running Specific Test Modules

Run tests for a specific app:

```bash
# Test abschnitte rendering
python manage.py test abschnitte

# Test diagram caching
python manage.py test diagramm_proxy

# Test management commands
python manage.py test abschnitte.management.commands
```

## Running Individual Test Cases

Run a specific test case:

```bash
# Test diagram cache functionality
python manage.py test diagramm_proxy.test_diagram_cache.DiagramCacheTestCase

# Test rendering functions
python manage.py test abschnitte.tests.RenderTextabschnitteTestCase

# Test management command
python manage.py test abschnitte.management.commands.test_clear_diagram_cache
```

## Running Individual Test Methods

Run a single test method:

```bash
python manage.py test abschnitte.tests.RenderTextabschnitteTestCase.test_render_diagram_success
```

## Test Coverage

To generate a coverage report, install coverage.py:

```bash
pip install coverage
```

Then run:

```bash
# Run tests with coverage
coverage run --source='.' manage.py test

# Generate coverage report
coverage report

# Generate HTML coverage report
coverage html
# Open htmlcov/index.html in browser
```
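The command line also lets you trim noise from the report and enforce the coverage target mentioned under the best practices below; the `--omit` pattern here is illustrative and may need adjusting to the project layout:

```bash
# Exclude generated migration modules from measurement
coverage run --source='.' --omit='*/migrations/*' manage.py test

# Fail (non-zero exit) if total coverage drops below 80%
coverage report --fail-under=80
```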

## Test Options

### Verbose Output

Get more detailed output:

```bash
python manage.py test --verbosity=2
```

### Keep Test Database

Keep the test database after tests complete (useful for debugging):

```bash
python manage.py test --keepdb
```

### Fail Fast

Stop after the first test failure:

```bash
python manage.py test --failfast
```

### Parallel Testing

Run tests in parallel (faster for large test suites):

```bash
python manage.py test --parallel
```
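Without an argument, Django picks a process count based on your CPU cores; you can also set it explicitly:

```bash
# Run the suite across exactly four worker processes
python manage.py test --parallel 4
```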

## What the Tests Cover

### Diagram Caching Tests (`diagramm_proxy/test_diagram_cache.py`)

- Hash computation and consistency
- Cache path generation
- Cache miss (diagram generation)
- Cache hit (using cached diagrams; see the sketch below)
- HTTP error handling
- Cache clearing (all and by type)
- Unicode content handling
- Timeout configuration
- Full lifecycle integration tests
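Most of these tests share one pattern: patch the outbound HTTP call, exercise the cache twice, and assert the backend was only contacted once. A minimal sketch of that pattern (the module path `diagramm_proxy.diagram_cache`, the `get_diagram` function, and the patch target are assumptions, not the project's actual API):

```python
from unittest.mock import patch

from django.test import TestCase

# Assumed module path and function name, for illustration only
from diagramm_proxy import diagram_cache

class DiagramCacheHitSketch(TestCase):
    """Illustrative only: shows the patch-render-assert pattern."""

    # Patch target assumes the caching module calls requests.get directly
    @patch("diagramm_proxy.diagram_cache.requests.get")
    def test_second_render_uses_cache(self, mock_get):
        mock_get.return_value.status_code = 200
        mock_get.return_value.content = b"<svg>...</svg>"

        diagram_cache.get_diagram("graph TD; A-->B")  # first call: cache miss
        diagram_cache.get_diagram("graph TD; A-->B")  # second call: cache hit

        # The rendering backend should only have been contacted once
        mock_get.assert_called_once()
```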

### Rendering Tests (`abschnitte/tests.py`)

- Text rendering with markdown (see the sketch below)
- Unordered and ordered lists
- Table rendering
- Diagram rendering (success and error)
- Diagram with custom options
- Code blocks
- Edge cases (empty content, missing types)
- Multiple sections
- Mixed content integration tests
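A rendering test reduces to feeding section data in and asserting on the HTML that comes out. The sketch below shows only that shape; the `render_textabschnitte` helper, its import path, and the input format are all hypothetical stand-ins for the project's real API:

```python
from django.test import TestCase

# Hypothetical import; substitute the project's real rendering helper
from abschnitte.rendering import render_textabschnitte

class RenderMarkdownSketch(TestCase):
    """Illustrative only: helper name and input shape are assumed."""

    def test_renders_bold_markdown(self):
        sections = [{"typ": "text", "inhalt": "**fett**"}]  # assumed shape
        html = render_textabschnitte(sections)
        self.assertIn("<strong>fett</strong>", html)
```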

### Management Command Tests (`abschnitte/management/commands/test_clear_diagram_cache.py`)

- Clearing all cache (see the sketch below)
- Clearing by specific type
- Clearing empty cache
- Command help text
- Full workflow integration
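Management commands are most easily tested through Django's `call_command`, capturing output in a `StringIO` instead of letting it reach stdout. A minimal sketch (the command name `clear_diagram_cache` is inferred from the test file name above; the asserted output wording is an assumption):

```python
from io import StringIO

from django.core.management import call_command
from django.test import TestCase

class ClearDiagramCacheSketch(TestCase):
    """Illustrative only: shows the call_command pattern."""

    def test_clears_all_cache(self):
        out = StringIO()
        # Capture the command's output rather than printing it
        call_command("clear_diagram_cache", stdout=out)
        self.assertIn("cache", out.getvalue().lower())  # assumed wording
```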

## Continuous Integration

These tests are designed to run in CI/CD pipelines. Example GitHub Actions workflow:

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Run tests
        run: |
          python manage.py test --verbosity=2
```

## Debugging Failed Tests

If tests fail, you can:

1. Run with verbose output to see detailed error messages:

   ```bash
   python manage.py test --verbosity=2
   ```

2. Run the specific failing test to isolate the issue:

   ```bash
   python manage.py test path.to.TestCase.test_method
   ```

3. Use `pdb` for debugging:

   ```python
   import pdb; pdb.set_trace()  # Add to test code
   ```

4. Keep the test database for inspection after the run:

   ```bash
   python manage.py test --keepdb
   ```
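If you would rather drop straight into the debugger, Django 3.0 and later can do it automatically on the first error or failure:

```bash
python manage.py test --pdb
```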
    

## Test Requirements

The tests use:

- Django's built-in `TestCase` class
- Python's `unittest.mock` for mocking external dependencies
- `@override_settings` for temporary setting changes
- `tempfile` for creating temporary directories

No additional testing libraries are required beyond what's in requirements.txt.
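These pieces combine naturally whenever a test needs an isolated cache directory. A short sketch of the pattern (the `DIAGRAM_CACHE_DIR` setting name is a hypothetical stand-in for whatever setting the project actually uses):

```python
import tempfile

from django.test import TestCase, override_settings

class TempCacheDirSketch(TestCase):
    """Illustrative only: isolates filesystem state per test."""

    def test_writes_into_temporary_cache_dir(self):
        with tempfile.TemporaryDirectory() as tmp_dir:
            # DIAGRAM_CACHE_DIR is a hypothetical setting name
            with override_settings(DIAGRAM_CACHE_DIR=tmp_dir):
                # ... exercise code that reads/writes the cache here ...
                pass
```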

## Writing New Tests

When adding new features, follow these patterns:

1. **Test file location**: Place tests in the same app as the code being tested
2. **Test class naming**: Use descriptive names ending in `TestCase`
3. **Test method naming**: Start with `test_` and describe what's being tested
4. **Use mocks**: Mock external dependencies (HTTP calls, file system access) where needed
5. **Clean up**: Use `setUp()` and `tearDown()` or temporary directories
6. **Test edge cases**: Include tests for error conditions and edge cases

Example:

```python
from unittest.mock import patch

from django.test import TestCase

# my_function and expected_value are placeholders for your own code
from my_app.module import my_function

class MyFeatureTestCase(TestCase):
    def setUp(self):
        """Set up test fixtures."""
        self.test_data = "example"

    def test_basic_functionality(self):
        """Test that the feature works correctly."""
        result = my_function(self.test_data)
        self.assertEqual(result, expected_value)

    @patch("my_app.module.external_call")
    def test_with_mock(self, mock_call):
        """Test with mocked external dependency."""
        mock_call.return_value = "mocked"
        my_function(self.test_data)
        mock_call.assert_called_once()
```
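To run just this example (assuming it lives in `my_app/tests.py`):

```bash
python manage.py test my_app.tests.MyFeatureTestCase
```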

## Test Best Practices

- Write tests before or alongside code (TDD approach)
- Each test should test one thing
- Tests should be independent (no shared state)
- Use descriptive test names
- Mock external dependencies (HTTP, filesystem, etc.)
- Test both success and failure cases
- Aim for high code coverage (>80%)