
Overview

OWASP Nest maintains high code quality with comprehensive test coverage. All pull requests must pass automated tests and maintain minimum coverage thresholds.

Backend Coverage

Minimum: 95% (pytest with Django integration)

Frontend Coverage

Minimum: 95% (Jest with React Testing Library)
Pull requests that fail tests or drop below coverage thresholds will not be merged.

Running Tests

All Tests

make test

Specific Test Types

make test-backend
make test-frontend
make test-fuzz

Backend Tests

Test Configuration

Backend tests use pytest with the Django plugin:
pyproject.toml
[tool.pytest.ini_options]
DJANGO_CONFIGURATION = "Test"
DJANGO_SETTINGS_MODULE = "settings.test"
addopts = [
  "--cov-config=pyproject.toml",
  "--cov-fail-under=95",         # Minimum 95% coverage
  "--cov-precision=2",
  "--cov-report=term-missing",   # Show missing lines
  "--cov-report=xml",            # Generate XML report
  "--cov=.",                     # Coverage for all code
  "--dist=loadscope",            # Group tests by module across workers
  "--numprocesses=auto",         # Parallel execution
]

Test Structure

backend/tests/
├── test_models.py          # Model tests
├── test_api.py             # REST API tests
├── test_graphql.py         # GraphQL tests
├── test_commands.py        # Management command tests
├── test_integrations.py    # External service tests
└── fixtures/               # Test fixtures
    ├── projects.json
    └── users.json

Writing Backend Tests

tests/test_models.py
import pytest
from django.core.exceptions import ValidationError

from apps.owasp.models import Project

@pytest.mark.django_db
class TestProjectModel:
    def test_create_project(self):
        """Test creating a project."""
        project = Project.objects.create(
            name="Test Project",
            description="A test project",
            level="Lab",
            type="Code",
        )
        
        assert project.name == "Test Project"
        assert project.level == "Lab"
        assert project.slug == "test-project"
    
    def test_project_str(self):
        """Test project string representation."""
        project = Project.objects.create(
            name="Test Project",
            description="Test",
        )
        
        assert str(project) == "Test Project"
    
    def test_project_url_validation(self):
        """Test project URL validation."""
        # objects.create() skips validation; full_clean() runs field validators
        project = Project(
            name="Test",
            url="not-a-url",
        )
        with pytest.raises(ValidationError):
            project.full_clean()

Fixtures

tests/conftest.py
import pytest
from apps.owasp.models import Project, Chapter
from apps.github.models import GitHubUser

@pytest.fixture
def sample_project():
    """Create a sample project for testing."""
    return Project.objects.create(
        name="Sample Project",
        description="A sample project for testing",
        level="Lab",
    )

@pytest.fixture
def sample_user():
    """Create a sample GitHub user."""
    return GitHubUser.objects.create(
        login="testuser",
        name="Test User",
        email="test@example.com",
    )

@pytest.fixture
def authenticated_client(sample_user):
    """Create an authenticated client."""
    from django.test import Client
    client = Client()
    client.force_login(sample_user)
    return client
Usage:
def test_with_fixtures(sample_project, authenticated_client):
    response = authenticated_client.get(
        f"/api/v0/projects/{sample_project.slug}/"
    )
    assert response.status_code == 200

Mocking External Services

tests/test_integrations.py
import pytest
from unittest.mock import Mock, patch
from apps.github.services import GitHubService

@pytest.mark.django_db
class TestGitHubIntegration:
    @patch('apps.github.services.Github')
    def test_fetch_repositories(self, mock_github):
        """Test fetching repositories from GitHub."""
        # Arrange
        mock_repo = Mock()
        mock_repo.name = "test-repo"
        mock_repo.description = "Test repository"
        
        mock_org = Mock()
        mock_org.get_repos.return_value = [mock_repo]
        
        mock_github.return_value.get_organization.return_value = mock_org
        
        # Act
        service = GitHubService()
        repos = service.fetch_repositories("OWASP")
        
        # Assert
        assert len(repos) == 1
        assert repos[0].name == "test-repo"
        mock_github.return_value.get_organization.assert_called_with("OWASP")
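The same `patch` technique handles the time-dependent functions mentioned under Best Practices. A minimal sketch, assuming a hypothetical `format_timestamp` helper (not part of the Nest codebase):

```python
# Sketch: patching the datetime class so "now" is deterministic in tests.
# format_timestamp is a hypothetical helper, not from the OWASP Nest codebase.
from datetime import datetime, timezone
from unittest.mock import patch


def format_timestamp() -> str:
    """Return the current UTC time as an ISO date string."""
    return datetime.now(timezone.utc).strftime("%Y-%m-%d")


def test_format_timestamp_is_deterministic():
    fixed = datetime(2024, 1, 15, tzinfo=timezone.utc)
    # Patch this module's datetime; wraps= keeps other attributes working
    with patch(f"{__name__}.datetime", wraps=datetime) as mock_dt:
        mock_dt.now.return_value = fixed
        assert format_timestamp() == "2024-01-15"
```

Patching where the name is *looked up* (this module) rather than where it is defined is what makes the override take effect.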

Frontend Tests

Test Configuration

Frontend tests use Jest with React Testing Library:
jest.config.ts
import type { Config } from 'jest'

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 95,
      functions: 95,
      lines: 95,
      statements: 95,
    },
  },
  testEnvironment: 'jest-environment-jsdom',
  setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'],
}

export default config

Test Structure

frontend/__tests__/
├── unit/                   # Unit tests
│   ├── components/
│   │   ├── ProjectCard.test.tsx
│   │   └── SearchBar.test.tsx
│   └── utils/
│       └── formatDate.test.ts
├── a11y/                   # Accessibility tests
│   └── pages.test.tsx
├── e2e/                    # End-to-end tests
│   ├── auth.spec.ts
│   └── projects.spec.ts
└── mockData/               # Test data
    └── projects.ts

Writing Frontend Tests

__tests__/unit/components/ProjectCard.test.tsx
import { render, screen } from '@testing-library/react'
import { ProjectCard } from '@/components/ProjectCard'

describe('ProjectCard', () => {
  const mockProject = {
    id: '1',
    name: 'OWASP Top 10',
    description: 'Top 10 Web Application Security Risks',
    level: 'Flagship',
    url: 'https://owasp.org/www-project-top-ten/',
  }
  
  it('renders project information', () => {
    render(<ProjectCard project={mockProject} />)
    
    expect(screen.getByText('OWASP Top 10')).toBeInTheDocument()
    expect(screen.getByText(/Top 10 Web Application/)).toBeInTheDocument()
    expect(screen.getByText('Flagship')).toBeInTheDocument()
  })
  
  it('renders a link to the project', () => {
    render(<ProjectCard project={mockProject} />)
    
    const link = screen.getByRole('link', { name: /view project/i })
    expect(link).toHaveAttribute('href', mockProject.url)
  })
  
  it('displays project level badge', () => {
    render(<ProjectCard project={mockProject} />)
    
    const badge = screen.getByText('Flagship')
    expect(badge).toHaveClass('badge-flagship')
  })
})

Accessibility Tests

__tests__/a11y/pages.test.tsx
import { render } from '@testing-library/react'
import { axe, toHaveNoViolations } from 'jest-axe'
import Home from '@/app/page'
import ProjectsPage from '@/app/projects/page' // adjust to match the app router layout

expect.extend(toHaveNoViolations)

describe('Accessibility', () => {
  it('home page has no accessibility violations', async () => {
    const { container } = render(<Home />)
    const results = await axe(container)
    
    expect(results).toHaveNoViolations()
  })
  
  it('projects page has no accessibility violations', async () => {
    const { container } = render(<ProjectsPage />)
    const results = await axe(container)
    
    expect(results).toHaveNoViolations()
  })
})

E2E Tests with Playwright

__tests__/e2e/projects.spec.ts
import { test, expect } from '@playwright/test'

test.describe('Projects', () => {
  test('should display project list', async ({ page }) => {
    await page.goto('/projects')
    
    // Check page title
    await expect(page).toHaveTitle(/Projects/)
    
    // Check projects are displayed
    const projects = page.locator('[data-testid="project-card"]')
    await expect(projects).toHaveCount(10)
  })
  
  test('should filter projects by level', async ({ page }) => {
    await page.goto('/projects')
    
    // Select filter
    await page.selectOption('[name="level"]', 'Flagship')
    
    // Wait for results
    await page.waitForLoadState('networkidle')
    
    // Check filtered results
    const projects = page.locator('[data-testid="project-card"]')
    const firstProject = projects.first()
    await expect(firstProject).toContainText('Flagship')
  })
  
  test('should navigate to project detail', async ({ page }) => {
    await page.goto('/projects')
    
    // Click first project
    await page.click('[data-testid="project-card"]:first-child a')
    
    // Check detail page
    await expect(page).toHaveURL(/\/projects\/[^/]+$/)
    await expect(page.locator('h1')).toBeVisible()
  })
})

Fuzz Testing

Fuzz testing exercises API endpoints with random or invalid inputs:
make test-fuzz
Uses Schemathesis to automatically generate test cases from the OpenAPI schema:
import schemathesis

schema = schemathesis.from_uri("http://backend:8000/api/docs/openapi.json")

@schema.parametrize()
def test_api_fuzzing(case):
    case.call_and_validate()
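The generate-and-validate idea behind Schemathesis can be sketched at unit level with plain random inputs; `clamp` below is a hypothetical helper, not part of the Nest codebase:

```python
# Unit-level fuzzing sketch: feed many random inputs and assert an
# invariant holds for every one. clamp is a hypothetical helper.
import random


def clamp(value: int, low: int = 0, high: int = 100) -> int:
    """Restrict value to the inclusive range [low, high]."""
    return max(low, min(high, value))


def fuzz_clamp(trials: int = 1_000) -> None:
    """Check the range invariant across randomized inputs."""
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(trials):
        value = rng.randint(-10**9, 10**9)
        result = clamp(value)
        assert 0 <= result <= 100, f"invariant broken for {value}"


fuzz_clamp()
```

Schemathesis applies the same loop to whole HTTP requests, deriving the input generator and the validity checks from the OpenAPI schema instead of hand-written invariants.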

Security Scanning

Code Security

make security-scan-code
Runs:
  • Semgrep - Static analysis for security patterns
  • Trivy - Vulnerability scanning

Image Security

make security-scan-images
Scans Docker images for:
  • Known vulnerabilities
  • Misconfigurations
  • Exposed secrets

ZAP Scanning

make security-scan-zap
Runs OWASP ZAP baseline scan for:
  • XSS vulnerabilities
  • SQL injection
  • CSRF issues
  • Security headers

Coverage Reports

Viewing Coverage

# Run tests (generates coverage.xml)
make test-backend

# View in terminal
# Coverage report shown after tests

# Open HTML report
cd backend
coverage html
open htmlcov/index.html

Coverage Thresholds

Both backend and frontend require 95% coverage:
[tool.pytest.ini_options]
addopts = [
  "--cov-fail-under=95",
]

CI/CD Integration

Tests run automatically on every pull request:
.github/workflows/test.yml
name: Tests

on: [push, pull_request]

jobs:
  backend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run backend tests
        run: make test-backend
      - name: Upload coverage
        uses: codecov/codecov-action@v3
  
  frontend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run frontend tests
        run: make test-frontend
      - name: Upload coverage
        uses: codecov/codecov-action@v3

Best Practices

Follow Test-Driven Development (TDD):
  1. Write failing test
  2. Implement feature
  3. Make test pass
  4. Refactor
Test behavior, not implementation:
// Bad: Testing implementation
expect(component.state.count).toBe(1)

// Good: Testing behavior
expect(screen.getByText('Count: 1')).toBeInTheDocument()

Use descriptive test names:
# Bad
def test_project():
    pass

# Good
def test_create_project_with_valid_data_succeeds():
    pass
Always mock:
  • External APIs (GitHub, Slack, OpenAI)
  • File system operations
  • Network requests
  • Time-dependent functions
Each test should:
  • Run independently
  • Not depend on other tests
  • Clean up after itself
  • Use fixtures for setup
Test edge cases, including:
  • Empty inputs
  • Invalid data
  • Boundary conditions
  • Error scenarios
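The edge-case checklist above maps naturally onto `pytest.mark.parametrize`; `slugify` here is a minimal stand-in, not the project's real implementation:

```python
# Sketch: one parametrized test covering typical, empty, whitespace-only,
# and punctuation-boundary inputs. slugify is a stand-in implementation.
import re

import pytest


def slugify(name: str) -> str:
    """Lowercase and replace runs of non-alphanumerics with single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")


@pytest.mark.parametrize(
    ("name", "expected"),
    [
        ("Test Project", "test-project"),   # typical input
        ("", ""),                           # empty input
        ("   ", ""),                        # whitespace only
        ("OWASP Top 10!", "owasp-top-10"),  # punctuation at a boundary
    ],
)
def test_slugify_edge_cases(name, expected):
    assert slugify(name) == expected
```

Each tuple becomes its own test case in the report, so a failing edge case is identified immediately without obscuring the passing ones.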

Debugging Tests

# Add a breakpoint in code
import pdb; pdb.set_trace()

# Or drop into the debugger automatically on failure
pytest --pdb

# Run single test
pytest tests/test_models.py::TestProjectModel::test_create_project

# Run with verbose output
pytest -vv

# Show print statements
pytest -s

Next Steps

  • Contributing: submit your changes
  • Backend Guide: backend development
  • Frontend Guide: frontend development
  • Architecture: system architecture