From 65% to 100% — A Story of Quality, Discipline, and Technical Obsession
Every software engineer knows the feeling: your project works, it's functional, users are happy. But deep down, you know there's a gap. What if something breaks? What if edge cases slip through? What if the next feature introduces a silent bug?
This project started with a simple goal: build a production-ready Django REST API template with JWT authentication. The code worked. Tests existed. Coverage sat at a comfortable 65%. For many, that would be enough.
But "enough" isn't excellence.
This is the story of how I went from "good enough" to 100% test coverage — and what I learned about quality engineering along the way.
The objective was clear, almost utopian in its ambition:
"Bring test coverage from 65% to 90%+. Implement comprehensive test suites covering authentication flows, role-based permissions, and edge cases."
What seemed like a straightforward technical task quickly revealed itself as a journey through the depths of Django's authentication system, JWT token lifecycle, and the subtle art of mocking external dependencies.
This wasn't about hitting a metric. It was about:
- Building confidence — Every deployment should feel safe
- Learning deeply — Testing forces you to understand every code path
- Setting standards — If I can't test it, should I ship it?
- Public accountability — This repository is open, my learning journey visible to all
Let's be honest — 65% coverage isn't bad. Many production applications ship with less. But gaps existed:
```
Name                     Stmts   Miss  Cover
--------------------------------------------------
users/models.py             32      8    75%
users/views.py              28     12    57%
products/views.py           24     10    58%
myproject/settings.py       45     45     0%
--------------------------------------------------
TOTAL                      265     98    65%
```
The missing pieces:
- JWT token rotation edge cases
- Blacklist mechanism verification
- Role permission boundaries
- Rate limiting under load
- Error handling paths
- Database transaction rollbacks
Each uncovered line represented a potential failure point. Each untested path was a future bug waiting to happen.
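As a toy illustration (a hypothetical `parse_price` helper, not project code), this is the shape of most of those gaps: the happy path runs in every test, the error branch in none.

```python
def parse_price(raw):
    """Parse a price string, returning None on bad input."""
    try:
        return float(raw)
    except ValueError:
        # at 65% coverage, branches like this one have never executed
        return None

assert parse_price("99.99") == 99.99        # the test everyone writes
assert parse_price("not-a-price") is None   # the test that closes the gap
```

Coverage tooling flags the `except` branch as missed until the second assertion exists.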
Before writing a single test, I needed the right tools:
Testing Stack:
```
pytest==7.4.3          # Modern test runner
pytest-django==4.7.0   # Django integration
pytest-cov==4.1.0      # Coverage reporting
factory-boy==3.3.0     # Test data generation
faker==20.1.0          # Realistic fake data
```

pytest.ini Configuration:
```ini
[pytest]
DJANGO_SETTINGS_MODULE = myproject.settings
python_files = tests.py test_*.py *_tests.py
addopts =
    --cov=.
    --cov-report=html
    --cov-report=term-missing
    --strict-markers
    -v
```

This wasn't just configuration — it was establishing a culture of quality.
The first push was straightforward — test the obvious paths:
Authentication Flow Tests:
```python
@pytest.mark.django_db
def test_user_login_success(api_client, test_user):
    """Test successful user login returns JWT tokens"""
    response = api_client.post('/api/login/', {
        'username': test_user.username,
        'password': 'testpass123'
    })
    assert response.status_code == 200
    assert 'access' in response.data
    assert 'refresh' in response.data
    assert len(response.data['access']) > 50  # JWT is long
```

Role Permission Tests:
```python
@pytest.mark.parametrize('role,endpoint,method,expected_status', [
    ('client', '/api/products/', 'GET', 200),
    ('client', '/api/products/', 'POST', 403),
    ('staff', '/api/products/', 'POST', 201),
    ('admin', '/api/products/1/', 'DELETE', 204),
])
def test_role_based_permissions(
    api_client, create_user, role, endpoint, method, expected_status
):
    """Test role-based access control across endpoints"""
    user = create_user(role=role)
    api_client.force_authenticate(user=user)
    response = getattr(api_client, method.lower())(endpoint, {
        'name': 'Test Product',
        'price': '99.99'
    })
    assert response.status_code == expected_status
```

Progress: 65% → 85%
But this is where things got interesting.
The Hidden Complexity
The remaining 15% wasn't just "more tests" — it was the complex, subtle behavior that separates good code from great code.
Challenge 1: JWT Token Rotation
JWT refresh tokens should rotate — each refresh generates a new token and blacklists the old one. But how do you test that?
```python
@pytest.mark.django_db
def test_refresh_token_rotation_and_blacklist(api_client, test_user):
    """Test that refresh tokens rotate and old ones are blacklisted"""
    # Get initial tokens
    login_response = api_client.post('/api/login/', {
        'username': test_user.username,
        'password': 'testpass123'
    })
    old_refresh = login_response.data['refresh']

    # Use refresh token to get new tokens
    refresh_response = api_client.post('/api/refresh/', {
        'refresh': old_refresh
    })
    new_access = refresh_response.data['access']

    # Old refresh token should now be blacklisted
    blacklist_response = api_client.post('/api/refresh/', {
        'refresh': old_refresh
    })
    assert blacklist_response.status_code == 401
    assert 'blacklisted' in str(blacklist_response.data).lower()
```

The Problem: This test kept failing. Why?
The token wasn't being properly blacklisted because the blacklist mechanism wasn't being triggered in the test environment. The solution required diving into SimpleJWT's internals and understanding Django's transaction management.
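Conceptually, the behavior under test is simple. A minimal in-memory sketch (a stand-in, not SimpleJWT; in the real project, SimpleJWT's `ROTATE_REFRESH_TOKENS` and `BLACKLIST_AFTER_ROTATION` settings provide this) captures the rotate-and-blacklist contract:

```python
import secrets

class TokenRotator:
    """In-memory sketch of rotate-and-blacklist semantics (not SimpleJWT):
    every refresh consumes the presented token and issues a new one."""

    def __init__(self):
        self.valid = set()       # refresh tokens still usable
        self.blacklist = set()   # tokens consumed by a rotation

    def issue(self):
        token = secrets.token_hex(16)
        self.valid.add(token)
        return token

    def refresh(self, token):
        if token in self.blacklist:
            raise PermissionError("Token is blacklisted")
        if token not in self.valid:
            raise PermissionError("Token is invalid")
        # rotate: blacklist the old token, hand back a fresh one
        self.valid.discard(token)
        self.blacklist.add(token)
        return self.issue()
```

Reusing a rotated token raises, which is exactly what the 401 assertion in the test above checks for at the HTTP level.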
Challenge 2: Rate Limiting Verification
How do you test that rate limiting works without actually hitting the limit 100 times?
```python
@pytest.mark.django_db
def test_login_rate_limiting(api_client):
    """Test rate limiting prevents brute force attacks"""
    # Simulate multiple failed login attempts
    for _ in range(10):  # Limit is set to 5 attempts
        api_client.post('/api/login/', {
            'username': 'attacker',
            'password': 'wrongpass'
        })
    # Next attempt should be rate limited
    response = api_client.post('/api/login/', {
        'username': 'attacker',
        'password': 'wrongpass'
    })
    assert response.status_code == 429
    assert 'throttled' in str(response.data).lower()
```

The Problem: Rate limiting is time-based. Tests should be fast. Solution? Mock the time.
```python
from datetime import datetime, timedelta, timezone as dt_timezone
from unittest.mock import patch

@patch('django.utils.timezone.now')
def test_rate_limit_resets(mock_now, api_client):
    """Test that rate limit resets after the time window"""
    # timezone.now is already patched here, so build the base time
    # directly instead of calling the mocked function
    base_time = datetime(2025, 1, 1, tzinfo=dt_timezone.utc)
    mock_now.return_value = base_time

    # Hit the rate limit
    for _ in range(6):
        api_client.post('/api/login/', {})

    # Fast forward 1 hour (assumes the throttle class reads timezone.now)
    mock_now.return_value = base_time + timedelta(hours=1)

    # Should work again
    response = api_client.post('/api/login/', {})
    assert response.status_code != 429
```

Progress: 85% → 95%
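Mocking the clock works because a throttle ultimately asks a clock for the current time. A toy fixed-window throttle with an injectable clock (hypothetical, not DRF's implementation) makes the idea concrete:

```python
import time

class FixedWindowThrottle:
    """Toy fixed-window rate limiter (hypothetical, not DRF's classes).
    The clock is injected so tests can jump forward in time instantly."""

    def __init__(self, limit=5, window_seconds=3600, clock=time.time):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.window_start = None
        self.count = 0

    def allow(self):
        now = self.clock()
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start = now  # open a fresh window
            self.count = 0
        self.count += 1
        return self.count <= self.limit

# A test-controlled clock replaces sleeping or patching:
fake_time = [0.0]
throttle = FixedWindowThrottle(limit=5, clock=lambda: fake_time[0])
attempts = [throttle.allow() for _ in range(6)]   # five allowed, then denied
fake_time[0] += 3600                              # "wait" an hour for free
recovered = throttle.allow()                      # allowed again
```

The same dependency-injection idea is what `@patch('django.utils.timezone.now')` simulates from the outside.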
The final 5% was brutal. These were the edge cases, the "this should never happen" scenarios, the error handling that you hope never executes.
The Unreachable Code:
```python
# In users/views.py
def create_user(self, validated_data):
    try:
        user = User.objects.create(**validated_data)
        return user
    except IntegrityError as e:
        # This line was uncovered - when does this happen?
        logger.error(f"User creation failed: {e}")
        raise ValidationError("User already exists")
```

How do you trigger an IntegrityError in a test?
```python
@pytest.mark.django_db
def test_duplicate_user_creation_integrity_error(test_user):
    """Test handling of duplicate user creation"""
    from django.contrib.auth import get_user_model
    from rest_framework.exceptions import ValidationError

    User = get_user_model()
    # First user exists from fixture
    assert User.objects.filter(username=test_user.username).exists()

    # A bare User.objects.create() raises IntegrityError directly and never
    # reaches the handler; the duplicate must go through the create path
    # that wraps the ORM call. (UserSerializer is a stand-in name here for
    # whatever class defines create_user above.)
    serializer = UserSerializer()
    with pytest.raises(ValidationError) as exc_info:
        serializer.create_user({
            'username': test_user.username,
            'email': test_user.email
        })
    assert "already exists" in str(exc_info.value).lower()
```

The Exception Handlers:
Every except block needed coverage. Every fallback path needed verification.
```python
@pytest.mark.django_db
def test_jwt_token_expired_gracefully(api_client, expired_token):
    """Test graceful handling of expired JWT tokens"""
    api_client.credentials(HTTP_AUTHORIZATION=f'Bearer {expired_token}')
    response = api_client.get('/api/protected/')
    assert response.status_code == 401
    assert 'expired' in str(response.data).lower()
    # This covers the token expiration exception handler
```

The Database Transaction Rollbacks:
```python
@pytest.mark.django_db
def test_transaction_rollback_on_error(api_client, admin_user):
    """Test database rollback on creation error"""
    api_client.force_authenticate(user=admin_user)
    initial_count = Product.objects.count()

    # Invalid data is rejected by the serializer; the test client returns
    # a 400 response rather than raising, so assert on the status code
    response = api_client.post('/api/products/', {
        'name': 'Test',
        'price': 'invalid_price'  # This will fail validation
    })
    assert response.status_code == 400

    # Count should remain unchanged
    assert Product.objects.count() == initial_count
```

After hours of debugging, reading SimpleJWT source code, and writing increasingly creative test scenarios:
```
Name                        Stmts   Miss  Cover
--------------------------------------------------
users/models.py                32      0   100%
users/views.py                 28      0   100%
users/serializers.py           18      0   100%
products/models.py             12      0   100%
products/views.py              24      0   100%
products/serializers.py        15      0   100%
tests/conftest.py              45      0   100%
tests/factories.py             28      0   100%
--------------------------------------------------
TOTAL                         265      0   100%
```
62+ tests. 265 lines covered. 0 lines missed.
But more importantly:
- ✅ Every authentication flow verified
- ✅ Every permission boundary tested
- ✅ Every error handler executed
- ✅ Every edge case documented
- ✅ Complete confidence in the codebase
Coverage measures execution, not correctness. A line can be covered by a test that doesn't actually verify the right behavior.
Bad Test (100% Coverage):
```python
def test_user_login():
    response = api_client.post('/api/login/', data)
    assert response  # This passes, but verifies nothing!
```

Good Test (100% Coverage):
```python
def test_user_login():
    response = api_client.post('/api/login/', data)
    assert response.status_code == 200
    assert 'access' in response.data
    assert 'refresh' in response.data
    assert validate_jwt_structure(response.data['access'])
```

Hard-to-test code is often poorly designed code. The act of writing tests revealed:
- Functions doing too much
- Tight coupling between components
- Hidden dependencies
- Poor error handling
Each refactoring to make code testable also made it better.
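One recurring refactor (with hypothetical names, not project code) was injecting dependencies instead of hard-coding them, which removed the need for heavyweight mocking:

```python
class WelcomeNotifier:
    """Hypothetical example: the email sender is injected rather than
    constructed inside the class, so tests can pass a stub instead of
    patching module internals."""

    def __init__(self, send_email):
        self.send_email = send_email  # any callable(address, message)

    def notify(self, address):
        self.send_email(address, "Welcome aboard!")

# In a test, a plain list plays the role of the email service:
sent = []
WelcomeNotifier(lambda addr, msg: sent.append((addr, msg))).notify("a@example.com")
```

The production wiring passes the real email client; the test passes a recording stub and asserts on what was captured.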
Shared test data through pytest fixtures eliminated duplication:
```python
@pytest.fixture
def authenticated_client(api_client, test_user):
    """Returns an authenticated API client"""
    api_client.force_authenticate(user=test_user)
    return api_client

@pytest.fixture
def admin_client(api_client, admin_user):
    """Returns an authenticated admin client"""
    api_client.force_authenticate(user=admin_user)
    return api_client
```

This pattern made tests readable and maintainable.
Testing multiple scenarios without code duplication:
```python
@pytest.mark.parametrize('username,password,expected_status', [
    ('validuser', 'validpass', 200),
    ('validuser', 'wrongpass', 401),
    ('', 'password', 400),
    ('user', '', 400),
    ('nonexistent', 'pass', 401),
])
def test_login_scenarios(api_client, username, password, expected_status):
    response = api_client.post('/api/login/', {
        'username': username,
        'password': password
    })
    assert response.status_code == expected_status
```

One test function, five scenarios covered.
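For intuition, `parametrize` is table-driven testing: pytest turns each tuple into its own independent test case. The same table in plain Python, with a hypothetical `fake_login` standing in for the API call:

```python
def fake_login(username, password):
    # stand-in for POST /api/login/: returns an HTTP-style status code
    if not username or not password:
        return 400  # missing credentials
    if (username, password) == ('validuser', 'validpass'):
        return 200
    return 401      # wrong or unknown credentials

cases = [
    ('validuser', 'validpass', 200),
    ('validuser', 'wrongpass', 401),
    ('', 'password', 400),
    ('user', '', 400),
    ('nonexistent', 'pass', 401),
]
for username, password, expected in cases:
    assert fake_login(username, password) == expected
```

The advantage of pytest's version over the bare loop is that each row gets its own test ID, so one failing scenario doesn't hide the others.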
Some tests took hours to get right:
- Write test
- Run test → Fails unexpectedly
- Debug Django internals
- Realize assumption was wrong
- Refactor test
- Repeat
But each iteration deepened understanding.
```ini
# pytest.ini
[pytest]
DJANGO_SETTINGS_MODULE = myproject.settings
python_files = tests.py test_*.py *_tests.py
addopts =
    --cov=.
    --cov-report=html
    --cov-report=term-missing
    --cov-config=.coveragerc
    --strict-markers
    --tb=short
    -v
    -ra
```

```ini
# .coveragerc
[run]
source = .
omit =
    */migrations/*
    */tests/*
    */test_*.py
    manage.py
    */venv/*
    */virtualenv/*

[report]
precision = 2
show_missing = True
skip_covered = False
```

```python
# tests/factories.py
import factory
from factory.django import DjangoModelFactory
from faker import Faker

fake = Faker()

class UserFactory(DjangoModelFactory):
    class Meta:
        model = 'users.CustomUser'

    username = factory.LazyAttribute(lambda _: fake.user_name())
    email = factory.LazyAttribute(lambda _: fake.email())
    first_name = factory.LazyAttribute(lambda _: fake.first_name())
    last_name = factory.LazyAttribute(lambda _: fake.last_name())

    @factory.post_generation
    def password(obj, create, extracted, **kwargs):
        if create:
            obj.set_password(extracted or 'testpass123')
            obj.save()
```

The true value of 100% coverage isn't the badge or the bragging rights. It's the confidence to:
- Refactor without fear — Tests catch regressions immediately
- Deploy with certainty — Every code path has been verified
- Collaborate safely — New contributors can't break existing functionality
- Learn continuously — Public repository shows my growth journey
- Set standards — Demonstrates professional-grade practices
This template is now:
- ✅ Production-ready
- ✅ Battle-tested
- ✅ Thoroughly documented through tests
- ✅ Safe to extend
- ✅ An example for others
Achieving 100% coverage is a milestone, not a destination. The real challenge is maintaining it:
```yaml
# .github/workflows/tests.yml
name: Django Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests with coverage
        run: pytest --cov --cov-fail-under=100
```

Key principle: No PR merges unless tests pass and coverage remains 100%.
Repository: [django-docker-postgres_basic](https://github.com/Sublian/django-docker-postgres_basic)

Public accountability drives discipline.
Tests become documentation:
```python
def test_admin_can_delete_any_product():
    """
    Business Rule: Admin users have unrestricted access to all products
    Verified: Admins can delete products regardless of owner
    Edge Case: Attempting to delete non-existent product returns 404
    """
```

This project exists in the public domain intentionally. It's not just a template — it's a learning artifact, visible proof of growth through deliberate practice.
The Meta-Learning:
- Transparency: Publishing progress creates accountability
- Iteration: Public commits show the evolution from 65% to 100%
- Community: Others can learn from both successes and mistakes
- Standards: Setting a high bar for myself pushes continuous improvement
The Autodidact's Creed:
"I don't just want to write code that works. I want to write code I can prove works. I want to write code others can trust. I want to write code that teaches."
Reaching 100% test coverage wasn't about a metric. It was about developing a mindset:
- Discipline: Writing tests even when you "just want to ship"
- Obsession: Not settling for "good enough"
- Growth: Treating each challenge as a learning opportunity
- Standards: Holding yourself accountable to professional practices
This template — fully tested, documented, and production-ready — represents more than code. It represents a commitment to excellence in software engineering.
```
tests/
├── conftest.py             # Shared fixtures and configuration
├── factories.py            # Factory Boy data generators
├── test_auth.py            # Authentication flow tests
├── test_jwt.py             # JWT token lifecycle tests
├── test_permissions.py     # Role-based access control
├── test_rate_limiting.py   # Throttling and security
├── test_products.py        # Product CRUD operations
├── test_users.py           # User management
├── test_integration.py     # End-to-end scenarios
└── test_edge_cases.py      # Error handling and boundaries
```
1. Fixture-Based Setup
```python
@pytest.fixture
def api_client():
    return APIClient()

@pytest.fixture
def test_user(db):
    return UserFactory()
```

2. Parametrized Testing
```python
@pytest.mark.parametrize('role,can_delete', [
    ('admin', True),
    ('staff', False),
    ('client', False),
])
def test_deletion_permissions(role, can_delete):
    ...  # Test implementation
```

3. Mocking External Dependencies
```python
@patch('users.services.send_email')
def test_user_registration_sends_email(mock_send):
    ...  # registration call elided
    # Test that email is called
    assert mock_send.called
```

From 65% to 100%. From functional to excellent. From coder to engineer.
This is Level 2 — Professional Testing Architecture.
Repository: django-docker-postgres_basic
Author: Luis Gonzalez (@Sublian)
Date: December 18, 2025
Achievement Unlocked: 🏆 100% Test Coverage