Fever Plans Microservice

Author: Gerardo Benitez
Date: August 2025
Version: 1.0.0

High-performance microservice for integrating external provider plans into the Fever marketplace with master-slave PostgreSQL replication, Redis caching, and async task processing.

Fever Architecture

(Architecture diagrams: Data Ingest, API)
Data Flow

  1. The Ingest Service fetches XML from the external provider API
  2. It parses the XML and publishes events to Celery via RabbitMQ
  3. Celery workers process the events and upsert them into the PostgreSQL Master
  4. The PostgreSQL Master replicates data to the Slave (streaming replication)
  5. FastAPI reads from the Slave (for performance) and writes to the Master
  6. Redis caches frequent queries for sub-second response times
  7. Clients consume the REST API for plan searches
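Step 2 of this flow, turning the provider's XML into event payloads, can be sketched with Python's standard library. This is a minimal illustration only: the sample feed, element names, and attribute names below are assumptions, not the provider's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of the provider feed (structure and names are illustrative).
SAMPLE_XML = """
<planList>
  <base_plan base_plan_id="291" title="Camela en concierto">
    <plan plan_start_date="2021-06-30T21:00:00" plan_end_date="2021-06-30T22:00:00">
      <zone price="20.00" capacity="240"/>
      <zone price="15.00" capacity="50"/>
    </plan>
  </base_plan>
</planList>
"""

def parse_plans(xml_text: str) -> list[dict]:
    """Parse the provider feed into flat event dicts ready to publish as task payloads."""
    root = ET.fromstring(xml_text)
    events = []
    for base in root.iter("base_plan"):
        for plan in base.iter("plan"):
            prices = [float(z.get("price")) for z in plan.iter("zone")]
            events.append({
                "id": base.get("base_plan_id"),
                "title": base.get("title"),
                "starts_at": plan.get("plan_start_date"),
                "ends_at": plan.get("plan_end_date"),
                # Min/max across zones feed the min_price/max_price API fields.
                "min_price": min(prices) if prices else None,
                "max_price": max(prices) if prices else None,
            })
    return events
```

Each dict returned here maps directly onto one Celery task payload in step 2, so the worker in step 3 never has to touch raw XML.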

Scalability Features

  • Read/Write Separation: Master-Slave PostgreSQL setup
  • Caching Layer: Redis for high-performance queries
  • Async Processing: Celery + RabbitMQ for background tasks
  • Horizontal Scaling: Stateless services ready for load balancing
  • Fault Tolerance: Service isolation and graceful degradation
  • Performance: Sub-second API responses via caching strategy

Main Components:

FastAPI App - Main REST API (port 8000)

PostgreSQL Master - Primary database for writes (port 5432)

PostgreSQL Slave - Read-only replica (port 5433)

Redis - Cache for frequently requested queries (port 6379)

Ingest Service - Periodic synchronization with the external API

Celery Worker - Asynchronous task processing

RabbitMQ - Message broker for Celery (port 5672)

Data flow:

Read: Client → FastAPI → Redis Cache → PostgreSQL Slave

Write: Ingest → RabbitMQ → Celery → PostgreSQL Master → Replication → Slave

Cache: Frequent queries stored in Redis with TTL
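The upsert step of the write path can be sketched as plain SQL generation. This is a simplified illustration: the `events` table and column names are assumptions, and the real worker would presumably execute the statement through SQLAlchemy's async engine rather than building strings by hand.

```python
def build_upsert(event: dict) -> tuple[str, tuple]:
    """Build a PostgreSQL upsert (INSERT ... ON CONFLICT DO UPDATE) for one event.

    Idempotent by design: replaying the same event updates the row in place,
    which is what lets Celery retry tasks safely.
    """
    sql = (
        "INSERT INTO events (id, title, starts_at, ends_at, min_price, max_price) "
        "VALUES (%s, %s, %s, %s, %s, %s) "
        "ON CONFLICT (id) DO UPDATE SET "
        "title = EXCLUDED.title, starts_at = EXCLUDED.starts_at, "
        "ends_at = EXCLUDED.ends_at, min_price = EXCLUDED.min_price, "
        "max_price = EXCLUDED.max_price"
    )
    params = (event["id"], event["title"], event["starts_at"],
              event["ends_at"], event["min_price"], event["max_price"])
    return sql, params
```

Because the write lands only on the Master, the Slave picks it up through streaming replication with no extra application code.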

Quick Start

Prerequisites

  • Docker & Docker Compose
  • Python 3.10+
  • Git

Installation & Setup

# Clone repository
git clone <repository-url>
cd fever

# Install dependencies
make install

# Start the ingest service
make ingest

# Run the application
make run

Docker Services

# Start all services
docker-compose up

# Start specific services
docker-compose up postgres-master postgres-slave redis
docker-compose up api ingest celery

API Documentation

Endpoints

Search Plans

GET /search?starts_at=2023-01-01T00:00:00&ends_at=2023-12-31T23:59:59

Parameters:

  • starts_at (datetime): Start date filter (ISO 8601 format)
  • ends_at (datetime): End date filter (ISO 8601 format)

Response:

{
  "data": {
    "events": [
      {
        "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
        "title": "string",
        "start_date": "2025-08-26",
        "start_time": "22:38:19",
        "end_date": "2025-08-26",
        "end_time": "14:45:15",
        "min_price": 0,
        "max_price": 0
      }
    ]
  },
  "error": null
}

Health Check

GET /health

Response:

{"status": "healthy"}

Interactive Documentation

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Environment Configuration

Required Environment Variables

DATABASE_WRITE_URL=postgresql+asyncpg://user:password@postgres-master:5432/fever_db
DATABASE_READ_URL=postgresql+asyncpg://user:password@postgres-slave:5432/fever_db
REDIS_URL=redis://redis:6379/0
EXTERNAL_API_URL=https://provider.code-challenge.feverup.com/api/events
SYNC_INTERVAL_SECONDS=300
API_TIMEOUT_SECONDS=30
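Loading these variables can be sketched with `os.getenv` and development defaults. This is an illustration, not the project's actual settings module (which may well use Pydantic settings instead); the default URLs assume the local port mapping from docker-compose.

```python
import os

def load_settings() -> dict:
    """Read the service configuration from the environment, with local-dev fallbacks."""
    return {
        "database_write_url": os.getenv(
            "DATABASE_WRITE_URL",
            "postgresql+asyncpg://user:password@localhost:5432/fever_db"),
        "database_read_url": os.getenv(
            "DATABASE_READ_URL",
            "postgresql+asyncpg://user:password@localhost:5433/fever_db"),
        "redis_url": os.getenv("REDIS_URL", "redis://localhost:6379/0"),
        # Numeric settings arrive as strings and must be converted explicitly.
        "sync_interval_seconds": int(os.getenv("SYNC_INTERVAL_SECONDS", "300")),
        "api_timeout_seconds": int(os.getenv("API_TIMEOUT_SECONDS", "30")),
    }
```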

Local Development

Create a .env file in the project root with the above variables adjusted for your local setup.

Testing

Unit Tests

# Run all tests
pytest

# Run specific test file
pytest tests/test_celery_worker.py -v

# Run with coverage
pytest --cov=app tests/

Integration Tests

# Test API endpoints
curl "http://localhost:8000/search?starts_at=2023-01-01T00:00:00&ends_at=2023-12-31T23:59:59"

# Test health endpoint
curl http://localhost:8000/health

Monitoring & Observability

Health Checks

  • API Health: GET /health
  • Database: Built-in healthchecks in docker-compose
  • Redis: Connection monitoring
  • Celery: Worker status monitoring

Logging

  • Application Logs: Structured JSON logging:

docker-compose logs -f api

  • Celery Logs: Task execution and error tracking:

docker-compose logs -f celery

  • RabbitMQ Logs: Broker activity:

docker-compose logs -f rabbit

  • Database Logs: Query performance monitoring

Metrics (Recommended)

  • Response time monitoring
  • Cache hit/miss ratios
  • Database connection pool status
  • Celery task queue length

Performance Optimization

Database

  • Indexes: Optimized for time-range queries
  • Connection Pooling: Async connection management
  • Read Replicas: Separate read/write operations

Caching Strategy

  • Redis TTL: 5-minute cache for search results
  • Cache Keys: Based on query parameters
  • Cache Invalidation: Automatic on data updates
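The key derivation and TTL behavior above can be sketched as follows. The in-memory `TTLCache` class is a stand-in for Redis (mirroring its SETEX/GET semantics) so the example stays self-contained; the `search:` key prefix and 16-character digest are illustrative choices, not the service's actual convention.

```python
import hashlib
import json
import time

def cache_key(starts_at: str, ends_at: str) -> str:
    """Derive a deterministic cache key from the search query parameters."""
    raw = json.dumps({"starts_at": starts_at, "ends_at": ends_at}, sort_keys=True)
    return "search:" + hashlib.sha256(raw.encode()).hexdigest()[:16]

class TTLCache:
    """In-memory stand-in for Redis SETEX/GET with per-key expiry, for illustration only."""
    def __init__(self):
        self._store = {}

    def setex(self, key: str, ttl_seconds: int, value: str) -> None:
        # Store the value together with its absolute expiry time.
        self._store[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            # Expired entries behave like a cache miss, as in Redis.
            del self._store[key]
            return None
        return value
```

With a 5-minute TTL (`setex(key, 300, payload)`), identical searches within the window hit the cache, and stale results age out without explicit cleanup.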

API Performance

  • Async Operations: Non-blocking I/O throughout
  • Response Compression: Gzip compression enabled
  • Connection Keep-Alive: HTTP/1.1 persistent connections

Deployment

Production Considerations

Security

  • Use environment-specific credentials
  • Enable SSL/TLS for all connections
  • Implement API rate limiting
  • Set up proper firewall rules

Scaling

# Scale API instances
docker-compose up --scale api=3

# Scale Celery workers
docker-compose up --scale celery=5

Database Backup

# Backup master database
docker exec fever_postgres-master_1 pg_dump -U user fever_db > backup.sql

# Restore from backup
docker exec -i fever_postgres-master_1 psql -U user fever_db < backup.sql

Development

Code Structure

app/
├── core/                 # Celery configuration
├── services/             # Business logic
│   ├── external_api.py   # External API client
│   ├── plans_services.py # Plan retrieval service
│   └── sync_services.py  # Background sync launcher
├── models.py             # Database models
├── schemas.py            # Pydantic schemas
├── database.py           # Database connections
├── cache.py              # Redis cache service
└── main.py               # FastAPI application

tests/
├── test_celery_worker.py
└── ...

Contributing

  1. Fork the repository
  2. Create feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass
  5. Submit pull request

Code Quality

  • Follow PEP 8 style guidelines
  • Add type hints
  • Write comprehensive tests
  • Document complex functions

Note: This architecture is designed for high availability, scalability, and performance.
