End-to-end system for generating personalized images using fine-tuned diffusion models and a FastAPI backend.
This project implements a system that generates personalized images of a specific subject in different contexts (e.g., superhero, pilot, beach scenes).
The system combines diffusion model fine-tuning with prompt-based generation and a backend API for controlled usage.
The model is trained to associate a special token (e.g., TOK) with a specific subject identity.
During inference, prompts including this token generate images of that subject:
Example: "A portrait of TOK as a superhero"
This enables controlled personalization without modifying the model architecture.
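The token-based conditioning above can be sketched as a small prompt-assembly helper. The template names and the `build_prompt` function are illustrative assumptions; only the `TOK` token and the superhero example come from this project.

```python
# Illustrative sketch of assembling prompts around a special identity token.
SUBJECT_TOKEN = "TOK"  # the token the fine-tuned model associates with the subject

# Hypothetical context templates; the superhero example mirrors the README.
TEMPLATES = {
    "superhero": "A portrait of {token} as a superhero",
    "pilot": "A photo of {token} as a pilot in a cockpit",
    "beach": "A photo of {token} relaxing on a beach at sunset",
}

def build_prompt(context: str, token: str = SUBJECT_TOKEN) -> str:
    """Fill a context template with the subject token."""
    try:
        template = TEMPLATES[context]
    except KeyError:
        raise ValueError(f"unknown context: {context!r}") from None
    return template.format(token=token)

print(build_prompt("superhero"))  # → A portrait of TOK as a superhero
```

Because the identity lives entirely in the token, adding new contexts is just a matter of adding templates; the model weights stay untouched after fine-tuning.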
The system follows this pipeline:
- Collect images of a subject
- Fine-tune a diffusion model (via Replicate)
- Generate images using prompt conditioning
- Handle requests through a FastAPI backend
- Control usage with rate limiting (Upstash Redis)
Examples of generated images using personalized prompts.
Core
- Python
- FastAPI
ML & Generative AI
- Diffusion models
- Replicate API
Infrastructure
- Upstash (Redis)
- Async API handling
- src/ → backend and API logic
- nbs/ → experimentation and development notebooks
- data/references/ → visual reference images for style/context
- results/ → generated outputs
- requirements.txt → dependencies
This project was initially inspired by an online tutorial, but was extended with backend integration, API usage control, and a structured system design.
- Add web interface for user interaction
- Support multiple identities
- Improve prompt controllability
- Optimize fine-tuning workflow


