# Cephalometry Using YOLOv9

A computer vision project that applies YOLOv9 object detection for automated cephalometric analysis in dental and orthodontic imaging. This project enables precise detection and measurement of anatomical landmarks in cephalometric X-ray images.

## Table of Contents
- Overview
- Features
- Requirements
- Installation
- Project Structure
- Usage
- Model Configuration
- Detection Parameters
- Output Formats
- Contributing
- License
## Overview

Cephalometry is a diagnostic tool used in orthodontics and oral surgery to analyze the relationships between dental and skeletal structures. This project leverages the power of YOLOv9, a state-of-the-art object detection model, to automatically identify and locate anatomical landmarks in cephalometric radiographs.
- Automated Analysis: Reduces manual measurement time and human error
- High Precision: YOLOv9's advanced architecture ensures accurate landmark detection
- Clinical Integration: Designed for integration into dental practice workflows
- Flexible Output: Multiple output formats for different use cases
## Features

- Real-time Detection: Fast inference on cephalometric images
- Multiple Input Sources: Support for images, videos, webcam, and URLs
- Customizable Confidence Thresholds: Adjustable detection sensitivity
- Multiple Output Formats:
  - Annotated images with bounding boxes
  - Text files with coordinates
  - Cropped landmark regions
- GPU Acceleration: CUDA support for faster processing
- Batch Processing: Process multiple images simultaneously
## Requirements

### System Requirements

- Python 3.8 or higher
- CUDA-compatible GPU (recommended for faster inference)
- Minimum 8GB RAM
- OpenCV-compatible system
### Python Dependencies

```
torch>=1.8.0
torchvision>=0.9.0
opencv-python>=4.5.0
numpy>=1.21.0
Pillow>=8.0.0
PyYAML>=5.4.0
tqdm>=4.60.0
matplotlib>=3.3.0
seaborn>=0.11.0
pandas>=1.3.0
pathlib
argparse
ultralytics
thop
tensorboard
protobuf<4.21.3
```

## Installation

1. Clone the repository:

```bash
git clone https://github.com/selvatharrun/cephalometry-using-yolov9-.git
cd cephalometry-using-yolov9-
```

2. Create and activate a virtual environment:

```bash
python -m venv cephalo_env
source cephalo_env/bin/activate  # On Windows: cephalo_env\Scripts\activate
```

3. Install the dependencies:

```bash
pip install -r requirements.txt
```

4. Install PyTorch with the appropriate backend:

```bash
# For CUDA 11.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# For CPU only
pip install torch torchvision torchaudio
```

5. Download model weights:

```bash
# Download YOLOv9 weights (replace with your trained cephalometry model)
wget https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9c.pt
```

## Project Structure

```
cephalometry-using-yolov9-/
├── detect.py               # Main detection script
├── models/
│   ├── common.py           # Model architecture components
│   └── ...
├── utils/
│   ├── dataloaders.py      # Data loading utilities
│   ├── general.py          # General utility functions
│   ├── plots.py            # Visualization utilities
│   └── torch_utils.py      # PyTorch utilities
├── data/
│   ├── images/             # Input images directory
│   └── coco.yaml           # Dataset configuration
├── runs/
│   └── detect/             # Output directory
├── requirements.txt        # Python dependencies
└── README.md               # Project documentation
```
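Before running detection, you can sanity-check that the core dependencies resolve in the active environment. This is a small stdlib-only sketch (not part of the repository); the package list mirrors `requirements.txt`, with module names adjusted where they differ from the pip package names.

```python
import importlib.util

# Module names differ from pip package names for some dependencies:
# opencv-python -> cv2, Pillow -> PIL, PyYAML -> yaml.
required = ["torch", "torchvision", "cv2", "numpy", "PIL", "yaml", "tqdm"]

# find_spec returns None when a module cannot be imported.
missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All core dependencies found.")
```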
## Usage

### Basic Detection

```bash
python detect.py --source data/images --weights yolo.pt
```

### Single Image with Confidence Threshold

```bash
python detect.py --source path/to/image.jpg --weights yolo.pt --conf-thres 0.5
```

### Video Processing

```bash
python detect.py --source path/to/video.mp4 --weights yolo.pt --save-txt
```

### Webcam Detection

```bash
python detect.py --source 0 --weights yolo.pt --view-img
```

### Advanced Options

```bash
python detect.py \
    --source data/images \
    --weights yolo.pt \
    --project runs/cephalometry \
    --name experiment1 \
    --save-txt \
    --save-conf \
    --conf-thres 0.6 \
    --iou-thres 0.4
```

## Model Configuration

```yaml
# Modify for cephalometric landmarks
names:
  0: nasion
  1: sella
  2: orbitale
  3: porion
  4: anterior_nasal_spine
  5: posterior_nasal_spine
  6: pogonion
  7: menton
  8: gonion
  9: articulare
```

## Detection Parameters

| Parameter | Description | Default | Recommended Range |
|---|---|---|---|
| `--conf-thres` | Confidence threshold | 0.25 | 0.3-0.7 |
| `--iou-thres` | IoU threshold for NMS | 0.45 | 0.4-0.6 |
| `--imgsz` | Input image size | 640 | 640-1280 |
| `--max-det` | Maximum detections | 1000 | 50-200 |
| `--line-thickness` | Bounding box thickness | 1 | 1-5 |
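The `--iou-thres` value controls non-maximum suppression: a box whose intersection-over-union (IoU) with a higher-confidence box exceeds the threshold is discarded as a duplicate. A minimal sketch of the IoU computation itself, in plain Python (illustrative only, not the repository's implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two overlapping detections of the same landmark:
print(round(iou((10, 10, 50, 50), (20, 20, 60, 60)), 3))  # → 0.391
```

With the default `--iou-thres 0.45`, the pair above (IoU ≈ 0.39) would both survive NMS; lowering the threshold to 0.4 and below suppresses the lower-confidence one, which is why tighter thresholds suit landmarks that should appear exactly once per image.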
## Output Formats

### Annotated Images

- Images with bounding boxes around detected landmarks
- Confidence scores displayed
- Saved in `runs/detect/exp/`

### Coordinate Text Files

Format: `class x_center y_center width height confidence`

```
0 0.5234 0.3456 0.0234 0.0345 0.89
1 0.6123 0.4567 0.0198 0.0276 0.92
```

### Cropped Landmark Regions

- Individual landmark regions saved as separate images
- Useful for detailed analysis
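The text-file coordinates are normalized to [0, 1], so converting a detection back to pixel positions requires the source image dimensions. A small parsing sketch (the field layout follows the format above; the function name and the 1935×2400 example size are my own):

```python
def parse_detection(line, img_w, img_h):
    """Convert one 'class x_center y_center width height confidence' line to pixel values."""
    cls, xc, yc, w, h, conf = line.split()
    return {
        "class": int(cls),
        "x_center": float(xc) * img_w,   # normalized -> pixels
        "y_center": float(yc) * img_h,
        "width": float(w) * img_w,
        "height": float(h) * img_h,
        "confidence": float(conf),
    }

det = parse_detection("0 0.5234 0.3456 0.0234 0.0345 0.89", img_w=1935, img_h=2400)
print(det["class"], round(det["x_center"], 1))  # → 0 1012.8
```

For landmark analysis, the box center (`x_center`, `y_center`) is usually taken as the landmark position, since cephalometric landmarks are points rather than regions.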
## Training a Custom Model

1. Prepare a cephalometric dataset with landmark annotations
2. Configure the dataset YAML file
3. Train using the YOLOv9 training script
4. Replace `yolo.pt` with your trained weights

## Customization

- Modify confidence thresholds based on clinical requirements
- Customize landmark classes in the configuration
- Implement post-processing for measurements
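Many cephalometric measurements are angles between lines defined by landmark pairs, e.g. the angle between the Frankfort horizontal (porion–orbitale) and the mandibular plane (gonion–menton). As one example of such post-processing, here is a hedged sketch that computes the angle between two landmark-defined lines; the pixel coordinates below are made up for illustration.

```python
import math

def line_angle_deg(p1, p2, q1, q2):
    """Acute angle in degrees between line p1-p2 and line q1-q2, points as (x, y)."""
    a = (p2[0] - p1[0], p2[1] - p1[1])
    b = (q2[0] - q1[0], q2[1] - q1[1])
    dot = a[0] * b[0] + a[1] * b[1]
    # abs() folds the obtuse case into the acute angle between the lines;
    # min() guards acos against floating-point values slightly above 1.
    cos_t = abs(dot) / (math.hypot(*a) * math.hypot(*b))
    return math.degrees(math.acos(min(1.0, cos_t)))

# Hypothetical landmark centers (x, y) in pixels:
porion, orbitale = (310, 520), (780, 540)
gonion, menton = (420, 980), (760, 1110)
print(round(line_angle_deg(porion, orbitale, gonion, menton), 1))  # → 18.5
```

Linear measurements (e.g. ANS–PNS length) follow the same pattern using `math.dist` between landmark centers, scaled by the image's mm-per-pixel calibration.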
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/improvement`)
3. Commit your changes (`git commit -am 'Add new feature'`)
4. Push to the branch (`git push origin feature/improvement`)
5. Create a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- YOLOv9 team for the excellent object detection framework
- OpenCV community for computer vision tools
- PyTorch team for the deep learning framework
For more detailed information about YOLOv9 architecture and training procedures, refer to the original YOLOv9 documentation.