
🚀 Making Models Efficient

This repository hosts multiple projects focused on building, compressing, and optimizing deep learning models for better speed, memory efficiency, and deployability, with minimal loss of accuracy.


📁 Projects

A complete pipeline demonstrating knowledge distillation using a custom Vision Eagle Attention (VEA)-based teacher and a lightweight CNN student. Includes a performance comparison across accuracy, latency, model size, and parameter count.
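The core of knowledge distillation is the training objective: the student matches the teacher's temperature-softened output distribution while also fitting the hard labels. A minimal NumPy sketch of that combined loss (the function names, `T`, and `alpha` values here are illustrative defaults, not the repository's actual configuration):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft-target term (teacher vs. student at temperature T)
    with the usual hard-label cross-entropy on the student."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    # Soft cross-entropy, scaled by T^2 to keep gradient magnitudes comparable
    kd_term = -(p_teacher * log_p_student).sum(axis=-1).mean() * (T ** 2)
    # Standard cross-entropy against the ground-truth labels
    log_q = np.log(softmax(student_logits) + 1e-12)
    ce_term = -log_q[np.arange(len(labels)), labels].mean()
    return alpha * kd_term + (1 - alpha) * ce_term
```

In a real training loop this loss would be computed on framework tensors (e.g. PyTorch) so gradients flow to the student; the `T**2` scaling follows the standard distillation formulation.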

A study on applying fine-grained weight pruning to a ResNet-18 model trained for 6-class classification. The aim was to investigate whether pruning could improve generalization while reducing the effective parameter count.
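Fine-grained (unstructured) pruning typically means zeroing individual weights with the smallest magnitudes. A minimal sketch of magnitude-based pruning for a single weight tensor (a simplified stand-in for what a framework utility like PyTorch's pruning module does per layer; the function name and sparsity value are illustrative):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out roughly the lowest-magnitude `sparsity` fraction of weights.
    Returns the pruned tensor and the boolean keep-mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In practice the mask is kept fixed (or periodically recomputed) during fine-tuning so the pruned weights stay at zero; ties at the threshold can make the achieved sparsity slightly higher than requested.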