
Effect-of-Regularization-and-Dropout

A comparative study analyzing how L2 regularization and Dropout, individually and combined, affect neural network generalization, the bias-variance tradeoff, and overfitting prevention.

📊 Project Overview

This project implements a 3-layer neural network and compares four different regularization approaches:

| Model | Train Accuracy | Test Accuracy | Observation |
| --- | --- | --- | --- |
| No Regularization / No Dropout | 94.8% | 91.5% | Overfitting |
| Only L2 Regularization | 93.8% | 93.0% | Balanced |
| Only Dropout | 92.9% | 95.0% | Underfitting |
| Both Regularization & Dropout | 98.0% | 95.0% | Best Generalization |

🔬 Bias-Variance Analysis

| Approach | Bias | Variance | Status |
| --- | --- | --- | --- |
| No Regularization / No Dropout | Low | High | Overfitting |
| Only L2 Regularization | Moderate | Moderate | Balanced |
| Only Dropout | High | Low | Underfitting |
| Both Methods | Low | Low | Improved Generalization |

📈 Training Log (Last 5 of 100 Epochs - Both Methods)

| Epoch | Loss | Train Accuracy |
| --- | --- | --- |
| 96 | 0.1067 | 96.00% |
| 97 | 0.0906 | 97.27% |
| 98 | 0.0915 | 96.79% |
| 99 | 0.0938 | 97.09% |
| 100 | 0.1018 | 96.36% |

🧠 Techniques Implemented

L2 Regularization

  • Adds a penalty term to the cost function: (λ / 2m) · Σ||W||²
  • Prevents excessive reliance on specific features
  • The gradient update includes the regularization term: dW = (1/m) * np.dot(dZ, A_prev.T) + (lambd / m) * W (see the sketch below)
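
As a minimal sketch, assuming a 3-layer network with parameter names W1–W3 and a sigmoid output A3 (the exact helper names in reg_utils.py may differ), the penalized cost could be computed like this:

```python
import numpy as np

def compute_cost_with_l2(A3, Y, parameters, lambd):
    """Cross-entropy cost plus the L2 penalty (lambd / 2m) * sum of squared weights."""
    m = Y.shape[1]
    W1, W2, W3 = parameters["W1"], parameters["W2"], parameters["W3"]
    cross_entropy = -np.sum(Y * np.log(A3) + (1 - Y) * np.log(1 - A3)) / m
    l2_penalty = (lambd / (2 * m)) * (np.sum(np.square(W1))
                                      + np.sum(np.square(W2))
                                      + np.sum(np.square(W3)))
    return cross_entropy + l2_penalty
```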

Dropout

  • Randomly shuts down neurons during training, keeping each one with probability keep_prob
  • Forces the network to learn redundant representations
  • Scales activations by A = A / keep_prob to maintain expected values (inverted dropout; see the sketch below)
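
A minimal inverted-dropout forward step for one hidden layer might look like the following; the ReLU activation and function name are assumptions, not necessarily the notebook's exact code:

```python
import numpy as np

def forward_with_dropout(A_prev, W, b, keep_prob):
    """Forward pass through one hidden layer with inverted dropout."""
    Z = np.dot(W, A_prev) + b
    A = np.maximum(0, Z)                       # ReLU activation (assumed)
    D = np.random.rand(*A.shape) < keep_prob   # mask: keep each neuron with probability keep_prob
    A = (A * D) / keep_prob                    # shut neurons down, then rescale
    return A, D                                # D is reused in the backward pass
```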

Both Combined

  • Uses L2 regularization to constrain weight magnitudes
  • Uses Dropout to prevent co-adaptation of neurons
  • Achieves the best generalization performance of the four configurations (a combined backward step is sketched below)
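
Combining the two methods in backpropagation is mostly bookkeeping: the dropout mask saved from the forward pass gates the incoming gradient, and the L2 term is added to each weight gradient. This is a sketch with assumed names, not the notebook's exact implementation:

```python
import numpy as np

def backward_step_l2_dropout(dA, D, Z, A_prev, W, keep_prob, lambd, m):
    """Backward pass through one hidden layer using both Dropout and L2."""
    dA = (dA * D) / keep_prob                        # apply the mask saved in the forward pass
    dZ = dA * (Z > 0)                                # ReLU derivative (assumed activation)
    dW = np.dot(dZ, A_prev.T) / m + (lambd / m) * W  # L2 adds (lambd / m) * W to the gradient
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db
```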

📊 Test Results (Best Model)

Overall Test Accuracy: 95.0%

🚀 Getting Started

Prerequisites

pip install numpy matplotlib scikit-learn

Run the Project

  • Clone the repository
  • Ensure reg_utils.py and testCases.py are in the same directory
  • Run the Jupyter notebook A3_DL_Tayyabah Rehman_017.ipynb
  • The notebook contains implementations of forward/backward propagation, L2 regularization, and Dropout
  • It also includes a comparison of all four approaches

Key Conclusions

  • Combining regularization and dropout significantly improves generalization
  • The combined model achieved 98% train accuracy and 95% test accuracy
  • L2 regularization alone provides balanced bias-variance tradeoff
  • Dropout alone slightly underfits but shows robustness
  • No regularization leads to clear overfitting
