posts/from-messy-data-to-production-mlops-my-network-security-journey-part-1.html (1 addition, 1 deletion)
@@ -89,7 +89,7 @@ <h1>Building a Bulletproof ML Pipeline: The Unseen Engineering (Part 1)</h1>
<p>This is the story of building a production-ready machine learning system that detects phishing URLs. In this first part, we'll focus on laying a robust foundation: designing a modular architecture, implementing crucial safeguards like custom logging and data validation, and establishing a reproducible experiment-driven workflow.</p>
- <p><img src="posts_assets/pipeline_workflow_diagram.png" alt="A diagram showing the MLOps pipeline flow from Data Ingestion to Model Pusher." /></p>
+ <p><img src="posts_assets/pipeline_workflow_diagram.webp" alt="A diagram showing the MLOps pipeline flow from Data Ingestion to Model Pusher." /></p>
<h2>The Vision: More Than Just Another ML Project</h2>
posts/from-messy-data-to-production-mlops-my-network-security-journey-part-2.html (1 addition, 1 deletion)
@@ -89,7 +89,7 @@ <h1>The Deployment Gauntlet: From <code>localhost</code> to Live on AWS (Part 2)
<p>This is where theory meets the harsh, humbling reality of production infrastructure. This is the story of the deployment gauntlet.</p>
- <p><img src="posts_assets/network-architecture-diagram.jpg" alt="A diagram showing the overall project architecture including AWS services, MLflow, and the FastAPI application." /></p>
+ <p><img src="posts_assets/network-architecture-diagram.webp" alt="A diagram showing the overall project architecture including AWS services, MLflow, and the FastAPI application." /></p>
<h2>The AWS Deployment Nightmare: When the Cloud Humbles You</h2>
posts/from-notebook-to-ui-the-local-development-journey-part-1.html (4 additions, 4 deletions)
@@ -96,7 +96,7 @@ <h3>The First Big Question: Is My Model Actually Any Good?</h3>
<p><strong>Problem #1: My experiments were chaotic and biased.</strong>
My first attempt at a solution was to bring order to the chaos. I wrote a nested loop to systematically iterate through every combination of model (<code>EfficientNetB0</code>, <code>B2</code>, <code>B4</code>), dataset size, and training duration.</p>
- <p><img src="posts_assets/experiments-table.png" alt="A table showing the structured plan for all 8 experiments, varying model type, data size, and epochs." />
+ <p><img src="posts_assets/experiments-table.webp" alt="A table showing the structured plan for all 8 experiments, varying model type, data size, and epochs." />
<em>Moving from random tweaks to a structured experiment plan like this was the first step toward building a reliable model.</em></p>
<p>This was a huge step forward! But it created a new problem: I was drowning in a sea of <code>print()</code> statements. Comparing the results of run #3 with run #17 was a nightmare of scrolling and squinting.</p>
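The nested experiment loop the post describes can be sketched roughly as follows. This is a hypothetical stand-in, not the post's actual code: the grid values and the `train_model` helper are assumptions chosen to produce the 8 runs mentioned above.

```python
from itertools import product

# Hypothetical experiment grid in the spirit of the post:
# model variant x dataset fraction x training duration.
models = ["effnetb0", "effnetb2"]
data_fractions = [0.1, 0.2]
epoch_counts = [5, 10]

# Every combination becomes one experiment (2 x 2 x 2 = 8 runs).
experiments = list(product(models, data_fractions, epoch_counts))

for i, (model_name, fraction, epochs) in enumerate(experiments, start=1):
    print(f"[{i}/{len(experiments)}] model={model_name} "
          f"data={fraction:.0%} epochs={epochs}")
    # results = train_model(model_name, fraction, epochs)  # hypothetical training call
```

Enumerating the grid up front, rather than hand-editing hyperparameters between runs, is what makes the structured experiment table possible.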
@@ -111,7 +111,7 @@ <h3>The First Big Question: Is My Model Actually Any Good?</h3>
<p><strong>Problem #2: I couldn't visualize the story of my training.</strong>
I realized I didn't just need results; I needed a narrative. I needed to see the learning curves to understand <em>how</em> each model was behaving. This is where TensorBoard came in. But my first attempt was, again, a mess. All my logs were jumbled into one confusing timeline.</p>
- <p><img src="posts_assets/tensorboard-accuracy-chart.png" alt="A screenshot of the TensorBoard dashboard showing accuracy curves for multiple experiments." />
+ <p><img src="posts_assets/tensorboard-accuracy-chart.webp" alt="A screenshot of the TensorBoard dashboard showing accuracy curves for multiple experiments." />
<em>TensorBoard made it easy to visually compare all eight experiments. The superiority of the <code>EffNetB2</code> model trained on 20% of the data (the top-performing line) became immediately obvious.</em></p>
<p>The insight wasn't just to <em>use</em> TensorBoard, but to be deliberate about <em>how</em> I organized my logs. I wrote a small utility function to create a clean, timestamped directory structure for every single run, which made the above visualization possible.</p>
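A utility like the one described might look like this. It is a minimal sketch: the function name, argument names, and directory layout are assumptions, not the post's exact helper.

```python
import os
from datetime import datetime

def create_log_dir(experiment_name: str, model_name: str,
                   extra: str = "", root: str = "runs") -> str:
    """Return a timestamped log directory, e.g. runs/2024-06-01/exp/model/extra."""
    timestamp = datetime.now().strftime("%Y-%m-%d")
    parts = [root, timestamp, experiment_name, model_name]
    if extra:
        parts.append(extra)
    log_dir = os.path.join(*parts)
    os.makedirs(log_dir, exist_ok=True)  # safe to call repeatedly across runs
    return log_dir
```

The returned path can then be handed to TensorBoard's `SummaryWriter(log_dir=...)`, so each run lands in its own labeled directory and the dashboard groups the curves by experiment instead of jumbling them into one timeline.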
@@ -129,15 +129,15 @@ <h3>The Second Big Question: How Do I Squeeze Out More Performance?</h3>
<p>My experiments were now reliable, but the best model's accuracy was still just "okay." I knew the answer was <strong>Transfer Learning</strong>, but I soon learned that knowing the name of a technique is very different from implementing it correctly.</p>
- <p><img src="posts_assets/feature-extraction-diagram.png" alt="A diagram showing a large pre-trained model being adapted for a new, smaller task." />
+ <p><img src="posts_assets/feature-extraction-diagram.webp" alt="A diagram showing a large pre-trained model being adapted for a new, smaller task." />
<em>The core concept of feature extraction: keep the pre-trained 'backbone' (the feature learner) and only train a new, small 'head' (the classifier) on our specific data.</em></p>
<p><strong>Problem #3: My GPU was crying and my training was slow.</strong>
My first attempt was naive. I loaded a pre-trained <code>EfficientNet</code>, swapped the final layer for my 3-class classifier, and hit "train." My GPU fan spun up like a jet engine, and the estimated training time was in hours, not minutes.</p>
<p>The "aha!" moment came after digging into how transfer learning truly works. I was trying to retrain the entire network. The solution was to <strong>freeze the backbone</strong>.</p>
- <p><img src="posts_assets/torchinfo-frozen-layers.png" alt="A screenshot of the torchinfo summary showing a massive reduction in trainable parameters." />
+ <p><img src="posts_assets/torchinfo-frozen-layers.webp" alt="A screenshot of the torchinfo summary showing a massive reduction in trainable parameters." />
<em>The proof is in the numbers. After freezing the backbone, the number of trainable parameters dropped from over 4 million to just 3,843, dramatically speeding up training.</em></p>
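The freezing step can be sketched as follows. This is a minimal illustration using a tiny stand-in backbone rather than the post's pre-trained EfficientNet, so the parameter counts here are illustrative only; the pattern (freeze the backbone, train only the new head) is the same.

```python
from torch import nn

# Tiny stand-in "backbone"; the post loads a pre-trained EfficientNet instead.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze every backbone parameter so no gradients are computed for it.
for param in backbone.parameters():
    param.requires_grad = False

# Only this new classifier head (3 classes) remains trainable.
head = nn.Linear(8, 3)
model = nn.Sequential(backbone, head)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / total: {total}")  # only the head's weights and bias train
```

With the backbone frozen, the optimizer only ever updates the head, which is what collapses both the parameter count and the training time.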
<pre><code># The crucial insight: only train the tiny new part of the model