@@ -46,8 +46,7 @@ In this exercise you use the Security Reviewer Agent to scan the sample app sour
 
 4. Note the severity level assigned to each finding. Critical and High findings represent immediate risks that should be addressed before deployment.
 
-> [!NOTE]
-> Screenshot placeholder: `  `
+![Security reviewer agent output in Copilot Chat](../images/lab-03/lab-03-security-agent-findings.png)
 
 ### Exercise 3.2: Infrastructure Security Scanning
 
@@ -69,8 +68,7 @@ Next, scan the infrastructure-as-code template for security misconfigurations.
 
 3. For each finding, note the line number in `main.bicep` and the recommended remediation.
 
-> [!NOTE]
-> Screenshot placeholder: `  `
+![IaC security agent findings for main.bicep](../images/lab-03/lab-03-iac-scan.png)
 
 ### Exercise 3.3: Supply Chain Security
 
@@ -91,8 +89,7 @@ Now analyze the project dependencies for known vulnerabilities and license risks
 
 3. Note which dependencies the agent flags and the recommended upgrade paths.
 
-> [!NOTE]
-> Screenshot placeholder: `  `
+![Supply chain agent findings](../images/lab-03/lab-03-supply-chain.png)
 
 ### Exercise 3.4: Compare Findings Against Known Issues
 
@@ -115,8 +112,7 @@ In Lab 01 you manually reviewed the sample app and identified intentional vulner
 * Did you spot anything in Lab 01 that the agents did not flag?
 * How does automated agent scanning complement manual code review?
 
-> [!NOTE]
-> Screenshot placeholder: `  `
+![Findings compared to known issues](../images/lab-03/lab-03-comparison.png)
 
 ## Verification Checkpoint
 