Module: 3
Duration: 45 minutes
Part: Advanced GitHub Copilot (Part 2)
By the end of this lab, you will:
- Apply custom agents to real development workflows
- Compare agent outputs to standard Copilot Chat responses
- Evaluate reliability and consistency differences
- Understand when agents provide meaningful value over ad-hoc prompting
Prerequisites:
- Completion of Lab 07: Custom Agents Intro
- VS Code with GitHub Copilot extension
- Access to the TaskManager workshop repository
- Familiarity with the three custom agents (Architecture Reviewer, Backlog Generator, Test Strategist)
You'll work through three workflow scenarios, each testing a different agent. For each scenario, you'll:
- First: Use standard Copilot Chat (no agent)
- Second: Use the appropriate custom agent
- Compare: Document differences in quality, structure, and consistency
Scenario 1: Backlog Generation

Your team wants to add a notification system to the TaskManager application. Users should receive notifications when:
- A task is assigned to them
- A task deadline is approaching
- A task status changes
You need to break this down into user stories with acceptance criteria.
Instructions:
- Open Copilot Chat (no agent selected)
- Use this prompt:
Create user stories for a notification system in the TaskManager app.
Users should get notifications when tasks are assigned, deadlines approach, or status changes.
Include acceptance criteria.
- Record the output (copy/paste into a document or note the structure)
Questions to consider:
- How are the stories formatted?
- Are acceptance criteria specific and testable?
- Is the output consistent with agile best practices?
Instructions:
- Switch to Agent Mode
- Select Backlog Generator from the agent dropdown
- Use the same (or similar) prompt:
Create user stories for a notification system in the TaskManager app.
Users should get notifications when tasks are assigned, deadlines approach, or status changes.
- Record the output
Expected agent behavior:
- User stories in "As a... I want... So that..." format
- Specific, testable acceptance criteria
- Story points or sizing estimates
- Dependencies identified
- INVEST principles applied
| Aspect | Standard Chat | Backlog Generator Agent |
|---|---|---|
| Story Format | [Your observation] | [Your observation] |
| Acceptance Criteria Quality | [Your observation] | [Your observation] |
| Completeness | [Your observation] | [Your observation] |
| Consistency | [Your observation] | [Your observation] |
| Ready for Sprint Planning? | [Your observation] | [Your observation] |
Reflection:
- Which output would you prefer to present to your product owner?
- Would the agent output save you revision time?
- How much manual cleanup is needed in each case?
Scenario 2: Architecture Review

A team member has submitted a pull request that adds a new NotificationService in the Application layer. You want to ensure it follows Clean Architecture and DDD patterns before approving.
Create a sample file to review (or use an existing Application service):
File: `src/TaskManager.Application/Services/NotificationService.cs`

```csharp
namespace TaskManager.Application.Services;

public class NotificationService
{
    private readonly ITaskRepository _taskRepository;
    private readonly IEmailService _emailService;

    public NotificationService(ITaskRepository taskRepository, IEmailService emailService)
    {
        _taskRepository = taskRepository;
        _emailService = emailService;
    }

    public async Task NotifyTaskAssignedAsync(int taskId, string assigneeEmail)
    {
        var task = await _taskRepository.GetByIdAsync(taskId);
        if (task != null)
        {
            await _emailService.SendAsync(assigneeEmail, "Task Assigned", $"You have been assigned task: {task.Title}");
        }
    }
}
```

Instructions:
- Open Copilot Chat (no agent)
- Open the `NotificationService.cs` file
- Prompt:
Review this NotificationService for Clean Architecture compliance and suggest improvements.
- Record the feedback
Instructions:
- Switch to Agent Mode
- Select Architecture Reviewer from dropdown
- Same prompt:
Review this NotificationService for Clean Architecture compliance and suggest improvements.
- Record the feedback
Expected agent behavior:
- Structured review (Strengths, Concerns, Violations, Recommendations)
- Layer-specific analysis (Application layer rules)
- DDD pattern evaluation
- Dependency direction checks
- Specific, actionable recommendations
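To ground the comparison, here is one example of the kind of refactoring an architecture review might recommend: make the missing-task case explicit instead of silently doing nothing, and put the delivery mechanism behind an Application-owned abstraction so the email details stay in Infrastructure. This is a hedged sketch, not the workshop repository's actual code; the `INotificationSender` interface and the exception choice are illustrative assumptions.

```csharp
namespace TaskManager.Application.Services;

// Hypothetical Application-owned abstraction; an email (or push, or SMS)
// implementation would live in the Infrastructure layer, keeping the
// dependency direction pointing inward.
public interface INotificationSender
{
    Task SendAsync(string recipient, string subject, string body);
}

public class NotificationService
{
    private readonly ITaskRepository _taskRepository;
    private readonly INotificationSender _sender;

    public NotificationService(ITaskRepository taskRepository, INotificationSender sender)
    {
        _taskRepository = taskRepository;
        _sender = sender;
    }

    public async Task NotifyTaskAssignedAsync(int taskId, string assigneeEmail)
    {
        // Fail loudly on a missing task rather than swallowing it.
        var task = await _taskRepository.GetByIdAsync(taskId)
            ?? throw new InvalidOperationException($"Task {taskId} not found.");

        await _sender.SendAsync(assigneeEmail, "Task Assigned",
            $"You have been assigned task: {task.Title}");
    }
}
```

Whether the agent's actual recommendations match this sketch is part of what you're evaluating.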
| Aspect | Standard Chat | Architecture Reviewer Agent |
|---|---|---|
| Review Structure | [Your observation] | [Your observation] |
| Depth of Analysis | [Your observation] | [Your observation] |
| Actionable Recommendations | [Your observation] | [Your observation] |
| Consistency with Standards | [Your observation] | [Your observation] |
| Ready to Use in PR Review? | [Your observation] | [Your observation] |
Reflection:
- Did the agent identify issues standard chat missed?
- Is the agent's format more useful for code review comments?
- Would you trust this agent's review as a first pass?
Scenario 3: Test Strategy

You're implementing a new feature: Task Assignment. A task can be assigned to a user, and the assignment should:
- Validate that the user exists
- Validate that the task exists
- Record assignment timestamp
- Emit a domain event (`TaskAssigned`)
You need a test strategy.
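Before comparing outputs, it helps to picture the behavior under test. Here is a minimal sketch of the assignment logic, assuming hypothetical `TaskItem` and `TaskAssigned` types; the real TaskManager entities and event base class will differ, and the user/task existence checks are assumed to live in the application layer:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical domain event; the workshop repository's event types may differ.
public record TaskAssigned(int TaskId, int UserId, DateTime AssignedAtUtc);

public class TaskItem
{
    private readonly List<object> _domainEvents = new();

    public int Id { get; }
    public string Title { get; }
    public int? AssigneeId { get; private set; }
    public DateTime? AssignedAtUtc { get; private set; }

    public IReadOnlyList<object> DomainEvents => _domainEvents;

    public TaskItem(int id, string title)
    {
        Id = id;
        Title = title;
    }

    // Records the assignment timestamp and emits a TaskAssigned event.
    public void AssignTo(int userId, DateTime nowUtc)
    {
        AssigneeId = userId;
        AssignedAtUtc = nowUtc;
        _domainEvents.Add(new TaskAssigned(Id, userId, nowUtc));
    }
}
```

Good test scenarios should cover each observable effect here: the assignee, the timestamp, and the emitted event.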
Instructions:
- Open Copilot Chat (no agent)
- Prompt:
Propose test scenarios for a task assignment feature.
Validate user and task exist, record timestamp, emit domain event.
- Record the test scenarios
Instructions:
- Switch to Agent Mode
- Select Test Strategist from dropdown
- Same prompt:
Propose test scenarios for a task assignment feature.
Validate user and task exist, record timestamp, emit domain event.
- Record the test scenarios
Expected agent behavior:
- Categorized tests (unit, integration, edge cases)
- AAA pattern (Arrange, Act, Assert) descriptions
- Specific test names
- Edge cases and boundary conditions
- Error handling scenarios
- Testability recommendations
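For reference, here is how one scenario from either output might translate into code: an xUnit-style test following the AAA pattern. The `TaskItem` and `TaskAssigned` types below are minimal hypothetical stand-ins so the example compiles on its own; the workshop repository's real entities and test conventions will differ.

```csharp
using System;
using System.Collections.Generic;
using Xunit;

// Minimal hypothetical domain types; illustrative only.
public record TaskAssigned(int TaskId, int UserId, DateTime AssignedAtUtc);

public class TaskItem
{
    private readonly List<object> _events = new();
    public int Id { get; }
    public int? AssigneeId { get; private set; }
    public DateTime? AssignedAtUtc { get; private set; }
    public IReadOnlyList<object> DomainEvents => _events;

    public TaskItem(int id) => Id = id;

    public void AssignTo(int userId, DateTime nowUtc)
    {
        AssigneeId = userId;
        AssignedAtUtc = nowUtc;
        _events.Add(new TaskAssigned(Id, userId, nowUtc));
    }
}

public class TaskAssignmentTests
{
    [Fact]
    public void AssignTo_ValidUser_RecordsTimestampAndEmitsTaskAssigned()
    {
        // Arrange: a task with no assignee yet
        var task = new TaskItem(1);
        var now = DateTime.UtcNow;

        // Act: assign the task
        task.AssignTo(42, now);

        // Assert: assignee and timestamp recorded, event emitted
        Assert.Equal(42, task.AssigneeId);
        Assert.Equal(now, task.AssignedAtUtc);
        Assert.Single(task.DomainEvents);
        Assert.IsType<TaskAssigned>(task.DomainEvents[0]);
    }
}
```

Note the descriptive name (`Method_Scenario_ExpectedOutcome`) and the explicit AAA comments; compare these against what each output gives you.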
| Aspect | Standard Chat | Test Strategist Agent |
|---|---|---|
| Test Organization | [Your observation] | [Your observation] |
| Coverage Completeness | [Your observation] | [Your observation] |
| Edge Case Identification | [Your observation] | [Your observation] |
| Test Naming Clarity | [Your observation] | [Your observation] |
| Ready to Implement? | [Your observation] | [Your observation] |
Reflection:
- Did the agent identify edge cases you hadn't considered?
- Is the agent's categorization helpful?
- Would you feel confident implementing tests from the agent's output?
Share your findings:
- Which agent provided the most value?
- Did agents catch issues that standard chat missed?
- Were the structured outputs more useful than free-form responses?
- What are the limitations of agents?
Agents are good for:
✅ Structured, repeatable workflows (reviews, planning, analysis)
✅ Consistency across team members (same agent = same format)
✅ Encoding domain expertise (architecture, testing, product practices)
✅ First-pass automation (reducing manual work)
Agents are not:
❌ Always correct - you're still accountable for the output
❌ Replacements for human judgment - they're assistants
❌ One-size-fits-all - use the right agent for the job
Good team use cases:
- Code reviews (automated first pass)
- Backlog grooming (consistent story format)
- Test planning (comprehensive coverage)
- Documentation generation (structured outputs)
- Knowledge transfer (encode team practices)
Try this:
Create a custom scenario for your own work:
- Identify a repetitive workflow in your daily work
- Use standard Chat to perform it
- Then use the most relevant agent (or imagine what agent you'd need)
- Compare the results
Bonus: Draft an agent definition for a workflow you need. (You'll formalize this in Labs 08-09.)
In Lab 09: Agent Design, you'll learn how to design effective agents — instruction components, iteration loops, and governance.