This document covers how AI tools were used throughout the development of the Workforce Management Platform, as required by the assignment.
| Tool | Model | Purpose |
|---|---|---|
| Claude.ai | Claude Sonnet 4.5 | Primary tool — architecture planning, code generation, debugging, documentation |
| GitHub Copilot | GPT-4o | In-editor autocomplete for boilerplate, repetitive patterns |
Claude was the primary driver of this project. It was used conversationally — the entire development process was guided through a single long conversation where each phase was planned and implemented step by step.
AI was used heavily in the planning phase before writing a single line of code.
- “How should the project be structured using Clean Architecture to maintain separation of concerns?” The AI suggested organizing the solution into Domain, Application, Infrastructure, and Presentation layers, where the Domain layer contains business entities and rules, the Application layer handles use cases and interfaces, the Infrastructure layer manages external concerns such as databases and APIs, and the Presentation layer exposes the system through APIs or UI. This structure improves maintainability and testability.
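As a rough sketch, the suggested layering might translate into a solution layout like the following (folder names are illustrative assumptions, not copied from the repository):

```
src/
├── Domain/          # entities and business rules; no external dependencies
├── Application/     # use cases, DTOs, repository/service interfaces
├── Infrastructure/  # EF Core, MongoDB, RabbitMQ implementations
└── Presentation/    # ASP.NET Core API controllers
```

Dependencies point inward: Presentation and Infrastructure depend on Application, which depends only on Domain.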
- “What responsibilities should Worker 1 and Worker 2 have, and how can different programming languages be justified for them?” The discussion clarified the distinction between reactive and proactive processing. Worker 1 was designed to be event-driven, reacting to incoming events in real time, while Worker 2 was implemented as a scheduled worker responsible for periodic background tasks.
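The reactive/proactive split can be sketched in a few lines. This is a minimal TypeScript sketch with assumed names; the real workers consume RabbitMQ messages and use a proper scheduler rather than `setInterval`:

```typescript
// Sketch only: event type and function names are assumed, not taken from the project.
type LeaveRequestedEvent = { requestId: string; employeeId: string };

// Worker 1 (reactive): does nothing until an event is delivered to it.
function createEventHandler(auditLog: string[]) {
  return (event: LeaveRequestedEvent): void => {
    auditLog.push(`leave ${event.requestId} requested by ${event.employeeId}`);
  };
}

// Worker 2 (proactive): fires on a fixed schedule regardless of traffic.
function startScheduledWorker(run: () => void, intervalMs: number) {
  return setInterval(run, intervalMs);
}
```

The distinction matters for language choice: the reactive worker is dominated by message-broker I/O, while the scheduled worker is dominated by periodic aggregation work.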
- “How should we determine which data is better suited for SQL versus MongoDB based on the system requirements?” The AI explained the advantages of using embedded documents for leave approval history, which influenced the final decision to store hierarchical approval data in MongoDB while keeping structured relational data in SQL.
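A minimal sketch of what that embedded shape might look like (field names are assumptions for illustration, not the project's actual schema):

```typescript
// The whole approval chain lives inside one MongoDB document, so reading a
// request's history is a single lookup with no joins.
interface ApprovalStep {
  approverId: string;
  decision: "Pending" | "Approved" | "Rejected";
  decidedAt?: string; // ISO-8601 timestamp
}

interface LeaveApprovalDocument {
  _id: string;                     // leave request id
  employeeId: string;              // references the relational Employees row
  approvalHistory: ApprovalStep[]; // embedded, ordered by approval level
}

// Convenience accessor: the most recent decision in the chain.
function latestDecision(doc: LeaveApprovalDocument): string {
  const last = doc.approvalHistory[doc.approvalHistory.length - 1];
  return last ? last.decision : "Pending";
}
```

Flat, heavily related data (employees, departments, projects) stays relational, while this hierarchical, read-together data fits the document model.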
- “What is the best approach to keep the domain layer independent of database technologies in a .NET application?” The AI recommended implementing the Repository interface pattern, allowing the domain and application layers to remain database-agnostic without directly depending on EF Core or any specific persistence framework.
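The recommendation can be sketched as follows. This is an illustrative TypeScript analogue with assumed names, not the project's C# code, and the methods are synchronous for brevity:

```typescript
interface Employee { id: string; name: string; departmentId: string }

// The application/domain side depends only on this abstraction...
interface IEmployeeRepository {
  getById(id: string): Employee | null;
  save(employee: Employee): void;
}

// ...while the infrastructure side supplies the concrete store. An EF Core
// or MongoDB-backed class would satisfy the same interface.
class InMemoryEmployeeRepository implements IEmployeeRepository {
  private store = new Map<string, Employee>();
  getById(id: string): Employee | null { return this.store.get(id) ?? null; }
  save(employee: Employee): void { this.store.set(employee.id, employee); }
}

// An application service written purely against the abstraction: it has no
// idea which persistence technology sits behind the interface.
class EmployeeService {
  constructor(private readonly repo: IEmployeeRepository) {}
  rename(id: string, name: string): boolean {
    const employee = this.repo.getById(id);
    if (!employee) return false;
    this.repo.save({ ...employee, name });
    return true;
  }
}
```

Swapping EF Core for another provider then touches only the infrastructure layer.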
- “What CI/CD pipeline strategy should be used to automate build, testing, and deployment?” The AI recommended setting up a CI/CD pipeline that automatically triggers on code commits, performs build and unit tests, runs static analysis, and then deploys the application to staging or production environments. This approach ensures faster feedback, consistent deployments, and improved code quality.
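The described flow could be sketched as a GitHub Actions workflow along these lines (job names, action versions, and the .NET version are assumptions for illustration, not the project's actual workflow file):

```yaml
name: ci
on: [push, pull_request]           # trigger on every commit and PR
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'  # assumed version
      - run: dotnet build --configuration Release
      - run: dotnet test --no-build --configuration Release
      # static analysis and a deploy-to-staging job would follow here
```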
The following were generated by Claude with minimal modification:
- Domain Entities — all domain entities.
- EF Core DbContext — `WorkforceDbContext` with all entity configurations (relationships, indexes, Skills array conversion)
- Application Services — all service implementations with mapping logic
- API Controllers — all 8 controllers with full CRUD, validation, and error handling
- Seed Data — 51 employees across 8 departments with realistic data, 6 projects, 19 tasks, 8 leave requests
- React Frontend — all pages, components, API client, TypeScript types, routing, layout
- Node.js Report Worker — aggregator queries, scheduler, db connection pooling
- Docker Compose — all services with health checks, dependencies, environment variables
- GitHub Actions CI — all pipeline jobs
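The health-check and dependency wiring can be sketched as a Compose fragment like this (service names and build paths are assumed; the credentials mirror the `rmq_user`/`rmq_pass` pair discussed in the troubleshooting notes below):

```yaml
services:
  rabbitmq:
    image: rabbitmq:3-management
    environment:
      RABBITMQ_DEFAULT_USER: rmq_user
      RABBITMQ_DEFAULT_PASS: rmq_pass
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
  audit-worker:
    build: ./workers/audit           # path assumed
    depends_on:
      rabbitmq:
        condition: service_healthy   # wait for a passing health check
```

One caveat, relevant to the credential incident described later: `RABBITMQ_DEFAULT_USER`/`RABBITMQ_DEFAULT_PASS` only take effect when the data volume is first initialized.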
The following required significant manual intervention:
- appsettings.json files — connection strings adjusted for the local environment
- RabbitMQ credentials — had to be manually reset via `rabbitmqctl` after a volume conflict
- AuditLog.cs BsonId — AI initially generated `ObjectId`, but the Audit Worker wrote GUID strings; manually changed to `BsonType.String`
- AuditConsumerService rename — renamed to `RabbitMqConsumer` for consistency
- Repository implementations — the AI had to be given explicit instructions on implementing the repositories behind abstraction layers
All AI-generated code was reviewed by:
- Reading through the generated file before saving
- Running `dotnet build` after every .NET file
- Running `npm run dev` after every frontend change
- Testing each endpoint in Swagger before moving to the next feature
- Checking MongoDB Compass to verify documents were written correctly
Problem: The Audit Worker kept getting `ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN` despite correct credentials in appsettings.json.
AI Initial Suggestion: Change the connection string format, add `authSource=admin`, and restart the container.
Why It Failed: The RabbitMQ Docker volume had already been initialized with different credentials (rmq_user/rmq_pass). Since the volume already existed, Docker ignored the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS environment variables on subsequent container starts, meaning the new credentials were never applied. Additionally, the RabbitMQ Desktop version was running in the background on the local machine, which caused the AMQP port (5672) to be occupied and blocked the Docker container from using it.
Resolution: Manually ran `docker volume rm workforce-platform-main_rabbitmq_data` followed by `docker compose up rabbitmq -d`, which forced RabbitMQ to reinitialize with the new credentials.
Also discovered that the volume was named `workforce-platform-main_*` (after an earlier folder name), not `workforce-platform_*`. Additionally, I stopped the RabbitMQ Desktop application that was running in the background.
Lesson: The AI partially identified the volume-related issue after reviewing the full error output. However, reaching the correct diagnosis required prior experience and consulting the RabbitMQ documentation.
Problem: GET /api/v1/auditlogs returned 500.
Root Cause: The API's AuditLog.cs entity had `[BsonId] [BsonRepresentation(BsonType.ObjectId)]`, but the Audit Worker stored documents with a GUID string as `_id` (for idempotency). The MongoDB driver threw a deserialization exception trying to parse a GUID as an ObjectId.
AI Initial Suggestion: Change to BsonType.String.
Resolution: Updated AuditLog.cs to use `BsonType.String` and dropped the AuditLogs collection to remove documents with mismatched `_id` formats.
Lesson: Ensure MongoDB entity mappings match the actual stored data, especially _id types, to prevent deserialization errors. AI suggestions help, but validating against the database schema is essential.
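The idempotency design behind the mismatch can be sketched like this (a TypeScript sketch with assumed names, mirroring the Node/worker side of the contract rather than the C# entity):

```typescript
// The worker keys each audit document by the event's own GUID, so a
// redelivered event maps to the same _id instead of creating a duplicate.
interface AuditDocument {
  _id: string;        // GUID string taken from the event, NOT a Mongo ObjectId
  action: string;
  occurredAt: string; // ISO-8601 timestamp
}

function toAuditDocument(event: { eventId: string; action: string; occurredAt: string }): AuditDocument {
  return { _id: event.eventId, action: event.action, occurredAt: event.occurredAt };
}
```

Because `_id` is a plain GUID string end to end, the API entity must map it as a string as well; an upsert keyed on this `_id` makes event redelivery harmless.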
Problem: After installing RabbitMQ.Client v7, the generated code for RabbitMqConsumer used the v6 API — `IModel`, `EventingBasicConsumer`, `DispatchConsumersAsync`, `CreateConnection()` — none of which exist in v7.
Resolution: During debugging, it was discovered that these methods did not exist in the installed library version. After consulting the official RabbitMQ documentation, the v6→v7 breaking changes were identified, and the consumer was rewritten against the new async API: `IChannel`, `AsyncEventingBasicConsumer`, `CreateConnectionAsync()`, and `CreateChannelAsync()`.
Lesson: AI-generated code should always be verified against the latest library documentation, especially when major version changes may introduce breaking API updates.
Claude Sonnet 4.5 was the primary model used throughout.
Strengths:
- Excellent at generating consistent, well-structured .NET code following Clean Architecture patterns across many files
- Remembered context well across a very long conversation — kept track of folder structure, naming conventions, and architectural decisions made many steps earlier
- When given full error output (stack traces, exception messages), diagnosis was accurate and targeted
- Proactively flagged issues — e.g. warned that `TaskStatus` conflicts with `System.Threading.Tasks.TaskStatus` before it caused a compilation error
Weaknesses:
- Occasionally generated code for the wrong version of a library (RabbitMQ.Client v6 vs v7)
- MongoDB BSON serialization edge cases (ObjectId vs string _id, camelCase vs PascalCase) required multiple iterations to fully resolve
- Some fixes were suggested but not consistently applied across all affected files — required re-scanning the repo to catch
- Long conversation context occasionally caused earlier fixes to be "forgotten" in later responses
- Scaffolding speed — generating the entire Clean Architecture folder structure, all entity classes, repositories, services, and controllers in sequence was dramatically faster than writing by hand. Estimated 70% time saving on boilerplate.
- Consistency — naming conventions, namespace patterns, and architectural decisions were applied uniformly across all files because the AI maintained context throughout
- Cross-stack knowledge — switching between .NET, Node.js, React TypeScript, Docker, and GitHub Actions without losing context was the biggest productivity gain
- Environment-specific issues — AI could not fully diagnose problems like Docker volume persistence, Windows-specific path conflicts, or RabbitMQ credential initialization without access to the actual system environment
- Cross-service serialization contracts — AI missed the runtime issue caused by differences in JSON serialization: Node.js writing camelCase versus the .NET MongoDB driver deserializing PascalCase. A thorough review of data contracts would have caught this earlier
- Test coverage — AI did not proactively suggest writing tests. Given more time, unit tests for services and integration tests for repositories would be added.
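A small normalization helper, had one existed, would have made the camelCase/PascalCase mismatch visible early. This is a hypothetical sketch: `toPascalCaseKeys` is not part of the project and only handles top-level keys:

```typescript
// Convert the Node worker's camelCase keys to the PascalCase names the
// .NET side maps by default (shallow: nested objects are left untouched).
function toPascalCaseKeys(doc: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(doc)) {
    out[key.charAt(0).toUpperCase() + key.slice(1)] = value;
  }
  return out;
}
```

Running even one document through such a helper, or asserting on key casing in a contract test, would have surfaced the mismatch before runtime.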
- Verify environment setup early (Docker, ports, credentials).
- Ensure cross-service data contracts match (serialization/deserialization).
- Add unit and integration tests to catch issues proactively.
- Check library versions and breaking changes before implementation.