Clone the repository

```sh
git clone https://github.com/uchiha-vivek/Swades-AI-YC-W22-.git .
```

Install all the dependencies from the root

```sh
npm i
```

Navigate to apps, where the UI and backend are present

```sh
cd apps
```

Navigate to the api folder

```sh
cd api
npm i
```

Navigate to the web folder

```sh
cd ../web
npm i
```

NOTE - Here api is the backend folder and web is the frontend folder.
Inside the packages folder lives all the database logic.
We use PostgreSQL as the database and Prisma as the ORM.

How to set up the Prisma stuff:

```sh
cd packages
npx prisma studio
```

Also, from the root you can run the dev server if you have installed the dependencies:

```sh
npm run dev
```

NOTE - Here we are using `const TEST_USER_ID = '55968d8b-a43a-48e6-bb0e-bd61ff16eb59'` as the user id for now.
This has been done to make sure that a single user is inside his own environment (assume he has authenticated and is authorized inside his workspace, since we are not doing any authentication from the frontend).
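The actual schema lives inside packages; as a rough illustration only (the model and field names below are hypothetical, not the repository's real schema), a Prisma setup for conversations backed by PostgreSQL might look like:

```prisma
// Hypothetical sketch -- the real schema lives in the packages folder.
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id            String         @id @default(uuid())
  conversations Conversation[]
}

model Conversation {
  id        String    @id @default(uuid())
  userId    String
  user      User      @relation(fields: [userId], references: [id])
  messages  Message[]
  createdAt DateTime  @default(now())
}

model Message {
  id             String       @id @default(uuid())
  conversationId String
  conversation   Conversation @relation(fields: [conversationId], references: [id])
  role           String
  content        String
  createdAt      DateTime     @default(now())
}
```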
AI Model - For local testing, Ollama models can also be used.
AGENT ROUTES

Getting all the available agents

METHOD - GET
http://localhost:3001/api/agents

Get Agent Capabilities

METHOD - GET

Order Agent
http://localhost:3001/api/agents/order/capabilities

Billing Agent
http://localhost:3001/api/agents/billing/capabilities

Support Agent
http://localhost:3001/api/agents/support/capabilities

Checking Rate Limiting
http://localhost:3001/api/chat/messages
If more than 20 requests are sent within 60000 ms, a 429 Too Many Requests error is returned.
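The limiter's internals aren't shown here; a minimal fixed-window sketch of the described policy (20 requests per 60 000 ms per key, anything beyond that gets a 429) could look like this. This is an illustration, not the repository's actual implementation:

```typescript
// Fixed-window rate limiter sketch (hypothetical, not the repo's code).
// Allows `limit` requests per `windowMs` for each key (e.g. user id or IP).
type Window = { start: number; count: number };

class RateLimiter {
  private windows = new Map<string, Window>();
  constructor(private limit = 20, private windowMs = 60_000) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= this.limit;
  }
}

// Example: of 21 requests inside one window, only 20 are allowed.
const limiter = new RateLimiter(20, 60_000);
const results: boolean[] = [];
for (let i = 0; i < 21; i++) results.push(limiter.allow("demo-user", 1_000));
console.log(results.filter(Boolean).length); // 20
```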
Chat Routes
NOTE - This id is a demo for now; in a real production system this id would be generated after authentication so that it maps correctly to the specific user.
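A hypothetical sketch of how the hard-coded id could later be swapped for an authenticated one: fall back to TEST_USER_ID only when no authenticated id is present (the function name is illustrative, not the repo's code):

```typescript
// Hypothetical sketch: resolve the user id for a request.
// In production this would come from the auth layer; for the demo we
// fall back to the hard-coded TEST_USER_ID used throughout this README.
const TEST_USER_ID = "55968d8b-a43a-48e6-bb0e-bd61ff16eb59";

function resolveUserId(authenticatedId: string | undefined): string {
  return authenticatedId ?? TEST_USER_ID;
}

console.log(resolveUserId(undefined)); // falls back to TEST_USER_ID
console.log(resolveUserId("real-user-123")); // real-user-123
```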
Send New Message

POST http://localhost:3001/api/chat/messages

The body we are sending:

```json
{
  "userId": "55968d8b-a43a-48e6-bb0e-bd61ff16eb59",
  "message": "Where is my order?"
}
```

This userId is the one we get from Prisma Studio.
Expected Response

```json
{
  "conversationId": "UUID",
  "response": "Order ORD-1001 is currently \"shipped\" (Tracking ID: TRACK-123)"
}
```

This conversationId will be used in subsequent requests.
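To continue a conversation, the returned conversationId presumably goes into the next request body. A small helper sketch (that the API accepts a conversationId field on follow-ups is my assumption, not confirmed above):

```typescript
// Sketch: build chat request bodies. The first message omits
// conversationId; follow-ups include the id the previous response returned.
// (The conversationId request field is an assumption about the API.)
interface ChatRequest {
  userId: string;
  message: string;
  conversationId?: string;
}

function buildChatRequest(
  userId: string,
  message: string,
  conversationId?: string
): ChatRequest {
  return conversationId
    ? { userId, message, conversationId }
    : { userId, message };
}

const first = buildChatRequest("demo-user-id", "Where is my order?");
// Suppose the response carried conversationId "abc-123":
const followUp = buildChatRequest("demo-user-id", "And my invoice?", "abc-123");
console.log(JSON.stringify(first));
console.log(JSON.stringify(followUp));
```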
For streaming we need to use curl; the text streams chunk by chunk:

```sh
curl -N -X POST http://localhost:3001/api/chat/messages/stream \
  -H "Content-Type: application/json" \
  -H "x-user-id: demo-user" \
  -d '{
    "userId": "<use-from-prisma-studio>",
    "message": "Where is my order?"
  }'
```
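Besides curl, the stream could be consumed programmatically. A sketch that accumulates text chunks from an async iterable; the real endpoint's framing (SSE versus raw chunks) is an assumption, and the chunks here are simulated so the example is self-contained:

```typescript
// Sketch: accumulate a chunked text stream into the full response.
// A real client would iterate over the fetch response body instead.
async function collectStream(chunks: AsyncIterable<string>): Promise<string> {
  let full = "";
  for await (const chunk of chunks) {
    // In a UI you would render each chunk as it arrives.
    full += chunk;
  }
  return full;
}

// Simulated stream standing in for the /api/chat/messages/stream response.
async function* fakeStream() {
  yield "Order ORD-1001 is ";
  yield 'currently "shipped"';
}

collectStream(fakeStream()).then((text) => console.log(text));
```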
Listing conversations for a user

METHOD - GET
http://localhost:3001/api/chat/conversations?userId=<prisma-user-id>

Getting conversation history

METHOD - GET
http://localhost:3001/api/chat/conversations/YOUR_CONVERSATION_ID

Deleting a conversation

METHOD - DELETE
http://localhost:3001/api/chat/conversations/YOUR_CONVERSATION_ID

From the root, navigate to apps/api and run the following command:

```sh
npm test
```

Below are the test-case screenshots.
Backend test cases

```sh
npx vitest
```

Frontend test cases

```sh
npx vitest
```

Data model
The frontend has been deployed in Azure Static Web App
The backend has been deployed on Azure App Service
NOTE : Right now some minor issues exist with the backend
Logging Screenshot
High Level Design - What's happening under the hood?

- Set up of monorepo with Hono RPC
- Rate limiting implementation LINK
- Test cases included (screenshots and commands present)
- Compaction included LINK
- Included "Thinking" in the UI
- Deployed live demo frontend. NOTE: the backend was having some issues, so testing locally is preferred; a backend fix will be done soon.
- Classifying intents with the LLM, but the problem is that it over-complicates things and breaks the flow.
- Generating queries using the LLM so that queries can be made against the Prisma database and the response comes out in a natural flow. Currently the response is too straightforward.
- RAG-based architecture for data which is not sensitive
- LangGraph multi-agent system
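The list mentions compaction; a common approach is to fold older messages into a summary once the history exceeds a budget. A hypothetical sketch (the repository's actual strategy may differ, and a real implementation would have an LLM write the summary):

```typescript
// Hypothetical compaction sketch: keep the most recent `keep` messages
// verbatim and collapse everything older into a single summary stub.
interface Msg { role: string; content: string }

function compact(history: Msg[], keep = 4): Msg[] {
  if (history.length <= keep) return history;
  const old = history.slice(0, history.length - keep);
  const summary: Msg = {
    role: "system",
    content: `Summary of ${old.length} earlier messages (placeholder; an LLM would generate this).`,
  };
  return [summary, ...history.slice(-keep)];
}

const history: Msg[] = Array.from({ length: 10 }, (_, i) => ({
  role: i % 2 ? "assistant" : "user",
  content: `message ${i}`,
}));
console.log(compact(history).length); // 5: one summary + last 4 messages
```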
LOOM VIDEO

POST ASSIGNMENT ENHANCEMENTS

This Loom video contains the enhancements:

- The response coming from the database is natural.

What fixes have been done?

- The backend has been deployed.
- The data model was migrated to Supabase due to IPv4 issues.
- Natural responses are now returned from the database.
- Routing of agents is done on the basis of the LLM.
- An abstraction layer was added on top of the queries.




