
swades.ai Full Stack Engineering Assessment


Steps to run the project

Clone the repository

git clone https://github.com/uchiha-vivek/Swades-AI-YC-W22-.git .

Install all the dependencies from the root

npm i

Navigate to the apps directory, where the UI and the backend are present

cd apps

Navigate to the api folder

cd api
npm i

Navigate to the web folder

cd web
npm i

NOTE - Here api is the backend folder and web is the frontend folder

All the database logic lives inside the packages folder

We use PostgreSQL as the database and Prisma as the ORM

How to set up Prisma

cd packages
npx prisma studio
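The README only shows opening Studio; assuming a standard Prisma setup with a DATABASE_URL configured in .env (an assumption, not confirmed by the repo), a typical full sequence before browsing data might look like this:

```shell
# From the packages folder (path taken from the repo layout)
cd packages

# Generate the Prisma client from schema.prisma
npx prisma generate

# Apply migrations to the local PostgreSQL database
# (assumes DATABASE_URL is set in .env)
npx prisma migrate dev

# Browse and edit data in the browser
npx prisma studio
```

`migrate dev` and `generate` are standard Prisma CLI commands; whether this project relies on them is an assumption based on the usual Prisma workflow.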

Once the dependencies are installed, you can also run the project from the root

npm run dev

API Usage and testing

NOTE - We are using const TEST_USER_ID = '55968d8b-a43a-48e6-bb0e-bd61ff16eb59' as the user id for now. This ensures that a single user stays inside his own environment (assume he has authenticated and is authorized inside his workspace, since we are not doing any authentication from the frontend).

AI Model - Ollama models can also be used for local testing

AGENT ROUTES

Getting all the available agents

Method - GET

http://localhost:3001/api/agents

Get Agent Capabilities

METHOD - GET

Order Agent

http://localhost:3001/api/agents/order/capabilities

Billing Agent

http://localhost:3001/api/agents/billing/capabilities

Support Agent

http://localhost:3001/api/agents/support/capabilities
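These GET routes can be exercised directly with curl (this assumes the API server is running locally on port 3001, as in the URLs above):

```shell
# List all available agents
curl http://localhost:3001/api/agents

# Fetch the capabilities of each agent
curl http://localhost:3001/api/agents/order/capabilities
curl http://localhost:3001/api/agents/billing/capabilities
curl http://localhost:3001/api/agents/support/capabilities
```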

Checking Rate Limiting

http://localhost:3001/api/chat/messages

If more than 20 requests are sent within 60000 ms, a 429 Too Many Requests error is returned
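A quick way to see the limiter trip is to fire more than 20 requests in a loop and watch the status codes; this is a sketch assuming the server runs locally and the demo userId from above:

```shell
# Send 25 POSTs in quick succession; requests beyond the 20-per-60000ms
# window should come back as 429 Too Many Requests
for i in $(seq 1 25); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST http://localhost:3001/api/chat/messages \
    -H "Content-Type: application/json" \
    -d '{"userId": "55968d8b-a43a-48e6-bb0e-bd61ff16eb59", "message": "ping"}'
done
```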

Chat Routes

NOTE - This id is a demo for now; in a real production system this id would be generated after authentication so that it maps correctly to the specific user

Send New message

POST http://localhost:3001/api/chat/messages

The body we are sending:

{
  "userId": "55968d8b-a43a-48e6-bb0e-bd61ff16eb59",
  "message": "Where is my order?"
}

This userId can be obtained from Prisma Studio

Expected Response

{
  "conversationId": "UUID",
  "response": "Order ORD-1001 is currently \"shipped\" (Tracking ID: TRACK-123)"
}

This conversationId will be used in the next iterations
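The same request can be sent with curl (server assumed to be running locally on port 3001, userId taken from Prisma Studio as noted above):

```shell
curl -X POST http://localhost:3001/api/chat/messages \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "55968d8b-a43a-48e6-bb0e-bd61ff16eb59",
    "message": "Where is my order?"
  }'
```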

Checking Streaming version of response

For streaming we need to use curl; the text streams chunk by chunk:

curl -N -X POST http://localhost:3001/api/chat/messages/stream \
  -H "Content-Type: application/json" \
  -H "x-user-id: demo-user" \
  -d '{
    "userId": "<use-from-prisma-studio>",
    "message": "Where is my order?"
  }'

Listing conversations for a user

METHOD - GET

http://localhost:3001/api/chat/conversations?userId=<prisma-user-id>

Getting conversation history

METHOD - GET

http://localhost:3001/api/chat/conversations/YOUR_CONVERSATION_ID

Deleting a conversation

METHOD - DELETE

http://localhost:3001/api/chat/conversations/YOUR_CONVERSATION_ID
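The three conversation routes above can be hit with curl, substituting the ids obtained from earlier responses (local server assumed):

```shell
# List conversations for a user (userId from Prisma Studio)
curl "http://localhost:3001/api/chat/conversations?userId=55968d8b-a43a-48e6-bb0e-bd61ff16eb59"

# Fetch the history of one conversation
# (conversationId comes from the send-message response)
curl http://localhost:3001/api/chat/conversations/YOUR_CONVERSATION_ID

# Delete a conversation
curl -X DELETE http://localhost:3001/api/chat/conversations/YOUR_CONVERSATION_ID
```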

How to run the tests

From the root navigate to apps/api

Run the following command

npm test

Below are the test case screenshots

Backend Test cases

npx vitest

Frontend Test cases

npx vitest

Prisma Studio

Data model

INFRA and DEPLOYMENT

The frontend has been deployed to Azure Static Web Apps

Frontend Deployed Link

The backend has been deployed on Azure App Service

NOTE : Right now some minor issues exist with the backend

Logging

Logging Screenshot

System Design and Architecture

High Level Design - What's happening under the hood?

Bonuses Included

  • Set up of a monorepo with Hono RPC

  • Rate Limiting implementation LINK

  • Test cases included Screenshots and commands present

  • Compaction included LINK

  • Included thinking in the UI

  • Deployed live demo frontend. NOTE: The backend was having some issues, so testing locally is preferred; a backend fix will be done soon

Ongoing & Future Enhancements

  • Classifying intents with the LLM; the problem is that it overcomplicates things and breaks the flow.

  • Generating queries using the LLM so that queries can be made against the Prisma database and the response comes in a natural flow. Currently the response is too straightforward.

  • A RAG-based architecture for data which is not sensitive

Motivation for the assignment

  • LangGraph multi agent system

LOOM VIDEO

Loom Video

POST ASSIGNMENT ENHANCEMENT

This Loom video contains the enhancements:

  • the response coming from the database is natural

LOOM Video Enhancement

What fixes have been done?

  • Backend has been deployed
  • The data model was migrated to Supabase due to IPv4 issues
  • Natural responses are now returned from the database
  • Agent routing is done on the basis of the LLM
  • An abstraction layer was added on top of queries
