This project is a proof-of-concept implementation of a Model Context Protocol (MCP) server and a custom AI-powered bookstore agent. It demonstrates how an LLM can call structured tools exposed by an MCP server (e.g., search books) and how a separate agent process can communicate with that server to provide conversational responses.
The repo contains:

- **An MCP server**
  - Built using `@modelcontextprotocol/sdk`
  - Exposes a sample of bookstore-related tools (e.g., fetching books)
  - Includes a seeding script to populate mock bookstore data
- **A custom bookstore AI Agent**
  - Connects to the MCP server as a client via a dedicated MCP client service:
    - Uses `@modelcontextprotocol/sdk` with an STDIO-based process transport to talk to the MCP server
    - Discovers available MCP tools at runtime and invokes them dynamically
  - Uses OpenAI to answer user queries
  - Decides when to call MCP tools to fetch inventory data
  - Can respond normally or stream responses token-by-token
- **A lightweight Express API**
  - Provides endpoints for:
    - non-streaming LLM answers
    - streaming LLM answers (chunked over HTTP)
  - Demonstrates how a backend service can expose the agent to external clients
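The agent's runtime tool discovery boils down to translating MCP tool metadata into OpenAI function definitions before each chat request. A minimal sketch of that translation — the type shape, helper name, and the `search-books` tool are illustrative assumptions, not the repo's actual code:

```typescript
// Simplified shape of what MCP listTools() returns, and the OpenAI
// chat-completions "tools" format the agent translates it into.
type McpTool = { name: string; description?: string; inputSchema: object };

function toOpenAiTools(mcpTools: McpTool[]) {
  return mcpTools.map((t) => ({
    type: "function" as const,
    function: {
      name: t.name,
      description: t.description ?? "",
      parameters: t.inputSchema, // MCP input schemas are already JSON Schema
    },
  }));
}

// Hypothetical tool, standing in for whatever listTools() discovers:
const tools = toOpenAiTools([
  {
    name: "search-books",
    description: "Search bookstore inventory",
    inputSchema: { type: "object", properties: { genre: { type: "string" } } },
  },
]);
console.log(tools[0].function.name); // search-books
```

When the model responds with a tool call, the agent maps it back by name and invokes the MCP tool with the model-supplied arguments.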
- Model Context Protocol (MCP) – structured tool calling using `@modelcontextprotocol/sdk`
- OpenAI API – LLM responses + streaming output
- Express 5 – simple HTTP layer
- TypeScript – typed agent + server
- Zod – schema validation
- Firebase Firestore – bookstore database
- `main` – Production branch. Represents the stable, production-ready version of the code and is used for deployments to the live environment.
- `dev` – Development branch. The default branch for ongoing development: new features and bug fixes are implemented and tested here before being merged into `main`. Used for deployments to the staging environment.
- `npm run dev:server` – Run the MCP server in development mode with hot-reloading
- `npm run dev:agent` – Run the Bookstore Agent in development mode with hot-reloading
- `npm run build` – Compile the TypeScript code to JavaScript
- `npm run start:server` – Start the MCP server in production mode
- `npm run start:agent` – Start the Bookstore Agent in production mode
- `npm run seed:bookstore` – Seed the Firestore database with mock bookstore data
- Download Claude Desktop
- Locate the config file

  From the Claude app: Settings -> Developer -> Edit Config
  The path is typically:

  ```shell
  code "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
  ```

- Prepare the config file

  Check the absolute path to the server:

  ```shell
  pwd
  ```

  Ensure Claude will run the correct version of Node:

  ```shell
  which node
  # should output something like:
  # /Users/mistergreen/.nvm/versions/node/v22.17.0/bin/node
  ```

  Use this path as the `command` in the server config below (instead of simply `node`).
**Option A – Use the built JS file**

If you run `npm run build` and want Claude to call the compiled JS:

```json
{
  "mcpServers": {
    "demo-server": {
      "command": "/Users/mistergreen/.nvm/versions/node/v22.17.0/bin/node",
      "args": ["<ABSOLUTE-PATH>/dist/mcp-server/index.js"],
      "env": {
        "LOG_LEVEL": "info"
      }
    }
  }
}
```

**Option B – Run TypeScript directly (good for dev)**
If you don’t want to build every time, Claude can launch your server through tsx:

```json
{
  "mcpServers": {
    "demo-server": {
      "command": "/Users/mistergreen/.nvm/versions/node/v22.17.0/bin/node",
      "args": ["<ABSOLUTE-PATH>/node_modules/tsx/dist/cli.js", "<ABSOLUTE-PATH>/src/mcp-server/index.ts"],
      "env": {
        "LOG_LEVEL": "info"
      }
    }
  }
}
```

- Test
- Save `claude_desktop_config.json`.
- Quit and restart Claude Desktop.
- Testing tools: start a new chat and try:
  - "Use the add tool with a=2 and b=3"
  - Claude should discover the `add` tool.
- Debug
  - From the Claude app: Settings -> Developer -> Logs
- Start the MCP server

  ```shell
  npm run dev:server
  ```

- Start the agent API

  ```shell
  npm run dev:agent
  ```

- Send requests to the agent API, e.g. using curl:

  ```shell
  curl -X POST http://localhost:5000/ask \
    -H "Content-Type: application/json" \
    -d '{"question": "Can you recommend me a sci-fi book?"}'
  ```