First AI-Native Performance Testing MCP Server — Integrate k6 load testing with Claude, ChatGPT, and other AI assistants
Transform natural language into production-ready performance tests. The Grafana k6 Performance MCP Server is the first AI-native Model Context Protocol (MCP) server for Grafana k6 load testing. Built for the AI era, it enables developers, QA engineers, and DevOps teams to create, execute, and analyze performance tests through conversational AI interfaces.
- AI-First Design: Natural language → k6 test scripts in seconds
- MCP Native: Seamless integration with Claude Desktop, Cline, and other MCP clients
- Production-Ready: Comprehensive test templates for REST, GraphQL, WebSocket, and gRPC
- Docker Ready: Full containerization with monitoring stack (Grafana, InfluxDB, Prometheus)
- CI/CD Native: GitHub Actions workflows included for automated testing
- Advanced Prompts: Pre-built conversational workflows for common testing scenarios
- Extensible: Modular AI skills, agents, and chat modes
- Multi-Protocol: REST, GraphQL, WebSocket, gRPC support out-of-the-box
├── AI/ # Modular AI components (agents, chatmodes, skills, MCP resources)
│ ├── agent/
│ ├── chatmodes/
│ ├── skills/
│ └── MCP/
├── examples/ # k6 test scripts and templates (api, load, ramping, spike, etc.)
├── src/ # Main server source code
├── install.sh # One-click install script (Linux/macOS)
├── install.ps1 # One-click install script (Windows)
├── package.json # Project metadata and dependencies
├── tsconfig.json # TypeScript configuration
├── README.md # Project documentation
├── LICENSE # License file
└── ... # Other configs, docs, and assets
For most users, just run the provided script for your OS:
- Linux/macOS:
bash install.sh
- Windows:
./install.ps1
This will install dependencies, build the project, and install k6 if needed.
Run with Docker for isolated, reproducible environments:
# Build and run
docker-compose up -d
# Run with monitoring stack (Grafana + InfluxDB + Prometheus)
docker-compose --profile monitoring up -d
# View logs
docker-compose logs -f k6-mcp-server
# Stop services
docker-compose down
Access monitoring dashboards:
- Grafana: http://localhost:3000 (admin/admin)
- InfluxDB: http://localhost:8086
- Prometheus: http://localhost:9090
Manual steps:
- Install dependencies:
npm install
npm run build
- Install k6 (platform-specific installation commands are provided later in this document)
- Run the server:
node build/index.js
- Create and run your first test: Use the provided tools or see tests/ for ready-to-use scripts.
The project provides production-ready k6 test scripts for modern API architectures:
- API Test: tests/api/api-test.js
  - Multi-method testing (GET, POST, PUT, DELETE)
  - Request validation and threshold checks
  - Authentication handling
- Load Test: tests/load/basic-load-test.js
  - Baseline performance measurement
  - Steady-state load simulation
- Ramping Test: tests/ramping/ramping-vus-test.js
  - Gradual load increase/decrease
  - Scaling behavior analysis
- Spike Test: tests/spike/spike-test.js
  - Sudden traffic surge testing
  - System resilience validation
- GraphQL Test: tests/graphql/graphql-test.js
  - Query and mutation testing
  - Variable handling and fragments
  - Error scenario validation
- WebSocket Test: tests/websocket/websocket-test.js
  - Connection lifecycle testing
  - Real-time message latency
  - Chat/streaming simulation
- gRPC Test: tests/grpc/grpc-test.js
  - Unary and streaming RPC
  - Protocol buffer handling
  - Performance comparison with REST
See individual test files for detailed usage instructions and best practices.
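To illustrate the spike pattern described above, here is a minimal sketch of a k6 `options` block with ramping stages. The stage durations, VU targets, and threshold values are illustrative assumptions, not the values used in tests/spike/spike-test.js:

```javascript
// Sketch of a spike-test options block in the shape k6 expects.
// All numbers below are illustrative, not taken from the repo's scripts.
const options = {
  stages: [
    { duration: "30s", target: 10 },  // warm-up at baseline load
    { duration: "10s", target: 200 }, // sudden surge
    { duration: "1m", target: 200 },  // hold the spike
    { duration: "10s", target: 10 },  // drop back to baseline
    { duration: "30s", target: 0 },   // ramp down
  ],
  thresholds: {
    // Spikes usually tolerate a higher error budget than steady-state tests.
    http_req_failed: ["rate<0.05"],
  },
};

// In a real k6 script this would be `export const options = { ... }`;
// here we just inspect the shape.
const peakVus = Math.max(...options.stages.map((s) => s.target));
console.log(`peak VUs: ${peakVus}`);
```

In an actual test file, k6 reads the exported `options` and drives the VU count through each stage in order.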
The AI/ directory contains modular components for building intelligent agents, chat modes, and skills that can be integrated with your MCP server or other Node.js projects.
Structure:
- AI/agent/: Example agent logic and orchestration scripts
- AI/chatmodes/: Chat mode configurations and conversational logic
- AI/skills/: Reusable skill modules (e.g., HTTP requests)
- AI/MCP/: Model Context Protocol resource templates and integration examples
How to Use:
- Import a skill or chat mode in your agent:

// Import a skill and a chat mode
import { getRequest } from "./AI/skills/http-skill.js";
import chatMode from "./AI/chatmodes/simple-chatmode.js";

// Use in your agent logic
export default function agent(context) {
  if (context.input.startsWith("fetch")) {
    const url = context.input.split(" ")[1];
    const res = getRequest(url);
    return `Fetched ${url}: Status ${res.status}`;
  }
  return chatMode(context);
}
- Customize or extend:
  - Add new skills to AI/skills/ (e.g., math, database, etc.)
  - Create new chat modes in AI/chatmodes/
  - Build more advanced agents in AI/agent/
- Integrate with your MCP server or other Node.js apps by importing and composing these modules as needed.
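As a sketch of what a new skill module might look like, here is a hypothetical helper for AI/skills/. The name, signature, and behavior are illustrative examples, not existing exports of this repo:

```javascript
// Hypothetical new skill for AI/skills/ — converts k6-style duration
// strings ("30s", "5m", "1h") into seconds. Name and API are illustrative.
function durationToSeconds(duration) {
  const match = /^(\d+)([smh])$/.exec(duration);
  if (!match) {
    throw new Error(`Unsupported duration: ${duration}`);
  }
  const factor = { s: 1, m: 60, h: 3600 }[match[2]];
  return Number(match[1]) * factor;
}

console.log(durationToSeconds("5m")); // 300
```

A skill like this could be exported from a file under AI/skills/ and imported by an agent the same way the http-skill example above is.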
The agent modules in AI/agent/ are lightweight helpers that take a context object and return plain-text guidance. They are useful when you want to classify a user request before deciding which MCP tool to call.
What each example agent does:
- simple-agent.js: basic skill and chat mode composition example
- test-generation-agent.js: detects protocol, test type, and the closest starter script in tests/
- protocol-advisor-agent.js: recommends the best protocol-specific example for REST, GraphQL, gRPC, or WebSocket testing
- result-analysis-agent.js: interprets pasted k6 metrics and returns a short analysis with next actions
- threshold-advisor-agent.js: suggests p95 latency, error rate, and throughput thresholds calibrated to the detected test type
- scenario-builder-agent.js: detects user journey steps (login, browse, checkout, etc.) and returns a k6 group()-based scenario skeleton
- ci-cd-agent.js: recommends CI/CD pipeline integration steps for GitHub Actions, GitLab CI, Jenkins, Azure DevOps, or CircleCI
Typical usage flow:
- Pass a natural-language request into an agent.
- Use the returned guidance to choose a starter script or MCP tool.
- Call MCP tools such as generate_load_test, create_k6_test, or run_k6_test.
import testGenerationAgent from "./AI/agent/test-generation-agent.js";
const guidance = testGenerationAgent({
input: "Create a spike test for https://api.example.com/orders",
});
console.log(guidance);

Example follow-up mapping:
- use test-generation-agent.js before creating a new script
- use protocol-advisor-agent.js when choosing between REST, GraphQL, gRPC, and WebSocket examples
- use result-analysis-agent.js after a run to summarize p95, error rate, and throughput
- use threshold-advisor-agent.js to generate an options.thresholds block before the first run
- use scenario-builder-agent.js when the request describes a multi-step user journey
- use ci-cd-agent.js to get a copy-paste pipeline snippet for your CI/CD platform
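To make the threshold-advisor idea concrete, here is a hedged sketch of the kind of options.thresholds block such an agent could produce. The function name, the threshold values, and the spike-versus-load split are assumptions for illustration, not the actual output of threshold-advisor-agent.js:

```javascript
// Illustrative threshold advisor: returns a k6-style options.thresholds
// object calibrated to the test type. Values are example budgets only.
function suggestThresholds(testType) {
  if (testType === "spike") {
    // Spike tests tolerate higher latency and error rates than steady load.
    return {
      http_req_duration: ["p(95)<1500"],
      http_req_failed: ["rate<0.05"],
    };
  }
  // Conservative defaults for baseline/load tests.
  return {
    http_req_duration: ["p(95)<500"],
    http_req_failed: ["rate<0.01"],
  };
}

console.log(JSON.stringify(suggestThresholds("spike"), null, 2));
```

The returned object can be pasted directly into a k6 script's `export const options = { thresholds: ... }` block.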
These agents are examples only. They are documented templates you can import into your own Node.js orchestration flow; they are not auto-registered as MCP tools by default.
See the AI/README.md and subfolder READMEs for more details and templates.
The AI/MCP/prompts.md file contains pre-built conversational workflows:
- create_api_load_test: Guided API test creation with best practices
- analyze_performance_results: AI-powered result analysis and recommendations
- setup_spike_test: Black Friday / traffic surge test configuration
- optimize_existing_test: Automatic test script improvements
- setup_ci_cd_integration: Generate CI/CD pipeline configurations
- compare_test_runs: Trend analysis across multiple test runs
- generate_realistic_scenarios: User journey and persona simulation
- debug_failed_test: Intelligent troubleshooting assistance
- capacity_planning: Determine scaling requirements
These prompts enable AI assistants to provide structured, expert guidance for complex testing scenarios.
- Create k6 Tests: Generate custom, reusable k6 performance test scripts for any API or web service.
- Run Tests: Execute k6 load tests with configurable parameters (virtual users, duration, iterations) for flexible benchmarking.
- View Results: Instantly access detailed test execution results and performance metrics.
- List Tests: Organize and manage all available k6 test scripts in one place.
- Generate Load Tests: Quickly generate common load test patterns for rapid prototyping.
- Resource Management: Access test scripts and results as MCP resources for easy integration.
- Node.js 18+
- k6 (must be installed and available in your system PATH)
macOS:
brew install k6
Linux:
sudo gpg -k
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6
Windows:
choco install k6
Or download from k6 releases.
npm install
npm run build
You can customize storage locations using environment variables:
- K6_TESTS_DIR: Directory for storing k6 test scripts (default: ./k6-tests)
- K6_RESULTS_DIR: Directory for storing test results (default: ./k6-results)
Add to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"grafana-k6-performance": {
"command": "node",
"args": [
"/absolute/path/to/Grafana-k6-performance-MCP-Server/build/index.js"
],
"env": {
"K6_TESTS_DIR": "/path/to/k6-tests",
"K6_RESULTS_DIR": "/path/to/k6-results"
}
}
}
}

Run the server using stdio transport:
node build/index.js

Create a new k6 performance test script.
Parameters:
- name (string, required): Name of the test file (without .js extension)
- script (string, required): The k6 test script content
Example:
{
"name": "api-test",
"script": "import http from 'k6/http';\nexport default function() {\n http.get('https://api.example.com');\n}"
}

Run a k6 performance test.
Parameters:
- testFile (string, required): Name of the test file to run (e.g., "test.js")
- vus (number, optional): Number of virtual users (default: 10)
- duration (string, optional): Test duration (e.g., "30s", "5m") (default: "30s")
- iterations (number, optional): Number of iterations per VU (overrides duration)
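The iterations-overrides-duration rule can be sketched as a mapping from these parameters to k6 CLI flags (`--vus`, `--duration`, `--iterations` are standard k6 flags). The function below is an assumption about how such a mapping could work, not the server's actual implementation:

```javascript
// Illustrative mapping from run_k6_test parameters to k6 CLI arguments.
// The real server code in src/ may build the command differently.
function buildK6Args({ testFile, vus = 10, duration = "30s", iterations }) {
  const args = ["run", "--vus", String(vus)];
  if (iterations !== undefined) {
    // iterations takes precedence over duration when both are supplied.
    args.push("--iterations", String(iterations));
  } else {
    args.push("--duration", duration);
  }
  args.push(testFile);
  return args;
}

console.log(buildK6Args({ testFile: "api-test.js", vus: 50, iterations: 100 }).join(" "));
```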
Example:
{
"testFile": "api-test.js",
"vus": 50,
"duration": "2m"
}

List all available k6 test scripts.
Parameters: None
Get results from previous k6 test runs.
Parameters:
- testName (string, optional): Name of the test to get results for (returns all if not specified)
Example:
{
"testName": "api-test.js"
}

Generate a k6 load test script with common patterns.
Parameters:
- name (string, required): Name of the test
- url (string, required): Target URL to test
- method (string, optional): HTTP method (GET, POST, PUT, DELETE) (default: GET)
- vus (number, optional): Number of virtual users (default: 10)
- duration (string, optional): Test duration (default: "30s")
Example:
{
"name": "quick-load-test",
"url": "https://api.example.com/endpoint",
"method": "POST",
"vus": 100,
"duration": "5m"
}

The server exposes k6 test scripts and results as MCP resources for easy programmatic access:
- Test Scripts: k6://tests/(unknown) — Access k6 test script content
- Test Results: k6://results/(unknown) — Access test execution results
import http from "k6/http";
import { check, sleep } from "k6";
export const options = {
vus: 10,
duration: "30s",
thresholds: {
http_req_duration: ["p(95)<500"],
http_req_failed: ["rate<0.1"],
},
};
export default function () {
const response = http.get("https://test.k6.io");
check(response, {
"status is 200": (r) => r.status === 200,
"response time < 500ms": (r) => r.timings.duration < 500,
});
sleep(1);
}

Build the project:
npm run build

Rebuild on changes during development:
npm run watch

- API Performance Testing: Test REST APIs under various load conditions
- Load Testing: Simulate multiple concurrent users
- Stress Testing: Find breaking points of your application
- Spike Testing: Test how your system handles sudden traffic spikes
- Endurance Testing: Verify system stability over extended periods
- Performance Regression Testing: Ensure new changes don't degrade performance
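For regression testing, one common approach is to check a k6 summary export (e.g., from the `--summary-export` flag) against a latency and error budget. The sketch below assumes a summary shape with `"p(95)"` and `value` fields; verify the exact field names against your k6 version's output before relying on it:

```javascript
// Illustrative regression gate over a k6 summary export.
// The summary object below is hardcoded sample data; the field names
// should be checked against your k6 version's actual --summary-export JSON.
const summary = {
  metrics: {
    http_req_duration: { "p(95)": 420.5 },
    http_req_failed: { value: 0.004 },
  },
};

function passesBudget(s) {
  // Budget: p95 latency under 500 ms and error rate under 1%.
  return (
    s.metrics.http_req_duration["p(95)"] < 500 &&
    s.metrics.http_req_failed.value < 0.01
  );
}

console.log(passesBudget(summary) ? "PASS" : "FAIL");
```

A check like this can fail a CI job when a change pushes latency or errors past the agreed budget.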
Ensure k6 is installed and available in your PATH:
k6 version

Ensure the k6 tests and results directories are writable:
chmod -R 755 k6-tests k6-results

This project is licensed under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request or open an issue for feature requests and bug reports.
We are committed to providing a welcoming and inclusive environment. Please adhere to our Code of Conduct in all interactions.
Zero tolerance for:
- Harassment or discriminatory language
- Trolling or insulting comments
- Spam or off-topic discussions
All contributors will be:
✅ Listed in CONTRIBUTORS.md (coming soon)
✅ Mentioned in release notes for significant contributions
✅ Given credit in documentation where applicable

New to open source? No problem! Look for issues tagged with good-first-issue or help-wanted. We provide mentorship and guidance to help you succeed.

Thank you for making test automation better for everyone! 🚀
If you have any questions:
💬 Open a GitHub Discussion
🐛 Report bugs via GitHub Issues
📧 Email: padmaraj.nidagundi at gmail.com
Response time: Typically 24-48 hours