SLANG ships as both a CLI tool and an MCP server, so it plugs into every major AI environment without friction. No code needed to get started — write your workflow, your LLM runs it.
```shell
npm install -g @riktar/slang
```

This installs two binaries:
| Binary | Purpose |
|---|---|
| `slang` | CLI: `run`, `parse`, `check`, `prompt` |
| `slang-mcp` | MCP server over stdio |
All adapters can be configured through environment variables instead of passing flags each time:
| Variable | Description |
|---|---|
| `SLANG_ADAPTER` | `sampling` (default) \| `openai` \| `anthropic` \| `echo`. When running inside an MCP host (Claude Code, Claude Desktop), the default `sampling` delegates LLM calls to the host — no API key required. |
| `SLANG_API_KEY` | API key for the `openai`/`anthropic` adapters (falls back to `OPENAI_API_KEY` / `ANTHROPIC_API_KEY`). Not required for `sampling`. |
| `SLANG_MODEL` | Model override (e.g. `gpt-4o`, `claude-opus-4-5`) |
| `SLANG_BASE_URL` | Custom base URL for OpenAI-compatible endpoints |
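For example, a shell session pinned to the OpenAI adapter might look like the sketch below (the flow filename is a placeholder, and the key is assumed to already be in `OPENAI_API_KEY`):

```shell
# Set once per shell session; every subsequent slang invocation picks these up.
export SLANG_ADAPTER=openai
export SLANG_API_KEY="$OPENAI_API_KEY"   # or paste a key directly
export SLANG_MODEL=gpt-4o

slang run my-flow.slang   # placeholder filename
```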
Add SLANG as an MCP server so Claude Code can run, parse, and check .slang files directly.
```shell
claude mcp add slang -- npx --package @riktar/slang slang-mcp
```

No API key needed — SLANG defaults to the `sampling` adapter, which delegates LLM calls back to Claude through the MCP protocol using your existing Claude subscription.
If you prefer to use a separate API key (e.g. to charge to a different account):
```shell
claude mcp add slang -- env SLANG_ADAPTER=anthropic SLANG_API_KEY=sk-ant-... npx --package @riktar/slang slang-mcp
```

Verify it's registered:
```shell
claude mcp list
```

Claude Code will now have four tools: `run_flow`, `parse_flow`, `check_flow`, and `get_zero_setup_prompt`.
Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or
`%APPDATA%\Claude\claude_desktop_config.json` (Windows):
Zero-key setup (recommended) — uses your Claude subscription:
```json
{
  "mcpServers": {
    "slang": {
      "command": "slang-mcp"
    }
  }
}
```

SLANG defaults to the `sampling` adapter, which delegates all LLM calls back to Claude Desktop through the MCP protocol. No API key needed.
With a separate Anthropic key:
```json
{
  "mcpServers": {
    "slang": {
      "command": "slang-mcp",
      "env": {
        "SLANG_ADAPTER": "anthropic",
        "SLANG_API_KEY": "sk-ant-YOUR_KEY_HERE"
      }
    }
  }
}
```

With an OpenAI key:
```json
{
  "mcpServers": {
    "slang": {
      "command": "slang-mcp",
      "env": {
        "SLANG_ADAPTER": "openai",
        "SLANG_API_KEY": "sk-YOUR_OPENAI_KEY"
      }
    }
  }
}
```

The ChatGPT desktop app supports MCP servers through its settings:
- Open ChatGPT → Settings → Connected apps → Add MCP server
- Fill in:
  - Name: `slang`
  - Command: `slang-mcp`
- Click Save and reload the app.
SLANG will use the sampling adapter by default, which asks the ChatGPT host to run the LLM calls. If you want to force a specific model or use your own key, add environment variables:
```shell
SLANG_ADAPTER=openai
SLANG_API_KEY=sk-YOUR_KEY
```
Any host that accepts the standard MCP JSON config block. Minimal config — no API key needed thanks to the sampling default:
```json
{
  "mcpServers": {
    "slang": {
      "command": "slang-mcp"
    }
  }
}
```

With an explicit adapter and key:
```json
{
  "mcpServers": {
    "slang": {
      "command": "slang-mcp",
      "args": [],
      "env": {
        "SLANG_ADAPTER": "openai",
        "SLANG_API_KEY": "sk-..."
      }
    }
  }
}
```

For hosts that require an absolute path (e.g. some Docker setups):
```json
{
  "command": "node",
  "args": ["/usr/local/lib/node_modules/@riktar/slang/dist/mcp.js"]
}
```

Point SLANG at a local OpenAI-compatible endpoint:
```json
{
  "mcpServers": {
    "slang": {
      "command": "slang-mcp",
      "env": {
        "SLANG_ADAPTER": "openai",
        "SLANG_API_KEY": "ollama",
        "SLANG_BASE_URL": "http://localhost:11434/v1",
        "SLANG_MODEL": "llama3.2"
      }
    }
  }
}
```

No install required. Get the interpreter prompt and paste it into any LLM's system prompt:
```shell
slang prompt
```

Or via MCP, call the `get_zero_setup_prompt` tool.
Then paste the output as the system prompt in ChatGPT, Claude.ai, Gemini, etc. — the LLM becomes a SLANG interpreter and can execute any .slang flow pasted in the chat.
| Tool | Description |
|---|---|
| `run_flow` | Execute a SLANG flow; returns final state, agent outputs, and status |
| `parse_flow` | Parse source to AST JSON; validates syntax |
| `check_flow` | Dependency graph analysis and deadlock detection |
| `get_zero_setup_prompt` | Returns the zero-setup system prompt for paste-into-LLM use |
```shell
slang run <file.slang>     # execute a flow
slang parse <file.slang>   # dump AST as JSON
slang check <file.slang>   # dependency + deadlock report
slang prompt               # print zero-setup system prompt

# Adapter flags
--adapter openai|anthropic|echo   # (MCP mode defaults to 'sampling')
--api-key sk-...                  # not required with MCP sampling
--model gpt-4o
--base-url http://localhost:11434/v1
```
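Putting the flags together, a couple of invocations might look like this sketch — the flow filename is a placeholder, and the `echo` adapter is assumed (from its name) to be a keyless stand-in rather than a real LLM call:

```shell
# Smoke-test a flow with the echo adapter; no API key required.
slang run my-flow.slang --adapter echo

# Run against a real provider, reading the key from the environment.
slang run my-flow.slang --adapter openai --api-key "$OPENAI_API_KEY" --model gpt-4o
```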