Complete syntax reference, formal grammar, and annotated examples for
SLANG v0.7.5 — Super Language for Agent Negotiation & Governance.
- Quick Reference Card
- Lexical Elements
- Program Structure
- Agents
- Primitives
- Conditionals
- Variables & State
- Loops
- Flow Constraints
- Deliver
- Composition
- Expressions & Data Model
- Execution Model
- Testing & Assertions
- Formal Grammar (EBNF)
- Reserved Words
flow "name" { -- basic flow
flow "name" (p: "type", ...) { -- parametric flow (p resolves as value in agents)
import "file.slang" as alias -- embed sub-flow; alias = committed agent
agent Name {
role: "description" -- optional: natural language role
model: "model-name" -- optional: LLM model to use
tools: [tool1, tool2] -- optional: available tools
retry: 3 -- optional: max retry attempts on failure
let var = value -- declare local variable
set var = value -- update local variable
stake func(args) -> @Target -- produce & send
stake func(args) -- local execution (no recipient)
let var = stake func(args) -- execute & store result in variable
output: { key: "type" } -- optional: structured output contract
await binding <- @Source -- wait for input
commit [value] [if cond] -- accept & stop
escalate @Target [reason: ""] [if cond] -- delegate upward
when expr { -- conditional block
...operations...
} else { -- optional else branch
...operations...
}
repeat until expr { -- loop until condition is true
...operations...
}
}
converge when: condition -- when does the flow end?
budget: tokens(N), rounds(N) -- hard resource limits
deliver: handler(args) -- post-convergence side effect
expect expr -- test assertion (used with slang test)
}
| Recipient | Meaning |
|---|---|
| @AgentName | Send to a specific agent |
| @out | Send to flow output (collected as final result) |
| @all | Broadcast to every other agent in the flow |
| @Human | Escalate to a human operator — halts the flow |
| @any | Accept from any single agent |
| * | Wildcard — accept from anyone (await only) |
| Expression | Type | Description |
|---|---|---|
| @Agent.output | any | Last staked output from the agent |
| @Agent.committed | boolean | Whether the agent has committed |
| @Agent.status | string | idle, running, committed, escalated, blocked |
| Expression | Type | Description |
|---|---|---|
| committed_count | number | Number of agents that have committed |
| all_committed | boolean | True when every agent has committed |
| round | number | Current round number |
| tokens_used | number | Total tokens consumed (runtime only) |
Whitespace (spaces, tabs, newlines) is insignificant except as a token separator.
Comments start with -- and extend to end of line:
-- This is a full-line comment
agent Researcher { -- This is an inline comment
stake gather() -> @out
}

Identifiers name agents, variables, functions, and properties. They start with a letter or underscore, followed by letters, digits, or underscores:
Researcher my_agent step2 _internal
Strings are enclosed in double quotes:
"hello world" "AI agent frameworks 2026" "SWOT"
Numbers are integers or decimals, optionally negative:
42 3.14 0.8 -1 50000
true false
Agent references are identifiers prefixed with @:
@Researcher @Human @all @any @out
A SLANG program consists of one or more flow declarations:
flow "flow-name" {
...body...
}

Flows can declare typed parameters to become reusable functions:
flow "analysis" (topic: "string", depth: "number") {
agent Analyst {
stake analyze(topic, depth: depth) -> @out
commit
}
converge when: all_committed
}

Parameter type annotations ("string", "number", "boolean") are advisory only. Values are injected at runtime via RuntimeOptions.params.
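As a sketch, invoking the parametric flow above from the host program might look like this (assuming the runFlow API shown under Execution Model; the exact option shape may differ):

```typescript
// Hypothetical invocation sketch — `params` supplies values for the
// flow's declared parameters (topic, depth); `adapter` is any LLM adapter.
const state = await runFlow(source, {
  adapter,
  params: { topic: "AI agent frameworks", depth: 2 },
});
```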
A flow body may contain (in any order):
- import statements — include external flows
- agent declarations — define actors
- converge statement — define success condition
- budget statement — define resource limits
Minimal valid program:
flow "hello" {
agent Greeter {
stake greet("world") -> @out
commit
}
converge when: all_committed
}

An agent is a named actor with a sequence of operations:
agent Name {
...metadata...
...operations...
}

agent Researcher {
role: "Expert web researcher focused on primary sources"
model: "claude-sonnet"
tools: [web_search, code_exec]
stake gather(topic: "AI") -> @Analyst
}

| Meta | Syntax | Purpose |
|---|---|---|
| role | role: "string" | Natural language description — becomes part of the agent's system prompt |
| model | model: "string" | LLM model preference; the router adapter dispatches to the matching backend |
| tools | tools: [id, ...] | List of tools available to this agent |
model and multi-endpoint routing:

When using a router adapter, the model field determines which LLM backend handles this agent's calls. Different agents can use different providers and endpoints:
agent Researcher {
model: "claude-sonnet" -- routed to Anthropic
stake gather(topic) -> @Analyst
}
agent Analyst {
model: "gpt-4o" -- routed to OpenAI
await data <- @Researcher
stake analyze(data) -> @out
commit
}

SLANG has exactly three primitives. Everything else is syntactic sugar.
stake <function>(<args...>) [-> <recipients>] [if <cond>]
[output: { field: "type", ... }]

Executes a semantic function and delivers the result to one or more recipients. The function name is a semantic label — it tells the LLM what to do; it is not a reference to code.
The -> <recipients> part is optional. When omitted, the stake executes locally — the result is stored in the agent's output but not sent to any other agent or the flow. This is useful for intermediate computations within an agent.
The result of a local stake can be captured into a variable:
let result = stake func(args)
set result = stake func(args)

This also works with recipients — the result is both stored locally and sent:
let result = stake analyze(data) -> @Reviewer

The optional output: block declares a structured output contract. The runtime injects the schema into the LLM prompt, forcing the response to include a JSON object with the specified fields. Field types can be "string", "number", or "boolean".
Examples:
-- Basic: single recipient
stake gather(topic: "AI trends") -> @Analyst
-- Local execution (no recipient)
stake research(topic: "AI safety")
-- Capture result in variable
let article = stake write(topic: "AI trends")
-- Capture and send
let result = stake analyze(data) -> @out
-- Update variable with stake result
set draft = stake revise(draft, feedback)
-- Chain local stakes
let data = stake research(topic: "AI")
let summary = stake summarize(data)
stake publish(summary) -> @out
-- Multiple recipients
stake analyze(data) -> @Critic, @Logger
-- Broadcast to all agents
stake announce(result) -> @all
-- Output to flow (final result)
stake summarize(findings) -> @out
-- Positional and named arguments
stake validate(data, against: ["margin > 20%", "growth > 5%"]) -> @Analyst
-- With condition
stake retry(analysis) -> @Critic if feedback.rejected
-- With structured output contract
stake review(draft) -> @Decider
output: { approved: "boolean", score: "number", notes: "string" }

await <binding> <- <sources> [(<options>)]

Blocks until the specified source(s) produce a stake directed at this agent.
Binds the received data to a variable for later use.
Examples:
-- Single source
await data <- @Researcher
-- Multiple sources (wait for all)
await data <- @Researcher, @Scraper
-- Any source
await input <- @any
-- Wildcard
await signal <- *
-- With count option (wait for N deliveries)
await results <- @Workers (count: 3)

commit [<value>] [if <condition>]

Declares that a result is accepted — this agent is done. A committed agent executes no further operations.
Examples:
-- Unconditional commit
commit
-- Commit with value
commit result
-- Conditional commit
commit verdict if verdict.confidence > 0.8

escalate @<target> [reason: "<string>"] [if <condition>]

Declares that this agent cannot resolve the task and delegates to another agent.
Escalating to @Human halts the entire flow.
Examples:
-- Escalate to another agent
escalate @Arbiter
-- With reason
escalate @Human reason: "Conflicting data, need human judgment"
-- Conditional
escalate @Human reason: "Low confidence" if verdict.confidence <= 0.5

Any stake, commit, or escalate can have a trailing if condition:
commit result if result.score > 0.8
escalate @Human if result.score <= 0.8
stake retry(x) -> @Reviewer if feedback.rejected

Groups multiple operations under a condition:
when feedback.approved {
commit feedback
}
when feedback.rejected {
stake revise(draft, feedback.notes) -> @Reviewer
}

when blocks are not exclusive — both can execute if both conditions are true.
Use mutually exclusive conditions (e.g. .approved / .rejected) for if/else semantics, or use else / otherwise.
A when block can have an optional else (or otherwise) branch:
when feedback.approved {
commit feedback
} else {
stake revise(draft, feedback.notes) -> @Reviewer
}

otherwise is an alias for else:
when data.valid {
commit data
} otherwise {
escalate @Human reason: "invalid data"
}

The else block executes when the when condition is false. Without else, a false condition simply skips the block (backward compatible with v0.5).
let name = expression
let name = stake func(args) -- execute stake & store result

Declares a new agent-local variable. Variables are scoped to the agent that declares them and persist across rounds.
When used with stake, the LLM call is executed and the result is stored in the variable (see stake — Produce & Send).
agent Tracker {
let summary = "initial"
let attempts = 0
let ready = false
let data = stake research(topic: "AI")
stake process(data) -> @out
commit
}

set name = expression
set name = stake func(args) -- execute stake & update variable

Updates the value of a previously declared variable.
let msg = "hello"
set msg = "updated"
set msg = result.text
set msg = stake rewrite(msg) -- re-generate via LLM

Resolution order: Variables are checked before await bindings during expression evaluation.
Prompt injection: Variables are included in the agent's LLM system prompt as Agent variables: { name: value }.
repeat until condition {
...operations...
}

Executes the body repeatedly until condition evaluates to true.
agent Worker {
let done = false
repeat until done {
stake process(data) -> @Checker
await result <- @Checker
set done = result.approved
}
commit
}

The runtime enforces a safety limit of 100 iterations to prevent infinite loops.
Defines when the flow terminates successfully:
converge when: committed_count >= 1
converge when: all_committed
converge when: @Analyst.committed && @Validator.committed

Hard limits on resource consumption. When exhausted, the flow terminates with budget_exceeded:
budget: tokens(50000)
budget: rounds(5)
budget: tokens(50000), rounds(5)
budget: time(60s)
budget: tokens(50000), rounds(5), time(120s)

| Constraint | Meaning |
|---|---|
| tokens(N) | Max total tokens consumed across all LLM calls |
| rounds(N) | Max number of execution rounds |
| time(Ns) | Max wall-clock time in seconds |
If no budget is specified, the runtime defaults to rounds(10).
A deliver statement declares a handler that runs after the flow successfully converges. Multiple deliver statements are executed in declaration order.
deliver: handler_name(arg1: "value", arg2: "value")

deliver: <funcCall>
Where funcCall is the same syntax as a stake function call: name(args).
flow "report" {
agent Writer {
stake write(topic: "AI Safety") -> @out
commit
}
deliver: save_file(path: "report.md", format: "markdown")
deliver: webhook(url: "https://hooks.example.com/done")
converge when: all_committed
- Deliver handlers are provided at runtime via RuntimeOptions.deliverers (same pattern as tools)
- Each handler receives (1) the last flow output and (2) the named arguments
- If a handler name has no matching entry in deliverers, it is silently skipped
- Only runs on successful convergence — not on budget_exceeded, escalated, or deadlock
In addition to deliver, the runtime supports an onConverge callback (set via RuntimeOptions.onConverge) that fires after all deliver handlers complete. This is a programmatic hook, not part of the SLANG syntax.
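Following the pattern above, wiring up deliverers might look like this (a sketch assuming the runFlow API shown under Execution Model; the handler names match the earlier deliver example, and the exact signature may differ):

```typescript
// Hypothetical deliverer handlers — each receives the last flow output
// and the named arguments from its deliver statement.
const state = await runFlow(source, {
  adapter,
  deliverers: {
    save_file: async (output, args) => {
      await fs.writeFile(String(args.path), String(output));
    },
    webhook: async (output, args) => {
      await fetch(String(args.url), { method: "POST", body: String(output) });
    },
  },
  onConverge: async () => console.log("flow converged"), // fires after all deliverers
});
```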
Import another .slang file and run it as an embedded sub-flow. The imported flow executes before the parent flow's main loop. Its final output is exposed as a synthetic committed agent named by the alias — any parent agent can receive the result with await:
flow "full-report" {
import "research.slang" as research -- runs sub-flow to completion
agent Editor {
await findings <- @research -- receive sub-flow output via alias
stake edit(findings, format: "markdown") -> @out
commit
}
converge when: all_committed
budget: tokens(300000), rounds(20)
}How it works:
- Requires importLoader in RuntimeOptions — a callback that receives the path string and returns source code
- The sub-flow runs with the same adapter and tools as the parent
- The sub-flow's @out outputs (or committed agent outputs) become the alias agent's output
- If importLoader is not provided or throws, the import is silently skipped
- The alias is visible in state.agents as a committed agent
Combining import with parametric flows:
flow "pipeline" (topic: "string") {
import "gather.slang" as data
agent Writer {
await raw <- @data
stake write(raw, about: topic) -> @out -- topic from flow params
commit
}
converge when: all_committed
| Type | Examples | Notes |
|---|---|---|
| String | "hello", "multi word" | Double-quoted |
| Number | 42, 3.14, 0.8 | Integer or decimal |
| Boolean | true, false | |
| List | ["a", "b"], [web_search] | Comma-separated, brackets |
| Identifier | result, data | Reference to a bound variable |
| Agent ref | @Analyst, @Human | Reference to an agent |
Access properties on values or agent state:
result.confidence -- property of a bound variable
feedback.approved -- boolean property
@Analyst.output -- agent's last staked output
@Analyst.committed -- agent's committed status

| Operator | Meaning |
|---|---|
| `>` `>=` `<` `<=` | Numeric comparison |
| `==` `!=` | Equality |
| `&&` `\|\|` | Logical AND / OR |
| `contains` | String containment |
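These operators compose in any condition position — trailing if clauses, when blocks, and converge expressions alike. For example:

```
commit verdict if verdict.score >= 0.8 && verdict.status == "final"
when @Reviewer.output contains "approved" {
  commit
}
converge when: @Analyst.committed || round > 3
```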
stake gather("AI trends") -- positional
stake gather(topic: "AI trends") -- named
stake validate(data, against: ["rule1", "rule2"]) -- mixed

- Parse all agents and their operations
- Build a dependency graph from stake -> @Target and await <- @Source
- Agents whose first operation is stake (no preceding await) → ready
- Agents whose first operation is await → blocked
- Execute ready agents, collect outputs, satisfy awaits, repeat
Within each round, independent agents execute in parallel. Two agents are independent when their current operations don't have data dependencies on each other:
- All agents whose current operation is stake run concurrently via Promise.all
- await, commit, escalate, and when operations are executed sequentially (they are state-dependent)
This means a flow with three independent researchers will fire all three LLM calls simultaneously, not sequentially.
To disable parallelism (e.g. for debugging), pass parallel: false to RuntimeOptions.
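For example, in this fan-out sketch both Alpha and Beta start with a stake, so their LLM calls fire in the same round; Merger unblocks once both deliveries arrive:

```
flow "fan-out" {
  agent Alpha {
    stake research(angle: "technical") -> @Merger
    commit
  }
  agent Beta {
    stake research(angle: "market") -> @Merger
    commit
  }
  agent Merger {
    await findings <- @Alpha, @Beta   -- waits for both parallel stakes
    stake merge(findings) -> @out
    commit
  }
  converge when: all_committed
  budget: rounds(3)
}
```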
The router adapter dispatches LLM calls to different backends based on the agent's model field:
const router = createRouterAdapter({
routes: [
{ pattern: "claude-*", adapter: anthropicAdapter },
{ pattern: "gpt-*", adapter: openaiAdapter },
{ pattern: "local/*", adapter: ollamaAdapter },
],
fallback: openRouterAdapter, // OpenRouter as fallback for 300+ models
});

With this configuration, model: "claude-sonnet" routes to Anthropic, model: "gpt-4o" routes to OpenAI, model: "local/llama3" routes to a local Ollama instance, and unmatched models fall back to OpenRouter — all within the same flow.
Available adapters: MCP Sampling, OpenAI, Anthropic, OpenRouter, Echo, Router.
Zero-Setup Mode: An LLM reads the flow and executes it turn-by-turn in a single conversation, simulating each agent in sequence. No runtime needed.
Thin Runtime Mode: A scheduler program parses the flow, maintains state, and dispatches each agent as a separate LLM call. Supports real tools, parallel execution, and different models per agent.
A flow terminates when:
- The converge condition is met → converged
- The budget is exhausted → budget_exceeded
- An escalate @Human is reached → escalated
- A deadlock is detected (no agent can proceed) → deadlock
A round is one full pass through all currently executable agents. Within a round, independent agents run in parallel. The budget: rounds(N) constraint limits how many full passes occur.
The runtime can checkpoint the FlowState after each round, enabling crash recovery and persistence:
const state = await runFlow(source, {
adapter,
checkpoint: async (snapshot) => {
await fs.writeFile('cp.json', serializeFlowState(snapshot));
},
});

To resume a previously interrupted flow:
const saved = deserializeFlowState(await fs.readFile('cp.json', 'utf8'));
const state = await runFlow(source, { adapter, resumeFrom: saved });

The serializeFlowState / deserializeFlowState helpers handle Map serialization. The runtime emits checkpoint events.
When an agent declares tools: [web_search] and the runtime provides matching tool handlers, the tools become functional.
Tool handlers can be provided in two ways:
API — pass a tools record to runFlow():
const state = await runFlow(source, {
adapter,
tools: {
web_search: async (args) => {
return await fetchResults(args.query as string);
},
},
});

CLI — pass a JS/TS file with --tools:
slang run research.slang --adapter openrouter --tools tools.js

The file must default-export an object { name: handler }. See examples/tools.js for a template.
During a stake operation, the LLM can invoke tools by including TOOL_CALL: tool_name({"arg": "value"}) in its response. The runtime:
- Detects the tool call pattern
- Executes the matching handler
- Appends the result to the conversation
- Re-calls the LLM with the tool result
- Repeats until no more tool calls (max 10 per stake)
Only tools declared in the agent's tools: metadata and provided in the runtime options are available. The runtime emits tool_call and tool_result events.
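Putting these pieces together, a minimal tool-using flow might look like the sketch below (assuming a web_search handler is supplied via RuntimeOptions.tools or --tools; the semantic function name find_sources is illustrative):

```
flow "tooled-research" {
  agent Researcher {
    tools: [web_search]  -- must also have a matching runtime handler
    stake find_sources(topic: "AI incidents 2026") -> @out
    commit
  }
  converge when: all_committed
  budget: rounds(2)
}
```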
The expect statement declares an assertion that is evaluated after flow execution. It is a flow-level item (sibling of agent, converge, budget).
expect @Agent.output contains "expected text"
expect @Agent.committed == true
expect @Agent.status == "committed"

The contains keyword is a binary operator that tests whether the left operand (converted to string) includes the right operand:
expect @Writer.output contains "conclusion"

contains can also be used in when blocks:
when @Reviewer.output contains "approved" {
commit
}

Flows with expect statements are run using slang test:
slang test my-flow.slang
slang test my-flow.slang --mock "Agent1:response1,Agent2:response2"A mock adapter is used by default during testing to provide deterministic, canned responses per agent. This enables reproducible assertions without calling a real LLM.
flow "greeting-test" {
agent Greeter {
stake greet("world") -> @out
commit
}
expect @Greeter.committed == true
expect @Greeter.output contains "hello"
converge when: all_committed
budget: rounds(1)
}

(* Whitespace and comments *)
WHITESPACE = { " " | "\t" | "\r" | "\n" } ;
COMMENT = "--" { ANY_CHAR - "\n" } "\n" ;
(* Identifiers and literals *)
IDENT = LETTER { LETTER | DIGIT | "_" } ;
STRING = '"' { ANY_CHAR - '"' } '"' ;
NUMBER = [ "-" ] DIGIT { DIGIT } [ "." DIGIT { DIGIT } ] ;
BOOLEAN = "true" | "false" ;
AGENT_REF = "@" ( IDENT | "all" | "any" | "out" | "Human" ) ;
LETTER = "a"-"z" | "A"-"Z" | "_" ;
DIGIT = "0"-"9" ;

(* Top-level *)
program = { flow_decl } ;
flow_decl = "flow" STRING [ flow_params ] "{" flow_body "}" ;
(* Parametric flow parameters *)
flow_params = "(" flow_param { "," flow_param } ")" ;
flow_param = IDENT ":" STRING ;
flow_body = { import_stmt | agent_decl | converge_stmt | budget_stmt | deliver_stmt | expect_stmt } ;
(* Import: sub-flow composition *)
import_stmt = "import" STRING "as" IDENT ;
(* Deliver *)
deliver_stmt = "deliver" ":" func_call ;
(* Testing *)
expect_stmt = "expect" expression ;
(* Agent *)
agent_decl = "agent" IDENT "{" agent_body "}" ;
agent_body = { agent_meta | operation } ;
agent_meta = role_decl | model_decl | tools_decl | retry_decl ;
role_decl = "role" ":" STRING ;
model_decl = "model" ":" STRING ;
tools_decl = "tools" ":" list_literal ;
retry_decl = "retry" ":" NUMBER ;
(* Operations *)
operation = stake_op | await_op | commit_op | escalate_op | when_block
| let_op | set_op | repeat_block ;
stake_op = [ ( "let" | "set" ) IDENT "=" ] "stake" func_call [ "->" recipient_list ] [ condition ] [ output_schema ] ;
output_schema = "output" ":" "{" output_field { "," output_field } "}" ;
output_field = IDENT ":" STRING ;
await_op = "await" IDENT "<-" source_list [ "(" await_opts ")" ] ;
commit_op = "commit" [ expression ] [ condition ] ;
escalate_op = "escalate" AGENT_REF [ "reason" ":" STRING ] [ condition ] ;
when_block = "when" expression "{" { operation } "}" [ else_block ] ;
else_block = ( "else" | "otherwise" ) "{" { operation } "}" ;
let_op = "let" IDENT "=" expression ; (* see also stake_op for let = stake *)
set_op = "set" IDENT "=" expression ; (* see also stake_op for set = stake *)
repeat_block = "repeat" "until" expression "{" { operation } "}" ;
(* Function calls *)
func_call = IDENT "(" [ arg_list ] ")" ;
arg_list = argument { "," argument } ;
argument = [ IDENT ":" ] expression ;
(* Recipients and sources *)
recipient_list = recipient { "," recipient } ;
recipient = AGENT_REF ;
source_list = source { "," source } ;
source = AGENT_REF | "*" ;
await_opts = await_opt { "," await_opt } ;
await_opt = IDENT ":" expression ;
(* Conditions *)
condition = "if" expression ;
(* Flow constraints *)
converge_stmt = "converge" "when" ":" expression ;
budget_stmt = "budget" ":" budget_item { "," budget_item } ;
budget_item = ( "tokens" | "rounds" | "time" ) "(" expression ")" ;
(* Expressions *)
expression = comparison ;
comparison = containment [ comp_op containment ] ;
comp_op = ">" | ">=" | "<" | "<=" | "==" | "!=" | "&&" | "||" ;
containment = access [ "contains" access ] ;
access = primary { "." IDENT } ;
primary = NUMBER
| STRING
| BOOLEAN
| IDENT
| AGENT_REF
| list_literal
| "(" expression ")"
;
list_literal = "[" [ expression { "," expression } ] "]" ;

flow, agent, stake, await, commit, escalate, import, as,
when, if, else, otherwise, converge, budget, role, model, tools,
tokens, rounds, time, count, reason, retry, output, deliver,
let, set, repeat, until, expect, contains,
true, false,
@out, @all, @any, @Human