This document describes the five MCP prompts available in the Parseable MCP server.
MCP Prompts are pre-built, multi-step workflows that guide AI agents through common tasks. Instead of calling individual tools manually, prompts provide a structured approach to accomplishing complex operations.
When an AI agent receives a prompt, it gets detailed instructions on:
- Which tools to call
- In what order
- What to analyze
- What to report back
When using prompts, you can ask questions about your data in natural language:
- "Check the health of the otellogs stream"
- "Find errors in network_logstream from yesterday"
- "Compare otellogs and monitor_logstream"
- "Investigate the severity_text field"
- "Find anomalies in the last 24 hours"
Agents can use the following pre-built prompts to guide their analysis:
Error Analysis (`analyze-errors`)

Purpose: Find and analyze error logs in a data stream.
When to use:
- Investigating system errors or failures
- Understanding error patterns over time
- Troubleshooting production issues
Required Arguments:
- `streamName` - Name of the data stream to analyze
- `startTime` - Start time in ISO 8601 format (e.g., "2026-02-13T00:00:00Z")
- `endTime` - End time in ISO 8601 format
Optional Arguments:
- `errorField` - Specific field to check for errors (default: `body`)
What it does:
- Gets the stream schema to understand available fields
- Queries for errors using pattern matching (error, exception, failed)
- Analyzes error distribution over time
- Identifies common error patterns and messages
- Provides recommendations for investigation
Usage:
"Find errors in the otellogs stream from yesterday"
"Analyze errors in network_logstream from the last 24 hours"
Stream Health Check

Purpose: Perform a comprehensive health assessment of a data stream.
When to use:
- Regular stream monitoring
- Checking if data ingestion is working
- Verifying stream configuration
- Identifying potential issues
Required Arguments:
- `streamName` - Name of the data stream to check
What it does:
- Gets stream information (creation time, event timestamps)
- Gets stream statistics (ingestion rates, storage)
- Gets stream schema
- Analyzes:
- Stream activity status (active/inactive)
- Data freshness (time since last event)
- Ingestion rates (events per day/hour)
- Storage efficiency
- Reports warnings or concerns
- Provides optimization recommendations
Usage:
"Check the health of the otellogs stream"
"Is the network_logstream receiving data?"
Field Investigation (`investigate-field`)

Purpose: Deep dive into a specific field's values and patterns.
When to use:
- Understanding field distributions
- Data quality analysis
- Finding most common values
- Checking for nulls or empty values
Required Arguments:
- `streamName` - Name of the data stream
- `fieldName` - Name of the field to investigate
- `startTime` - Start time in ISO 8601 format
- `endTime` - End time in ISO 8601 format
What it does:
- Verifies the field exists in the schema
- Counts total occurrences and unique values
- Gets top 20 most common values with counts
- Checks for null/empty values
- Calculates percentages
- Provides data quality assessment
- Recommends filters or queries
Usage:
"Investigate the severity_text field in otellogs"
"Show me the distribution of the status field"
"What are the most common values for service_name?"
Stream Comparison (`compare-streams`)

Purpose: Compare metrics across multiple data streams.
When to use:
- Comparing production vs staging streams
- Understanding which streams are most active
- Capacity planning
- Identifying underutilized streams
Required Arguments:
- `stream1` - First data stream name
- `stream2` - Second data stream name
Optional Arguments:
- `stream3` - Optional third data stream name
What it does:
- Gets info and stats for each stream
- Compares:
- Total events (lifetime count)
- Storage usage
- Ingestion rates (events per day)
- Data freshness (last event time)
- Activity levels
- Creates comparison table
- Identifies which stream is most active
- Highlights concerning trends
- Provides recommendations
Usage:
"Compare otellogs and network_logstream"
"Which stream is more active: prod-logs or staging-logs?"
"Compare the top 3 streams by size"
Anomaly Detection (`find-anomalies`)

Purpose: Detect unusual patterns, spikes, or drops in event volumes.
When to use:
- Detecting system anomalies
- Finding traffic spikes
- Identifying data ingestion issues
- Monitoring for unusual activity
Required Arguments:
- `streamName` - Name of the data stream to analyze
- `startTime` - Start time in ISO 8601 format
- `endTime` - End time in ISO 8601 format
Optional Arguments:
- `groupBy` - Time grouping: `"hour"` or `"day"` (default: `"hour"`)
What it does:
- Queries event counts grouped by time (hour or day)
- Calculates average event count (baseline)
- Identifies spikes (significantly higher counts)
- Identifies drops (significantly lower counts or gaps)
- Investigates anomalous periods
- Provides timeline with highlighted anomalies
- Suggests potential causes
- Recommends follow-up actions
Usage:
"Find anomalies in otellogs from the last week"
"Are there any spikes in network_logstream today?"
"Detect unusual patterns in the last 24 hours"
When using MCP-compatible clients, you can reference prompts using natural language:
Health Checks:
- "Check the health of otellogs"
- "Is network_logstream working?"
- "Run a health check on all my streams"
Error Analysis:
- "Find errors in the last hour"
- "Analyze errors in otellogs from yesterday"
- "Show me all failed requests"
Field Investigation:
- "What are the common values for severity_text?"
- "Investigate the user_id field"
- "Show me the distribution of status codes"
Stream Comparison:
- "Compare otellogs and network_logstream"
- "Which stream has more data?"
- "Compare all production streams"
Anomaly Detection:
- "Find any unusual activity"
- "Are there spikes in the last 24 hours?"
- "Detect anomalies this week"
To test prompts, see TESTING.md for the complete testing guide using the stdio test script.
Time ranges:
- Start with shorter time ranges (1-24 hours) for faster results
- Expand to longer ranges if needed
- Use ISO 8601 format: `2026-02-14T00:00:00Z`
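For example, a "last 24 hours" range in the ISO 8601 / UTC form the prompts expect can be built like this:

```python
from datetime import datetime, timedelta, timezone

# Build startTime/endTime strings covering the last 24 hours in UTC.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)
start_time = start.strftime("%Y-%m-%dT%H:%M:%SZ")
end_time = end.strftime("%Y-%m-%dT%H:%M:%SZ")
```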
Field names:
- Use `get_data_stream_schema` first to see available fields
- Field names are case-sensitive, and fields with special characters like `.` or `-` must be quoted with backticks
- Common fields: `body`, `severity_text`, `p_timestamp`
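An illustrative helper for the quoting rule above (the identifier pattern is an assumption, not Parseable's exact grammar):

```python
import re

def quote_field(name: str) -> str:
    """Backtick-quote field names that contain special characters."""
    # Plain identifiers (letters, digits, underscore) need no quoting.
    if re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        return name
    return f"`{name}`"
```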
Stream names:
- Use exact stream names (case-sensitive)
- List streams with the `get_data_streams` tool if unsure
- Stream names in prompts must match exactly
Prompt-specific tips:
- `analyze-errors`: Limit to 100-1000 results
- `investigate-field`: Works best with shorter time ranges
- `find-anomalies`: Use `groupBy=day` for long time ranges
- `compare-streams`: Compare 2-3 streams at a time
"Stream not found"
- Verify the stream name with the `get_data_streams` tool
- Check spelling and case sensitivity
"No data returned"
- Verify time range contains data
- Check stream has events in that period
- Use `get_data_stream_info` to see first/last event times
"Field not found"
- Use `get_data_stream_schema` to see available fields
- Check field name spelling and case
Slow responses
- Reduce time range
- Use hourly grouping instead of minute-level queries
- For anomalies, use `groupBy=day` for long ranges
- Review TESTING.md to test prompts
- Configure Claude Desktop or another MCP client
- Try prompts with natural language
- Customize prompts in `prompts/prompts.go` for your needs
- Create new prompts for your specific use cases
Prompts are defined in `prompts/prompts.go`. You can:
- Modify SQL queries in existing prompts
- Change default values (e.g., error patterns, field names)
- Add new arguments
- Create entirely new prompts
Each prompt is a function that returns instructions for the AI agent. The agent then uses the MCP tools to execute those instructions.