Commit 3a5b346

RafaelPo and claude authored
Add info-level logging to all MCP tools (#250)
* Add info-level logging to all MCP tools and upload proxy

  - Log entry params for browse_lists, use_list, progress, results,
    list_sessions, balance, and cancel (previously only logged on error)
  - Log success outcomes (result counts, artifact_ids, balance amounts)
  - Log upload_id + filename on presigned URL request
  - Log upload_id + size on proxy start, artifact_id on proxy completion
  - Log proxy error responses with status and body

* Use debug-level logging for progress polling, info only on terminal

  Avoids log noise from the tight polling loop (~every 3s). Only logs at
  INFO when the task reaches a terminal state (completed/failed/revoked).

* Log first and last progress poll, skip intermediate calls

  Uses a module-level set to track which task_ids are being polled. Logs at
  INFO on the first call ("polling started") and when the task reaches a
  terminal state. Cleans up the set on terminal so re-polling after a retry
  still logs correctly.

* Revert in-memory progress set, use debug+terminal INFO instead

  The in-memory set doesn't work with multiple replicas — different pods
  don't share state. Instead: log every poll at DEBUG level (invisible at
  default INFO), log at INFO only on terminal state. Task submission already
  logs when polling starts implicitly.

* Fix tuple unpacking bug in everyrow_use_list

  _fetch_task_result returns (df, session_id, artifact_id) but the call site
  only unpacked 2 values, causing ValueError at runtime every time
  everyrow_use_list was called.

* Add logging to everyrow_list_session_tasks

  New tool added on main was missing info-level logging.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
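The second and fourth commits in the squash settle on a stateless log-level split: every poll records at DEBUG, and only the terminal poll records at INFO. A minimal sketch of that pattern, assuming a TaskState-like object exposing is_terminal and a status enum (names mirror the tools.py diff below; the standalone helper itself is illustrative, not a function from the codebase):

    import logging

    logger = logging.getLogger(__name__)


    def log_poll(ts, task_id):
        # Every poll logs at DEBUG, which is invisible at the default INFO
        # level, so the ~3s polling loop produces no log noise.
        logger.debug("everyrow_progress: task_id=%s", task_id)
        # Only a terminal state (completed/failed/revoked) reaches INFO.
        # No in-memory set is involved, so every replica behaves identically.
        if ts.is_terminal:
            logger.info(
                "everyrow_progress: task_id=%s status=%s", task_id, ts.status.value
            )

Because nothing is cached in process memory, the pattern holds under horizontal scaling, which is exactly where the module-level set from the third commit broke down.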
1 parent 4e4ac8b commit 3a5b346

2 files changed

Lines changed: 72 additions & 3 deletions

File tree

everyrow-mcp/src/everyrow_mcp/tools.py

Lines changed: 33 additions & 1 deletion
@@ -137,6 +137,11 @@ async def everyrow_browse_lists(
     Call with no parameters to see all available lists, or use search/category
     to narrow results.
     """
+    logger.info(
+        "everyrow_browse_lists: search=%s category=%s",
+        params.search,
+        params.category,
+    )
     client = _get_client(ctx)
 
     try:
@@ -156,6 +161,7 @@ async def everyrow_browse_lists(
             )
         ]
 
+    logger.info("everyrow_browse_lists: found %d list(s)", len(results))
     lines = [f"Found {len(results)} built-in list(s):\n"]
     for i, item in enumerate(results, 1):
         fields_str = ", ".join(item.fields) if item.fields else "(no fields listed)"
@@ -193,6 +199,7 @@ async def everyrow_use_list(
 
     The copy is a fast database operation (<1s) — no polling needed.
     """
+    logger.info("everyrow_use_list: artifact_id=%s", params.artifact_id)
     client = _get_client(ctx)
 
     try:
@@ -204,13 +211,18 @@ async def everyrow_use_list(
         )
 
         # Fetch the copied data and save as CSV
-        df, _ = await _fetch_task_result(client, str(result.task_id))
+        df, _, _ = await _fetch_task_result(client, str(result.task_id))
 
         csv_path = Path.cwd() / f"built-in-list-{result.artifact_id}.csv"
         df.to_csv(csv_path, index=False)
     except Exception as e:
         return [TextContent(type="text", text=f"Error importing built-in list: {e!r}")]
 
+    logger.info(
+        "everyrow_use_list: imported artifact_id=%s rows=%d",
+        result.artifact_id,
+        len(df),
+    )
     return [
         TextContent(
             type="text",
@@ -993,6 +1005,7 @@ async def everyrow_progress(
     unless the task is completed or failed. The tool handles pacing internally.
     Do not add commentary between progress calls, just call again immediately.
     """
+    logger.debug("everyrow_progress: task_id=%s", params.task_id)
     client = _get_client(ctx)
     task_id = params.task_id
 
@@ -1033,6 +1046,9 @@ async def everyrow_progress(
     ts = TaskState(status_response)
     ts.write_file(task_id)
 
+    if ts.is_terminal:
+        logger.info("everyrow_progress: task_id=%s status=%s", task_id, ts.status.value)
+
     return [TextContent(type="text", text=ts.progress_message(task_id))]
 
 
@@ -1044,6 +1060,7 @@ async def everyrow_results_stdio(
     Only call this after everyrow_progress reports status 'completed'.
     Pass output_path (ending in .csv) to save results as a local CSV file.
     """
+    logger.info("everyrow_results (stdio): task_id=%s", params.task_id)
     client = _get_client(ctx)
     task_id = params.task_id
 
@@ -1092,6 +1109,12 @@ async def everyrow_results_http(
     controls how many rows _you_ can read.
     After results load, tell the user how many rows you can see vs the total.
     """
+    logger.info(
+        "everyrow_results (http): task_id=%s offset=%s page_size=%s",
+        params.task_id,
+        params.offset,
+        params.page_size,
+    )
     client = _get_client(ctx)
     task_id = params.task_id
     mcp_server_url = ctx.request_context.lifespan_context.mcp_server_url
@@ -1186,6 +1209,11 @@ async def everyrow_list_sessions(
     Use this to find past sessions or check what's been run.
     Results are paginated — 25 sessions per page by default.
     """
+    logger.info(
+        "everyrow_list_sessions: offset=%s limit=%s",
+        params.offset,
+        params.limit,
+    )
     log_client_info(ctx, "everyrow_list_sessions")
     client = _get_client(ctx)
 
@@ -1251,6 +1279,7 @@ async def everyrow_balance(ctx: EveryRowContext) -> list[TextContent]:
     Returns the account balance in dollars. Use this to verify available
     credits before submitting tasks.
     """
+    logger.info("everyrow_balance: called")
     client = _get_client(ctx)
 
     try:
@@ -1266,6 +1295,7 @@ async def everyrow_balance(ctx: EveryRowContext) -> list[TextContent]:
             )
         ]
 
+    logger.info("everyrow_balance: $%.2f", response.current_balance_dollars)
     return [
         TextContent(
             type="text",
@@ -1293,6 +1323,7 @@ async def everyrow_list_session_tasks(
     Use this to find task IDs for a session so you can display previous results
     with mcp__display__show_task(task_id, label).
     """
+    logger.info("everyrow_list_session_tasks: session_id=%s", params.session_id)
     client = _get_client(ctx)
 
     try:
@@ -1341,6 +1372,7 @@ async def everyrow_cancel(
     params: CancelInput, ctx: EveryRowContext
 ) -> list[TextContent]:
     """Cancel a running everyrow task. Use when the user wants to stop a task that is currently processing."""
+    logger.info("everyrow_cancel: task_id=%s", params.task_id)
     log_client_info(ctx, "everyrow_cancel")
     client = _get_client(ctx)
     task_id = params.task_id
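The -204,13 hunk above also carries the unpacking fix described in the squash message. A toy reproduction of the failure mode, with a hypothetical fetch() standing in for _fetch_task_result's (df, session_id, artifact_id) return:

    def fetch():
        # Stand-in for _fetch_task_result, which returns a 3-tuple.
        return ("df", "session-1", "artifact-1")


    try:
        df, _ = fetch()  # old call site: three values into two names
    except ValueError as exc:
        print(exc)  # too many values to unpack (expected 2)

    df, _, _ = fetch()  # the fix: unpack all three, discard the two IDs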

everyrow-mcp/src/everyrow_mcp/uploads.py

Lines changed: 39 additions & 2 deletions
@@ -119,12 +119,13 @@ async def request_upload_url(
 
     try:
         engine_upload_url = data["upload_url"]
+        upload_id = data["upload_id"]
         # Rewrite the URL to point at the MCP server instead of the Engine.
         # The Claude.ai sandbox can reach the MCP server but not api.everyrow.ai.
         upload_url = _rewrite_upload_url(engine_upload_url, mcp_server_url)
         result = {
             "upload_url": upload_url,
-            "upload_id": data["upload_id"],
+            "upload_id": upload_id,
             "expires_in": data["expires_in"],
             "max_size_bytes": data["max_size_bytes"],
             "curl_command": f'curl -X PUT -H "Content-Type: text/csv" -T {shlex.quote(params.filename)} {shlex.quote(upload_url)}',
@@ -138,6 +139,12 @@ async def request_upload_url(
             )
         ]
 
+    logger.info(
+        "Upload URL requested: upload_id=%s filename=%s expires_in=%s",
+        upload_id,
+        params.filename,
+        data.get("expires_in"),
+    )
     return [TextContent(type="text", text=json.dumps(result))]
 
 
@@ -205,19 +212,49 @@ async def proxy_upload(request: Request) -> Response:
         engine_url = f"{engine_url}?{request.url.query}"
 
     body = await request.body()
+    size_bytes = len(body)
     headers = {
         k: v
         for k, v in request.headers.items()
         if k.lower() in ("content-type", "content-length")
     }
 
+    logger.info(
+        "Upload proxy started: upload_id=%s size_bytes=%d",
+        upload_id,
+        size_bytes,
+    )
+
     try:
         async with httpx.AsyncClient(timeout=_PROXY_TIMEOUT) as http:
             resp = await http.put(engine_url, content=body, headers=headers)
     except httpx.HTTPError as exc:
-        logger.error("Upload proxy failed: %s", exc)
+        logger.error("Upload proxy failed: upload_id=%s error=%s", upload_id, exc)
         return JSONResponse({"detail": "Upload proxy error"}, status_code=502)
 
+    if resp.status_code >= 400:
+        logger.warning(
+            "Upload proxy error response: upload_id=%s status=%d body=%s",
+            upload_id,
+            resp.status_code,
+            resp.text[:200],
+        )
+    else:
+        # Parse artifact_id from Engine response for traceability
+        artifact_id = None
+        try:
+            resp_data = resp.json()
+            artifact_id = resp_data.get("artifact_id")
+        except Exception:
+            pass
+        logger.info(
+            "Upload proxy completed: upload_id=%s status=%d artifact_id=%s size_bytes=%d",
+            upload_id,
+            resp.status_code,
+            artifact_id,
+            size_bytes,
+        )
+
     return Response(
         content=resp.content,
         status_code=resp.status_code,
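If the per-poll DEBUG records are ever needed for troubleshooting, they can be surfaced through standard logging configuration rather than a code change; a sketch, assuming the logger names follow the module paths shown above:

    import logging

    # Show DEBUG everywhere (root logger) ...
    logging.basicConfig(level=logging.DEBUG)

    # ... or scope it to the MCP server's own modules.
    logging.getLogger("everyrow_mcp").setLevel(logging.DEBUG)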
