fix: account for cached prompt tokens in OTEL spans #452
base: main
Changes from all commits: 7e585f6, b6cce3d, 8dd088e, 06f680e, c3bf896
```diff
@@ -251,20 +251,41 @@ def _enrich_response_genai_attrs(
     # Usage
     usage = response_data.get("usage", {})
     if usage:
-        attributes.update(
-            {
-                gen_ai_attributes.GEN_AI_USAGE_INPUT_TOKENS: usage.get(
-                    "prompt_tokens", 0
-                ),
-                gen_ai_attributes.GEN_AI_USAGE_OUTPUT_TOKENS: usage.get(
-                    "completion_tokens", 0
-                ),
-            }
-        )
+        attributes[gen_ai_attributes.GEN_AI_USAGE_INPUT_TOKENS] = usage.get(
+            "prompt_tokens", 0
+        )
+        attributes[gen_ai_attributes.GEN_AI_USAGE_OUTPUT_TOKENS] = usage.get(
+            "completion_tokens", 0
+        )
+
+        cached_input_tokens = _extract_cached_input_tokens(usage)
+        if cached_input_tokens is not None:
+            attributes[
+                gen_ai_attributes.GEN_AI_USAGE_CACHE_READ_INPUT_TOKENS
+            ] = cached_input_tokens
 
     set_available_attributes(span, attributes)
 
 
+def _extract_cached_input_tokens(usage: dict[str, Any]) -> int | None:
+    # The generated usage schema currently exposes both plural/singular
+    # prompt token details names, plus the legacy top-level cached token count.
+    # Prefer the nested cached_tokens value when present.
+    prompt_token_details = usage.get("prompt_tokens_details") or usage.get(
+        "prompt_token_details"
+    )
+    if isinstance(prompt_token_details, dict):
+        cached_tokens = prompt_token_details.get("cached_tokens")
+        if isinstance(cached_tokens, int):
+            return cached_tokens
+
+    num_cached_tokens = usage.get("num_cached_tokens")
+    if isinstance(num_cached_tokens, int):
+        return num_cached_tokens
+
+    return None
+
+
 def _enrich_create_agent(span: Span, response_data: dict[str, Any]) -> None:
     """Set agent-specific attributes from create_agent response.
```

Comment on lines +277 to +284:

Reviewer: How did you arbitrate the priority between the two (prompt token details and number of cached tokens)?

Author: I made the priority explicit in code in `_extract_cached_input_tokens`: the nested `cached_tokens` value from the prompt token details is preferred, and the legacy top-level `num_cached_tokens` is only a fallback.

Reviewer: OK, maybe the spec is not dry yet. Let's wait a bit on this PR; I will come back to it later.

Author: Understood. I am not making a follow-up code change from this comment. The current branch only records the cached input token count when one of those usage fields is present.
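To make the arbitration order discussed above concrete, here is a small self-contained sketch mirroring the helper's lookup logic. The payload values are hypothetical examples, not fixtures from this PR:

```python
# Self-contained mirror of the lookup order in _extract_cached_input_tokens.
# The payloads below are hypothetical examples, not fixtures from this repo.
from typing import Any


def extract_cached_input_tokens(usage: dict[str, Any]) -> int | None:
    details = usage.get("prompt_tokens_details") or usage.get("prompt_token_details")
    if isinstance(details, dict):
        cached = details.get("cached_tokens")
        if isinstance(cached, int):
            return cached
    num_cached = usage.get("num_cached_tokens")
    if isinstance(num_cached, int):
        return num_cached
    return None


# The nested value wins when both fields are present:
assert extract_cached_input_tokens(
    {"prompt_tokens_details": {"cached_tokens": 128}, "num_cached_tokens": 64}
) == 128
# The legacy top-level count is only a fallback:
assert extract_cached_input_tokens({"num_cached_tokens": 64}) == 64
# No cached-token field at all: None, so no span attribute is written:
assert extract_cached_input_tokens({"prompt_tokens": 10}) is None
```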
```diff
@@ -274,8 +295,7 @@ def _enrich_create_agent(span: Span, response_data: dict[str, Any]) -> None:
         gen_ai_attributes.GEN_AI_AGENT_DESCRIPTION: response_data.get("description"),
         gen_ai_attributes.GEN_AI_AGENT_ID: response_data.get("id"),
         gen_ai_attributes.GEN_AI_AGENT_NAME: response_data.get("name"),
-        # As of 2026-03-02: in convention, but not yet in opentelemetry-semantic-conventions
-        "gen_ai.agent.version": str(response_data.get("version")),
+        gen_ai_attributes.GEN_AI_AGENT_VERSION: str(response_data.get("version")),
         gen_ai_attributes.GEN_AI_REQUEST_MODEL: response_data.get("model"),
         gen_ai_attributes.GEN_AI_SYSTEM_INSTRUCTIONS: response_data.get("instructions"),
     }
```
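This second hunk swaps the hand-written "gen_ai.agent.version" key for the semconv constant. As a hedged side note: if a deployment pins an opentelemetry-semantic-conventions release that predates the constant, a `getattr` fallback could keep the attribute key stable. This is a sketch under assumptions, not code from this PR, and the import path is an assumption about where `gen_ai_attributes` comes from:

```python
# Hedged sketch, not code from this PR: fall back to the literal attribute
# key when the pinned semconv release does not yet export the constant.
# The import path below (the incubating attributes module) is an assumption.
from opentelemetry.semconv._incubating.attributes import gen_ai_attributes

GEN_AI_AGENT_VERSION = getattr(
    gen_ai_attributes, "GEN_AI_AGENT_VERSION", "gen_ai.agent.version"
)
```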