
fix(components): normalize ConditionAgent response content before parsing#6072

Open
jslim-23 wants to merge 2 commits into FlowiseAI:main from jslim-23:bugfix/fix-conditionagent-response-content-normalization

Conversation

@jslim-23

Summary

ConditionAgent currently assumes response.content is always a string when parsing the model output. In LangChain v1, response.content may also be returned as (ContentBlock | ContentBlock.Text)[], which can cause JSON parsing to fail in ConditionAgent.

This change normalizes the response content before parsing by reusing the existing extractResponseContent() helper, keeping response handling consistent with other parts of the codebase.

The parse error path is also updated to report the same normalized content instead of casting response.content to string again in the catch block.

Changes

Use extractResponseContent(response) before calling parseJsonMarkdown() in packages/components/nodes/agentflow/ConditionAgent/ConditionAgent.ts

Reuse the normalized response content in analytics, parsing, output content, and parse error reporting
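For illustration, here is a minimal sketch of what a normalizer like extractResponseContent might do. The type shapes and the function name extractText are assumptions for this example; the actual helper in packages/components may differ:

```typescript
// Hypothetical sketch of content normalization; the real extractResponseContent
// helper in the codebase may handle more block types and edge cases.
type TextBlock = { type: 'text'; text: string }
type ContentBlock = TextBlock | { type: string; [key: string]: unknown }

function extractText(content: string | ContentBlock[]): string {
    // LangChain v1 chat models may return a plain string or an array of content blocks.
    if (typeof content === 'string') return content
    return content
        .filter((block): block is TextBlock => block.type === 'text')
        .map((block) => block.text)
        .join('')
}
```

With a normalizer like this in place, the same string can be passed to analytics, parseJsonMarkdown(), and the error path, regardless of which content shape the model returned.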

How to reproduce the bug

  1. Create an Agentflow with a ConditionAgent node
  2. Use a LangChain v1-compatible chat model that returns response.content as text content blocks
  3. Run the flow
  4. ConditionAgent may fail while parsing the model output as JSON
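The failure in step 4 can be reproduced in isolation. The content shape below is an assumed LangChain v1-style block array, not taken from the actual codebase:

```typescript
// Hypothetical content-block response, mimicking a LangChain v1 chat model.
const blockContent = [{ type: 'text', text: '{"next": "agent_a"}' }]

// Interpolating the array as a string yields "[object Object]", not JSON,
// so parsing the cast value throws instead of returning the payload.
let parseFailed = false
try {
    JSON.parse(`${blockContent}`)
} catch {
    parseFailed = true
}
```

This mirrors what happens when response.content is cast to string and handed to the JSON parser without normalization.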


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the ConditionAgent to extract the LLM response content into a single variable, responseContent, which is then reused for analytics, JSON parsing, and the final output. This change improves consistency and reduces redundant processing. A suggestion was provided to truncate the raw response content in error logs to enhance readability when parsing fails.

```typescript
// Before: casts response.content to string in the catch block
`Failed to parse a valid scenario from the LLM's response. Please check if the model is capable of following JSON output instructions. Raw LLM Response: "${
    response.content as string
}"`

// After: reuses the normalized content
`Failed to parse a valid scenario from the LLM's response. Please check if the model is capable of following JSON output instructions. Raw LLM Response: "${responseContent}"`
```

Severity: medium

For better readability in error logs, it's a good practice to truncate long raw responses. The responseContent could be quite large, making the error message difficult to read. Consider truncating it if it exceeds a certain length, similar to how it's done in parseJsonBody.

Suggested change

```diff
-`Failed to parse a valid scenario from the LLM's response. Please check if the model is capable of following JSON output instructions. Raw LLM Response: "${responseContent}"`
+"Failed to parse a valid scenario from the LLM's response. Please check if the model is capable of following JSON output instructions. Raw LLM Response: \"" + responseContent.substring(0, 200) + (responseContent.length > 200 ? "..." : "") + "\""
```
References
  1. Prioritize code readability and understandability over conciseness. Truncating large blobs in logs improves the readability and utility of error messages for debugging.
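The truncation pattern from the suggested change can be factored into a small helper. This is a sketch only; the helper name is illustrative, and the 200-character cap mirrors the limit used in the suggestion:

```typescript
// Illustrative log-truncation helper; name and default limit are assumptions.
function truncateForLog(text: string, max = 200): string {
    // Append an ellipsis only when the input actually exceeds the cap.
    return text.length > max ? text.substring(0, max) + '...' : text
}
```

Extracting the logic keeps the error-message template readable and makes the cap easy to reuse wherever raw model output is logged.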
