Fix SSE stream corruption by buffering raw bytes #17
Previously, the AI provider streams (OpenAI, Anthropic, Gemini) converted individual network chunks directly to strings using `String::from_utf8_lossy`. This caused silent corruption when multi-byte UTF-8 characters or JSON lines were split across network boundaries, dropping text or causing deserialization failures. This commit fixes the issue by buffering raw `Vec<u8>` bytes and splitting on newlines (`\n`) before converting to UTF-8 and processing, ensuring payloads are always complete and intact.

Co-authored-by: insign <1113045+insign@users.noreply.github.com>
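A minimal sketch of the buffering approach the description outlines, assuming a `raw_buf: Vec<u8>` accumulator and a per-line callback; `feed_chunk` and `process_line` are illustrative names, not the PR's actual API:

```rust
/// Hypothetical helper: accumulate raw bytes and emit only complete lines.
fn feed_chunk(raw_buf: &mut Vec<u8>, chunk: &[u8], mut process_line: impl FnMut(&str)) {
    raw_buf.extend_from_slice(chunk);
    // Split only on complete lines; partial bytes stay buffered until the
    // next chunk arrives, so a multi-byte UTF-8 character or a JSON record
    // split across network reads is never decoded in halves.
    while let Some(newline_pos) = raw_buf.iter().position(|&b| b == b'\n') {
        let line: Vec<u8> = raw_buf.drain(..=newline_pos).collect();
        let text = String::from_utf8_lossy(&line);
        let text = text.trim_end();
        if !text.is_empty() {
            process_line(text);
        }
    }
}
```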
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: `bf91d5e4dd`
```diff
-let text = String::from_utf8_lossy(&chunk);
+raw_buf.extend_from_slice(&chunk);
+
+while let Some(newline_pos) = raw_buf.iter().position(|&b| b == b'\n') {
```
Flush trailing OpenAI SSE bytes at EOF
The new parser in `stream_with_options` only processes frames when `raw_buf` contains a `\n`, so a final `data: {...}` line without a trailing newline is dropped when the HTTP stream closes. This can happen with EOF-terminated last lines (or intermediaries that trim the final newline), causing the last streamed tokens to be silently lost. The previous `text.lines()` behavior would still yield an unterminated final line, so this is a regression introduced by the newline-gated loop.
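A hedged sketch of the kind of EOF drain this comment asks for, reusing the shape of the hypothetical `feed_chunk` helper above; `flush_remaining` is an illustrative name, not code from this PR:

```rust
/// Hypothetical helper: once the HTTP stream closes, parse whatever is
/// still buffered so an unterminated final `data:` line is not dropped.
fn flush_remaining(raw_buf: &mut Vec<u8>, mut process_line: impl FnMut(&str)) {
    if raw_buf.is_empty() {
        return;
    }
    let rest = std::mem::take(raw_buf);
    let text = String::from_utf8_lossy(&rest);
    let text = text.trim_end();
    if !text.is_empty() {
        process_line(text);
    }
}
```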
```diff
-let text = String::from_utf8_lossy(&chunk);
+raw_buf.extend_from_slice(&chunk);
+
+while let Some(newline_pos) = raw_buf.iter().position(|&b| b == b'\n') {
```
Flush trailing Anthropic SSE bytes at EOF
Like the OpenAI path, this loop now parses only newline-terminated records and never drains `raw_buf` after the stream ends. If the last Anthropic `data:` event arrives without a terminating newline, it remains buffered and is never deserialized, which drops the final text chunk. This behavior is newly introduced by the raw-byte buffering change and can truncate outputs in EOF-terminated streams.
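To make both failure modes concrete, here is a test sketch against the hypothetical helpers above (a multi-byte character split across chunks, and a final line with no trailing newline):

```rust
#[test]
fn handles_split_utf8_and_missing_final_newline() {
    let mut buf = Vec::new();
    let mut lines: Vec<String> = Vec::new();

    // "é" is two bytes (0xC3 0xA9); split the payload inside that character.
    let payload = "data: café\n".as_bytes();
    let (a, b) = payload.split_at(10); // cut lands between 0xC3 and 0xA9
    feed_chunk(&mut buf, a, |l| lines.push(l.to_string()));
    feed_chunk(&mut buf, b, |l| lines.push(l.to_string()));
    assert_eq!(lines, vec!["data: café"]);

    // A final line with no trailing newline must be flushed at EOF.
    feed_chunk(&mut buf, b"data: [DONE]", |l| lines.push(l.to_string()));
    flush_remaining(&mut buf, |l| lines.push(l.to_string()));
    assert_eq!(lines.last().map(String::as_str), Some("data: [DONE]"));
}
```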
PR created automatically by Jules for task 14698576312227409449 started by @insign