Opening this issue to track a major UX and architectural upgrade we need for the platform. Currently, the app feels a bit too transactional—users paste a link, get a one-off perspective, and that's it. Plus, we're hardcoding API keys, which doesn't scale well for an open-source project.
I'm proposing a shift to an interactive "Reader" interface where users can chat with the article, verify facts with clear citations, and bring their own API keys (BYOK) to keep the project decentralized and free to host.
**Proposed Changes:**

* **Bring Your Own Keys (BYOK):** Route all model calls through user-provided `.env` keys. Users can switch between Gemini and Groq dynamically from the UI and specify a preferred model (e.g., `llama3-70b-8192` or `gemini-1.5-pro`).
* **Frontend Redesign:**
  * Overhaul the `/perspective` route to include a real-time chat interface.
  * Add proper loading skeletons/spinners for the initial analysis phase.
  * Add a model toggle to the chat input bar.
* **Interactive Chat & Memory:** Hook up LangGraph's `MemorySaver()` so the agent remembers the article's context. Users can ask follow-up questions instead of getting a single static report.
* **Citations & Fact-Checking:** The agent will return an `article_summary` and a list of `web_search_citations` (from the DuckDuckGo tool) alongside the fact checks. The frontend should display these citations clearly so users can verify the AI's work.
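To make the BYOK idea concrete, here's a rough sketch of how the backend could resolve a provider/model pair from `.env` keys. Everything here (`PROVIDERS`, `resolve_model_config`, the env var names) is hypothetical, not existing code:

```python
import os
from typing import Optional

# Hypothetical provider registry: maps a UI-selected provider to the
# environment variable holding its key and a sensible default model.
PROVIDERS = {
    "groq": {"env_key": "GROQ_API_KEY", "default_model": "llama3-70b-8192"},
    "gemini": {"env_key": "GOOGLE_API_KEY", "default_model": "gemini-1.5-pro"},
}

def resolve_model_config(provider: str, model: Optional[str] = None) -> dict:
    """Return the API key and model name to hand to the LangGraph pipeline."""
    if provider not in PROVIDERS:
        raise ValueError(f"Unsupported provider: {provider}")
    spec = PROVIDERS[provider]
    api_key = os.environ.get(spec["env_key"])
    if not api_key:
        # Fail loudly so users know which key to add to their .env
        raise RuntimeError(f"{spec['env_key']} is not set; add it to your .env")
    return {
        "provider": provider,
        "model": model or spec["default_model"],
        "api_key": api_key,
    }
```

The chat endpoint would call this with whatever the frontend's provider dropdown sends, then instantiate the matching LangChain chat model.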
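For the citations work, one possible shape for the agent's response payload. The field names `article_summary` and `web_search_citations` come from this issue; the `Citation` fields are illustrative and up for discussion:

```python
from typing import TypedDict

class Citation(TypedDict):
    # Illustrative fields for a DuckDuckGo search result
    title: str
    url: str
    snippet: str

class AgentResponse(TypedDict):
    article_summary: str
    fact_checks: list[str]
    web_search_citations: list[Citation]

# Example payload the frontend would render:
example: AgentResponse = {
    "article_summary": "Two-sentence neutral summary of the article.",
    "fact_checks": ["Claim X is supported by sources A and B."],
    "web_search_citations": [
        {
            "title": "Example source",
            "url": "https://example.com/source",
            "snippet": "Relevant excerpt the agent matched against.",
        }
    ],
}
```

Pinning the schema down early lets the frontend citation component be built in parallel with the backend changes.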
**Checklist:**

- [ ] Add `.env` parsing for dynamic BYOK routing in the backend.
- [ ] Update the LangGraph pipeline to use `MemorySaver` and return citations/summaries.
- [ ] Build the new React chat interface for `/perspective`.
- [ ] Add a provider dropdown (Groq/Gemini) to the frontend chat component.
- [ ] Clean up unused packages (NLTK, Google API clients, etc.).
Let me know if anyone has thoughts on the implementation details!
Related to: PR1, PR2