Agentic RAG harness for long documents with tree- and graph-based reasoning. Cited answers down to the pixel.
Tree-based, vectorless document RAG framework. Connect any LLM via URL/API key.
AI-first manual checklist builder using PageIndex-style vectorless retrieval + local Gemma4 to generate grounded maintenance checklists with strict citations.
Implements a vectorless RAG architecture using PageIndex APIs and Groq LLMs, enabling efficient document retrieval and response generation without traditional vector databases.
Vectorless RAG using reasoning over hierarchical document structure instead of embeddings or vector databases.
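A minimal sketch of what "reasoning over hierarchical document structure" can look like: instead of embedding chunks, the system walks a table-of-contents tree and descends into the most relevant node until it reaches a leaf section. The LLM's relevance judgment is stubbed out here with keyword overlap; the toy tree and all function names are illustrative assumptions, not any specific framework's API.

```python
def relevance(query, summary):
    # Stand-in for an LLM relevance judgment: count shared words.
    q = set(query.lower().split())
    return len(q & set(summary.lower().split()))

def tree_retrieve(node, query):
    # node = {"title": ..., "text": ..., "children": [...]}
    # Descend into the child whose title best matches the query,
    # until a leaf (a node with no children) is reached.
    if not node["children"]:
        return node
    best = max(node["children"], key=lambda c: relevance(query, c["title"]))
    return tree_retrieve(best, query)

doc_tree = {
    "title": "Annual Report",
    "children": [
        {"title": "Risk Factors", "text": "...", "children": []},
        {"title": "Financial Statements and Revenue", "text": "...", "children": []},
    ],
}

print(tree_retrieve(doc_tree, "revenue figures")["title"])
# prints "Financial Statements and Revenue"
```

In a real system the keyword stub would be replaced by an LLM call that reads each child's title or summary and picks the branch to follow, which is what lets these frameworks skip embeddings entirely.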
A production-grade, LangGraph-orchestrated fraud detection system built for regulated financial environments. Combines ML risk scoring, LLM-powered document forensics, and a Human-in-the-Loop compliance workflow — end-to-end.
Reasoning-based, vectorless RAG over a large document using a hierarchical tree (PageIndex) and a Vision-Language Model (Llama 4 Scout), no embeddings, no vector store, no text chunking.
Vectorless RAG for SEC 10-K filings using PageIndex — tree-based reasoning retrieval with Claude, no vector DB, no embeddings, no chunking
An enterprise-grade, hybrid Retrieval-Augmented Generation (RAG) pipeline that completely bypasses traditional vector databases.
A scalable, agentic document intelligence system inspired by PageIndex, designed to process long documents and enable reasoning-driven retrieval instead of vector similarity search.
A vectorless RAG that runs fully locally with a web UI; no internet connection needed.
A retrieval-augmented generation (RAG) system for querying ML/AI research papers using BM25 sparse retrieval — no vector embeddings or external APIs required. Users ask natural language questions and receive grounded answers with citations to the source papers.
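BM25 sparse retrieval, as used above, scores documents by term overlap weighted by term rarity and document length, with no embeddings involved. A self-contained sketch of the classic Okapi BM25 formula (the corpus and parameter defaults are illustrative assumptions):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against a tokenized query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: how many docs contain each term.
    df = Counter()
    for d in docs:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            # Rare terms get a higher inverse-document-frequency weight.
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            # Term frequency is saturated by k1 and normalized by doc length via b.
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores

docs = [
    "sparse retrieval ranks documents by term overlap".split(),
    "vector embeddings map text to dense space".split(),
]
print(bm25_scores("sparse retrieval".split(), docs))
```

The first document shares both query terms and scores highest; the second shares none and scores zero, which is the grounding signal such systems cite from.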
Serverless Vectorless RAG on AWS — upload documents, ask questions, get grounded answers using LLM reasoning instead of embeddings or vector databases. Built with Amazon Bedrock (Claude 3 Haiku), Lambda, DynamoDB, API Gateway, React, and Terraform.
🎙️ Vaani RAG — Multi-Language Voice Agent
Vectorless semantic indexing SDK that converts large text into searchable knowledge trees for fast, structured retrieval.
Helps everyday users avoid the long pages and tedious searching of the Indian tax rule book. This model lets anyone retrieve and cross-check information at their fingertips just by asking a question.
⚡ The Agent-Native Retrieval Engine — Hybrid Vector + Reasoning + Memory for AI Agents. HNSW indexing, tree-based reasoning retrieval, multi-agent orchestration, MCP server, and built-in RAG.