The first platform that connects computational neuroscience to social simulation for pre-launch advertising analysis.
Demo Video · How It Works · Quick Start · Architecture · Research
The global advertising industry spends $1 trillion annually. The neuromarketing industry — the science of understanding how brains actually respond to ads — gets $1.8 billion. That's 0.2%.
Why? A single fMRI study costs $50-200K, takes 6 months, and scans 20-30 people. By the time results arrive, the campaign has already launched.
Adneural changes that.
Adneural is a four-engine platform that predicts how an advertisement will perform — neurologically and socially — before launch.
| Engine | What It Does | Technology |
|---|---|---|
| Brain Encoding | Predicts cortical brain response to any video/audio/image | Meta TRIBE v2 (trained on 450h of fMRI data) |
| Video Summarizer | Generates scene-by-scene description of video content | LLM with frame extraction (vision-capable) |
| Social Simulation | Models how 200 personality-diverse AI agents react and interact | MiroFish/OASIS multi-agent framework |
| Diagnostic Agents | Analyze brain + social data + video context, produce actionable reports | LangGraph + Neo4j GraphRAG over 22 papers |
The novel contribution: the bridge between brain encoding and social simulation. We translate predicted cortical activation into personality-modulated emotional states that seed each agent's behavior. No one has connected these systems before.
Case Study: Jaguar "Copy Nothing" Rebrand
We ran the Jaguar rebrand ad through Adneural without knowing the real-world outcome. Our platform predicted:
| Adneural Predicted | What Actually Happened |
|---|---|
| Weak emotional tagging, low memorability | Positive sentiment collapsed from 23% to 8% |
| No narrative processing (pure visual stimulus) | "Where are the cars?" became the meme |
| Narrative vacuum vulnerable to hostile reframing | The void where a car should have been got filled with ridicule |
| Minimal trust formation | No brand equity built. CEO resigned. |
Launch Recommendation: NO GO
Video Upload +-----------------------------+
| | AI Video Summarizer |
| | (scene-by-scene description)|
TRIBE v2 (brain encoding) | Added to every step below |
| +-----------------------------+
148 cortical regions,
second by second
|
+------------------+
| |
+----v-----------+ +--v-------------------+
| Seeding Bridge | | Neuro-Translator |
| | | |
| 10 networks | | Neo4j GraphRAG |
| -> 9D emotion | | 29 papers |
| -> Big Five | | 2,684 findings |
| modulation | | 9,921 relationships |
+-------+--------+ +---------+------------+
| |
+-------v--------+ |
| MiroFish/OASIS | |
| 200 AI agents | |
| Simulated | |
| Reddit | |
+-------+--------+ |
| |
+-------v-----------+ |
| Social Analyst <-----------+
| Cascade detection |
| Sentiment analysis|
+-------+-----------+
|
v
+-----------------------------------+
| Strategist Agent |
| Merges neural + social findings |
| Computes risk scores |
| Generates multi-audience reports |
+-----------------+-----------------+
|
+-----------v-------------+
| Guardrail Check |
| (no manipulation) |
+-----------+-------------+
|
+------+-------+-------+------------+
v v v v
Executive Marketing Compliance Full Research
Report Report Report Report
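The diagram above reads as a mostly linear pipeline. A minimal sketch with stand-in stage functions follows; every function name and payload key here is invented for illustration (the real repository wires these stages together with LangGraph), and the video summarizer that feeds each stage is folded into comments for brevity.

```python
# Hypothetical sketch of the Adneural pipeline stages. All names and payload
# keys are invented stand-ins, not the repository's actual API.

def encode_brain(video: str) -> dict:
    # TRIBE v2 stand-in: per-second activation for 148 cortical regions
    return {"regions": 148, "activation": [0.1] * 148}

def seed_emotions(brain: dict) -> dict:
    # Seeding Bridge stand-in: aggregate networks into a 9-D emotion vector
    return {"emotion_9d": [0.0] * 9}

def simulate_social(emotions: dict) -> dict:
    # MiroFish/OASIS stand-in: 200 agents on a simulated Reddit
    return {"agents": 200, "posts": []}

def analyze_social(sim: dict) -> dict:
    # Social Analyst stand-in: cascade detection + sentiment analysis
    return {"sentiment": 0.0, "cascades": 0}

def strategize(brain: dict, social: dict) -> dict:
    # Strategist stand-in: merge neural + social findings, compute risk,
    # draft the four audience-specific reports
    return {"risk": 0.5, "reports": ["executive", "marketing",
                                     "compliance", "full_research"]}

def guardrail_check(report: dict) -> dict:
    # Guardrail stand-in: only non-manipulative output passes through
    return report

def run_pipeline(video: str) -> dict:
    brain = encode_brain(video)        # video summary would join here too
    emotions = seed_emotions(brain)
    sim = simulate_social(emotions)
    social = analyze_social(sim)
    return guardrail_check(strategize(brain, social))
```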
- Python 3.10+
- Docker (for Neo4j)
- Node.js 18+ (for frontend)
# Clone
git clone https://github.com/OmarMusayev/neuro-social-platform.git
cd neuro-social-platform
# Environment
cp .env.example .env
# Edit .env with your API keys
# Install Python dependencies
pip install -r requirements.txt
# Start Neo4j
docker compose up -d
# Build the knowledge graph (first time only)
python -m rag.build_graph --papers-dir ./research/papers/
# Run analysis on the Jaguar demo
python -m scripts.run_analysis \
--tribe-data ./data/demo/jaguar_tribe_output.json \
--simulation-data ./data/demo/jaguar_simulation_output.json \
--video ./path/to/jaguar_ad.mp4 \
--stimulus "Jaguar Copy Nothing rebrand commercial" \
--audience marketing \
--output ./output/report.json
# Start the frontend
cd frontend && npm install && npm run dev

# LLM (OpenRouter recommended)
LLM_PROVIDER=openrouter
LLM_API_KEY=your-key-here
LLM_MODEL=qwen/qwen3-235b-a22b
LLM_BASE_URL=https://openrouter.ai/api/v1
# Neo4j GraphRAG
NEO4J_RAG_URI=bolt://localhost:7688
NEO4J_RAG_USER=neo4j
NEO4J_RAG_PASSWORD=neurosocial123

The bridge is our novel contribution. It translates brain activation into agent emotional states:
- Network Aggregation: 148 cortical parcels -> 8 functional brain networks
- Emotional Computation: Network activations -> 9D emotional state vector (anxiety, trust, excitement, discomfort, memorability, etc.)
- Personality Modulation: Base emotional state x Big Five personality traits -> individualized response. A neurotic viewer gets anxiety. An extraverted viewer gets curiosity. Same brain data, different reactions.
- Population Distribution: 200 agents across 6 archetypes (enthusiastic sharer, empathetic worrier, skeptical analyst, hostile critic, average viewer, emotional storyteller)
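Step 3 (personality modulation) can be sketched as scaling the shared base emotion vector by each agent's Big Five traits. The trait-to-emotion weights below are invented for illustration; only the pattern (same brain data, trait-dependent reactions) mirrors the platform's approach.

```python
# Hypothetical sketch of personality modulation: one shared 9-D base emotional
# state, scaled per-agent by Big Five traits. Weight values are invented.

BASE_EMOTIONS = ["anxiety", "trust", "excitement", "discomfort",
                 "memorability", "curiosity", "boredom", "awe", "anger"]

# How strongly each trait amplifies (+) or dampens (-) a given emotion
TRAIT_WEIGHTS = {
    "neuroticism":       {"anxiety": 0.8, "discomfort": 0.4},
    "extraversion":      {"curiosity": 0.6, "excitement": 0.5},
    "openness":          {"awe": 0.5},
    "agreeableness":     {"trust": 0.4, "anger": -0.3},
    "conscientiousness": {"boredom": -0.2},
}

def modulate(base: dict, big_five: dict) -> dict:
    """Scale the shared base emotion vector by one agent's traits (0..1)."""
    out = dict(base)
    for trait, level in big_five.items():
        for emotion, w in TRAIT_WEIGHTS.get(trait, {}).items():
            # levels above 0.5 amplify, below 0.5 dampen
            out[emotion] = out[emotion] * (1 + w * (level - 0.5) * 2)
    return out

base = {e: 0.5 for e in BASE_EMOTIONS}
worrier = modulate(base, {"neuroticism": 0.9, "extraversion": 0.2})
sharer = modulate(base, {"neuroticism": 0.1, "extraversion": 0.9})
# Same brain data, different reactions: the empathetic worrier's anxiety
# rises while the enthusiastic sharer's curiosity rises.
```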
Our brain-to-emotion mapping is grounded in peer-reviewed neuroscience literature:
- Corbetta & Shulman (2002) — Attention networks
- Kanwisher et al. (1997) — Face processing
- Klucharev et al. (2008) — Persuasion and memory encoding
- Genevsky et al. (2025) — Neuroforecasting
- Full list of 22 papers
TRIBE v2 outputs cortical surface data (~20,484 vertices). Subcortical structures (amygdala, hippocampus, nucleus accumbens) are inferred from cortical proxy patterns with explicit confidence tagging, not directly measured. This is documented honestly throughout our codebase and reports.
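A minimal sketch of what that confidence tagging might look like in practice; the proxy regions and weights below are invented, and only the measured/inferred labeling pattern reflects the approach described above.

```python
# Hypothetical sketch of confidence-tagged subcortical inference. Proxy
# regions and weights are invented; the point is that inferred values carry
# an explicit confidence label so reports can separate them from measured data.

MEASURED = "measured"   # directly from TRIBE v2 cortical output
INFERRED = "inferred"   # estimated from cortical proxy patterns

# Invented example: estimate an amygdala signal from two cortical proxies
PROXIES = {"amygdala": {"medial_prefrontal": 0.6, "anterior_insula": 0.4}}

def infer_subcortical(cortical: dict) -> dict:
    """Return subcortical estimates, each tagged with its confidence."""
    out = {}
    for region, weights in PROXIES.items():
        value = sum(cortical[p]["value"] * w for p, w in weights.items())
        out[region] = {"value": value, "confidence": INFERRED}
    return out

cortical = {
    "medial_prefrontal": {"value": 0.7, "confidence": MEASURED},
    "anterior_insula":   {"value": 0.5, "confidence": MEASURED},
}
subcortical = infer_subcortical(cortical)
# subcortical["amygdala"]["confidence"] == "inferred"
```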
| Component | Technology | Purpose |
|---|---|---|
| Brain Encoding | Meta TRIBE v2 | Predict fMRI-like cortical activation from media |
| Video Summarizer | LLM + ffmpeg frame extraction | AI-generated scene-by-scene video description |
| Atlas Mapping | Destrieux (aparc.a2009s) | Aggregate vertices into named brain regions |
| Knowledge Graph | Neo4j + sentence-transformers | GraphRAG over 22 neuroscience papers |
| Emotional Bridge | Custom Python | Cortical activation -> 9D emotional state |
| Social Simulation | MiroFish/OASIS | Multi-agent social media simulation |
| Diagnostic Agents | LangGraph | Three-agent analysis pipeline + video context |
| LLM Backend | OpenRouter (Qwen 3) | Model-agnostic, swappable via env vars |
| Frontend | Next.js 15 / React 19 / Three.js | Dashboard and demo interface |
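The Atlas Mapping step amounts to a label-wise mean over surface vertices. A toy sketch, assuming per-vertex Destrieux labels are available (a real implementation would load the ~20,484-vertex label array with a neuroimaging library such as nibabel; the four-vertex arrays here are invented):

```python
# Toy sketch of atlas aggregation: average per-vertex activation into named
# Destrieux regions. The tiny invented arrays stand in for the real surface.
from collections import defaultdict

def aggregate(vertex_values, vertex_labels):
    """Mean activation per labeled region."""
    sums, counts = defaultdict(float), defaultdict(int)
    for value, label in zip(vertex_values, vertex_labels):
        sums[label] += value
        counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

values = [0.25, 0.75, 1.0, 1.0]
labels = ["G_front_sup", "G_front_sup", "S_calcarine", "S_calcarine"]
regions = aggregate(values, labels)
# regions -> {"G_front_sup": 0.5, "S_calcarine": 1.0}
```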
neuro-social-platform/
├── tribe/ # TRIBE v2 brain encoding integration
├── bridge/ # Brain data -> emotional seeding bridge
├── simulation/ # MiroFish/OASIS simulation adapters
├── rag/ # Neo4j GraphRAG pipeline
├── agents/ # LangGraph diagnostic agents
├── api/ # FastAPI backend
├── frontend/ # Next.js 15 dashboard (React, Three.js, Tailwind)
├── scripts/ # CLI tools and utility scripts
├── data/ # Brain architecture + demo data
├── research/ # Analysis outputs and paper references
├── cluster/ # HPC cluster scripts (TRIBE v2)
├── demo_visuals/ # Auto-animated visual assets
├── docs/ # Architecture and pipeline documentation
└── tests/ # Integration tests
Adneural generates four audience-specific reports from the same analysis:
| Report | Audience | Focus |
|---|---|---|
| Executive | C-suite | Go/no-go recommendation with risk dashboard |
| Marketing | Creative team | Timestamped edit recommendations with neural rationale |
| Compliance | Legal/ethics | Mental health impact assessment, regulatory flags |
| Full Research | Technical | Complete neural-social bridge analysis with paper citations |
Built at Purdue Catapult Hackathon by:
- Omar Musayev
- Pranjal Bhatia
- Ved Karamnchandi
- Aadit Kedia
- Piyush Dua
- Rishab Shenai
MIT — see LICENSE for details.
