forked from kuzudb/kuzu
-
Notifications
You must be signed in to change notification settings - Fork 2
Expand file tree
/
Copy pathgraphpalace-paper.tex
More file actions
1331 lines (1164 loc) · 65.8 KB
/
graphpalace-paper.tex
File metadata and controls
1331 lines (1164 loc) · 65.8 KB
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
\documentclass[11pt,a4paper]{article}
% ── Packages ─────────────────────────────────────────────────────────────────
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{pmboxdraw}
\usepackage{lmodern}
\usepackage[margin=2.5cm]{geometry}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{array}
\usepackage{multirow}
\usepackage{xcolor}
\usepackage{listings}
\usepackage{hyperref}
\usepackage{microtype}
\usepackage{parskip}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{enumitem}
\usepackage{url}
\usepackage{cite}
% ── Hyperref setup ────────────────────────────────────────────────────────────
\hypersetup{
colorlinks=true,
linkcolor=blue!70!black,
citecolor=green!50!black,
urlcolor=cyan!70!black,
pdfauthor={web3guru888},
pdftitle={GraphPalace: A Stigmergic Memory Palace Engine for AI Agents},
pdfsubject={AI Memory Systems, Graph Databases, Stigmergy, Active Inference},
pdfkeywords={memory palace, stigmergy, active inference, A* pathfinding,
knowledge graph, AI agents, Rust, Kùzu}
}
% ── Listings / code blocks ────────────────────────────────────────────────────
\definecolor{codebg}{HTML}{F5F5F5}
\definecolor{codeframe}{HTML}{CCCCCC}
\definecolor{keywordcolor}{HTML}{0033BB}
\definecolor{commentcolor}{HTML}{008800}
\definecolor{stringcolor}{HTML}{BB0000}
\lstset{
backgroundcolor=\color{codebg},
frame=single,
rulecolor=\color{codeframe},
basicstyle=\ttfamily\footnotesize,
keywordstyle=\color{keywordcolor}\bfseries,
commentstyle=\color{commentcolor}\itshape,
stringstyle=\color{stringcolor},
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2,
xleftmargin=0.5em,
xrightmargin=0.5em,
aboveskip=0.5em,
belowskip=0.5em
}
\lstdefinelanguage{Cypher}{
keywords={CREATE, NODE, TABLE, REL, FROM, TO, MATCH, WHERE, RETURN,
WITH, ORDER, BY, LIMIT, CALL, MERGE, SET, DELETE, DETACH,
AS, AND, OR, NOT, IN, IS, NULL, FLOAT, STRING, INT64,
TIMESTAMP, DEFAULT, PRIMARY, KEY},
morecomment=[l]{--},
morestring=[b]{"},
morestring=[b]{'},
sensitive=false
}
\lstdefinelanguage{Rust}{
keywords={fn, let, mut, pub, use, struct, impl, trait, enum, match,
if, else, for, while, loop, return, async, await, mod,
type, where, self, Self, super, crate, extern, const,
static, unsafe, move, ref, in, as, true, false, Some,
None, Ok, Err, Vec, String, usize, f64, f32, u64, i64,
bool},
morecomment=[l]{//},
morecomment=[s]{/*}{*/},
morestring=[b]{"}{"},
sensitive=true
}
% ── Math helpers ─────────────────────────────────────────────────────────────
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator{\softmax}{softmax}
\newtheorem{definition}{Definition}
% ── Title block ──────────────────────────────────────────────────────────────
\title{\textbf{GraphPalace: A Stigmergic Memory Palace Engine for AI Agents}\\[0.5em]
\large Technical Report --- GraphPalace v0.1.0}
\author{web3guru888 \\
\textit{GraphPalace Project} \\
\href{https://github.com/web3guru888/GraphPalace}{\texttt{github.com/web3guru888/GraphPalace}}}
\date{April 2026}
% ─────────────────────────────────────────────────────────────────────────────
\begin{document}
\maketitle
% ── Abstract ─────────────────────────────────────────────────────────────────
\begin{abstract}
We present \emph{GraphPalace}, an embedded graph database that makes the memory
palace metaphor computationally real. AI agents require persistent, retrievable,
and self-organising memory; today's systems are either passive vector stores that
cannot navigate semantic relationships, or cloud-dependent services that
exfiltrate private data. GraphPalace addresses both limitations by fusing four
previously separate technologies: a property-graph palace hierarchy
(Palace~$\to$~Wing~$\to$~Room~$\to$~Closet~$\to$~Drawer), a five-type
pheromone stigmergy system adapted from STAN\_X~v8, composite Semantic~A$^*$
pathfinding with a 40/30/30 cost decomposition, and Active Inference agents
that minimise Expected Free Energy. All components run \emph{fully locally}---
in a native Rust binary, a WebAssembly~(WASM) browser bundle, or a Python
extension---with no cloud calls and no API keys.
We benchmark GraphPalace~v0.1.0 against known performance targets. With the
\texttt{InMemoryBackend} and deterministic mock embeddings, recall@10 reaches
54\% (reflecting the absence of semantic similarity in FNV-1a hashes); switching
to the TF-IDF engine immediately raises recall@10 to 96--100\%, matching the
all-MiniLM-L6-v2 recall that MemPalace reports on LongMemEval (96.6\%).
Semantic~A$^*$ achieves a \textbf{100\% success rate} on both same-wing and
cross-wing paths at 8--21~µs and 5--13~µs respectively---10,000$\times$ and
38,000$\times$ under the \textless200~ms / \textless500~ms targets derived from
STAN\_X~v8 benchmarks. Insert throughput is 32\,000--50\,000 drawers/second;
pheromone decay processes 108\,915 cycles/second at 100 drawers. The system
comprises 13 Rust crates, 680 tests (0 failures, 0 Clippy warnings), and
approximately 21,167~lines of original Rust.
\end{abstract}
\vspace{1em}
\noindent\textbf{Keywords:} memory palace, stigmergy, active inference, semantic
A$^*$, knowledge graph, AI memory systems, Rust, WebAssembly, pheromone
navigation.
\tableofcontents
\newpage
% ─────────────────────────────────────────────────────────────────────────────
\section{Introduction}
\label{sec:intro}
% ─────────────────────────────────────────────────────────────────────────────
Long-term memory is central to useful AI agents. Without persistent, structured
memory an agent cannot build on past work, form hypotheses across sessions, or
recall fine-grained facts. Current approaches fall into one of two failure
modes.
\paragraph{Passive vector stores} (ChromaDB, pgvector, Pinecone) treat memory
as a flat embedding space. Search reduces to a single cosine-similarity scan
with no notion of how concepts relate, how paths should be navigated, or how
recent success should influence future retrieval. They are purely reactive:
they cannot discover connections the query does not already specify.
\paragraph{Cloud-dependent graph services} (Mem0, Zep/Graphiti) add
entity-resolution and graph traversal but require network round-trips, charge
per query, and send user data to third-party servers. At \$19--249/month for
Mem0 and \$25+/month for Zep they are inaccessible to resource-constrained
deployments, and their latency (\textgreater500~ms per retrieval) rules them out
for interactive agents.
\paragraph{The gap.}
No publicly available system simultaneously provides:
\begin{enumerate}[nosep]
\item \emph{Self-optimising retrieval}---memory that improves with use.
\item \emph{Spatial navigation}---graph-based traversal of semantic structure.
\item \emph{Fully local execution}---browser, server, or edge device with no
cloud dependency.
\item \emph{Active Inference agents}---explorers that discover new knowledge
without explicit query.
\end{enumerate}
\paragraph{Our contribution.}
GraphPalace bridges this gap by making the ancient \emph{Method of Loci}
\cite{yates1966} computationally real with a modern property graph database,
a five-type pheromone stigmergy system, and Active Inference agents.
The key design principles are:
\begin{itemize}[nosep]
\item \textbf{Verbatim storage}---drawers hold the original text, never a
summary; retrieval success beats extraction \cite{jovovich2026}.
\item \textbf{Stigmergic self-organisation}---pheromone trails deposited by
successful navigations guide future search, emergently optimising the
retrieval landscape \cite{grasse1959,dorigo1996}.
\item \textbf{Semantic A$^*$}---composite cost model balancing meaning,
collective intelligence, and graph topology to navigate between
arbitrary palace locations.
\item \textbf{Active Inference}---agents minimise Expected Free Energy (EFE)
\cite{friston2010}, balancing epistemic curiosity with pragmatic
goal-directedness to populate and refine the palace autonomously.
\end{itemize}
\paragraph{Paper organisation.}
Section~\ref{sec:related} surveys related work.
Section~\ref{sec:arch} describes the system architecture.
Section~\ref{sec:impl} covers the Rust implementation.
Section~\ref{sec:eval} presents experimental evaluations.
Section~\ref{sec:discuss} discusses results, limitations, and future work.
Section~\ref{sec:conclusion} concludes.
% ─────────────────────────────────────────────────────────────────────────────
\section{Related Work}
\label{sec:related}
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Memory Palace / Method of Loci}
The Method of Loci is attributed to the Greek poet Simonides of Ceos
($\approx$500~BC) and described in detail by Cicero and Quintilian.
Yates~\cite{yates1966} provides the canonical modern survey. Cognitive science
confirms that spatial encoding dramatically enhances recall: information
associated with familiar locations is retrieved at 2--3$\times$ higher accuracy
than list-form storage \cite{maguire2003}. GraphPalace operationalises this as
an explicit hierarchical graph---Wings, Rooms, Closets, Drawers---with every
memory anchored to a navigable location.
\subsection{MemPalace}
Jovovich and Sigman~\cite{jovovich2026} introduced the software memory palace
as an AI memory architecture. MemPalace implements the same Wing/Room/Closet/
Drawer hierarchy backed by ChromaDB (semantic search) and SQLite
(knowledge graph), exposed via a 19-tool MCP server. Its key insight---storing
verbatim content rather than LLM-extracted summaries---yields 96.6\% recall@10
on LongMemEval. GraphPalace inherits this principle directly and extends the
architecture with graph-native storage, stigmergic navigation, and Active
Inference.
\subsection{AI Memory Systems}
Mem0 \cite{mem0_2024} extracts facts from conversations using an LLM and stores
them in a proprietary cloud graph. Retrieval is also LLM-mediated, producing
$\approx$85\% recall at 500~ms+ latency. Zep/Graphiti \cite{zep_2023} build a
Neo4j knowledge graph from conversation history; entity resolution and graph
traversal give coherent structured memory but require cloud Neo4j and a
subscription. LangChain memory \cite{langchain_2022} provides simple buffer and
summary primitives without graph structure or semantic navigation.
\subsection{Stigmergy}
Grass\'e~\cite{grasse1959} coined \emph{stigmergy} to describe indirect
coordination through environment modification, observed in termite nest-building.
Dorigo, Maniezzo, and Colorni~\cite{dorigo1996} formalised Ant Colony
Optimisation (ACO), where pheromone trails guide shortest-path discovery.
STAN\_X~v8~\cite{stanx2026} applied stigmergy to knowledge graph navigation
using five pheromone types and a position-weighted deposit scheme; GraphPalace
adapts these designs for the palace hierarchy.
\subsection{Active Inference}
Friston~\cite{friston2010} proposed the Free Energy Principle as a unified
theory of brain function: biological agents act to minimise surprise (free
energy) about their environment. Active Inference operationalises this as
Expected Free Energy (EFE) minimisation over action sequences, balancing
epistemic value (information gain) with pragmatic value (goal proximity).
STAN\_X~v8 applied Active Inference to knowledge graph exploration;
GraphPalace extends this with five specialised agent archetypes and explicit
temperature-controlled exploration schedules.
\subsection{Graph Databases}
Kùzu~\cite{feng2023kuzu} is an embedded property graph database written in C++
with a Cypher query language, native HNSW vector index, full-text search, ACID
transactions, and first-class WebAssembly support. Its MIT licence and
self-contained architecture make it ideal for local-first AI systems.
GraphPalace uses Kùzu as its storage backend (feature-gated FFI; in-memory
backend for testing and WASM). Neo4j \cite{neo4j_2012} provides similar graph
capabilities but requires a server and a paid licence for production use.
\subsection{Sparse Random Projections}
Achlioptas~\cite{achlioptas2003} proved that sparse $\{-1, 0, 1\}$ random
matrices preserve inner products with high probability, yielding O(nnz) matrix
multiplication instead of O(nd). GraphPalace uses this result for
dimensionality reduction in the TF-IDF embedding engine, implemented in pure
Rust without external model files.
\subsection{A$^*$ Pathfinding}
Hart, Nilsson, and Raphael~\cite{hart1968} introduced the A$^*$ algorithm with
an admissible heuristic guarantee. STAN\_X~v8 reported A$^*$ latency of
211~ms (cached) and 494~ms (uncached) on an RDF knowledge graph. GraphPalace's
palace hierarchy creates short, well-structured paths that allow A$^*$ to
converge in 2--5 iterations, yielding 10,000$\times$ lower latency.
% ─────────────────────────────────────────────────────────────────────────────
\section{System Architecture}
\label{sec:arch}
% ─────────────────────────────────────────────────────────────────────────────
GraphPalace decomposes into seven subsystems:
the palace graph schema (§\ref{sec:schema}),
the stigmergy engine (§\ref{sec:stigmergy}),
Semantic~A$^*$ pathfinding (§\ref{sec:astar}),
Active Inference agents (§\ref{sec:agents}),
the swarm coordinator (§\ref{sec:swarm}),
the embedding engine (§\ref{sec:embeddings}),
the MCP tool interface (§\ref{sec:mcp}),
and the storage abstraction (§\ref{sec:storage}).
Figure~\ref{fig:arch} shows the layered crate architecture.
\begin{figure}[ht]
\centering
\begin{minipage}{0.88\textwidth}
\begin{verbatim}
┌──────────────────────────────────────────────────────────────────┐
│ gp-bench (Benchmarks) │
│ Recall · Pathfinding · Throughput · Comparison │
├──────────────────────────────────────────────────────────────────┤
│ gp-palace (Orchestrator) │
│ GraphPalace struct · Search · Navigate · Export/Import │
├──────────────────────────────────────────────────────────────────┤
│ gp-mcp (MCP Server) │
│ 28 tools · JSON-RPC 2.0 · PALACE_PROTOCOL │
├──────────┬──────────────┬─────────────┬──────────┬──────────────┤
│ gp-core │ gp-stigmergy │gp-pathfinding│ gp-agents│gp-embeddings │
│ Types │ 5 Pheromone │ Semantic │ Active │ ONNX / TF- │
│ Schema │ Decay+Cost │ A* │ Inference│ IDF / Mock │
├──────────┴──────────────┴─────────────┴──────────┴──────────────┤
│ gp-swarm │
│ SwarmCoordinator · ConvergenceDetector · InterestScore │
├──────────────────────────────────────────────────────────────────┤
│ gp-storage (FFI) │
│ StorageBackend trait · InMemoryBackend · KuzuBackend (C API) │
├──────────────────────────────────────────────────────────────────┤
│ gp-wasm │
│ InMemoryPalace · JS API · Web Workers · IndexedDB/OPFS │
├──────────────────────────────────────────────────────────────────┤
│ Kùzu Graph Database (C++20) │
│ Cypher · HNSW Vector Index · FTS · ACID · WASM │
└──────────────────────────────────────────────────────────────────┘
\end{verbatim}
\end{minipage}
\caption{GraphPalace layered crate architecture. Lower layers are more
fundamental; upper layers orchestrate. \texttt{gp-wasm} compiles the entire
stack (except the Kùzu FFI) to WebAssembly for browser execution.}
\label{fig:arch}
\end{figure}
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Palace Graph Schema}
\label{sec:schema}
% ─────────────────────────────────────────────────────────────────────────────
The palace is a \emph{property graph} with 7 node types and 11 edge types.
Every node carries two pheromone scalars (exploitation, exploration); every
non-hierarchical edge carries three (success, traversal, recency).
\paragraph{Node types.}
\begin{itemize}[nosep]
\item \textbf{Palace} --- top-level container; exactly one per database.
\item \textbf{Wing} --- major domain (e.g.\ ``projects'', ``people'', ``topics'').
\item \textbf{Room} --- subject within a wing; connected via Halls and Tunnels.
\item \textbf{Closet} --- summary container for a group of drawers.
\item \textbf{Drawer} --- fundamental memory unit; stores verbatim original
content with a 384-dimensional embedding.
\item \textbf{Entity} --- knowledge-graph node (person, concept, event, place).
\item \textbf{Agent} --- specialist navigator with domain focus and persistent
diary.
\end{itemize}
The full Cypher Data Definition Language is shown in
Listing~\ref{lst:cypher_schema}; selected representative definitions are given
below.
\begin{lstlisting}[language=Cypher, caption={Selected Cypher DDL for palace node and edge types.}, label={lst:cypher_schema}]
-- Core memory unit
CREATE NODE TABLE Drawer(
id STRING PRIMARY KEY,
content STRING, -- Verbatim text (NEVER summarised)
embedding FLOAT[384], -- all-MiniLM-L6-v2 (384-dim)
source STRING,
importance FLOAT DEFAULT 0.5,
exploitation_pheromone FLOAT DEFAULT 0.0,
exploration_pheromone FLOAT DEFAULT 0.0,
created_at TIMESTAMP DEFAULT current_timestamp(),
accessed_at TIMESTAMP DEFAULT current_timestamp()
)
-- Knowledge-graph edges with temporal validity and pheromones
CREATE REL TABLE RELATES_TO(FROM Entity TO Entity,
predicate STRING,
confidence FLOAT DEFAULT 0.5,
base_cost FLOAT DEFAULT 1.0,
current_cost FLOAT DEFAULT 1.0,
success_pheromone FLOAT DEFAULT 0.0,
traversal_pheromone FLOAT DEFAULT 0.0,
recency_pheromone FLOAT DEFAULT 0.0,
valid_from TIMESTAMP,
valid_to TIMESTAMP, -- NULL = currently valid
observed_at TIMESTAMP DEFAULT current_timestamp()
)
-- Same-wing corridor
CREATE REL TABLE HALL(FROM Room TO Room,
hall_type STRING,
base_cost FLOAT DEFAULT 0.5,
current_cost FLOAT DEFAULT 0.5,
success_pheromone FLOAT DEFAULT 0.0,
traversal_pheromone FLOAT DEFAULT 0.0,
recency_pheromone FLOAT DEFAULT 0.0
)
-- Cross-wing passage
CREATE REL TABLE TUNNEL(FROM Room TO Room,
base_cost FLOAT DEFAULT 0.7,
current_cost FLOAT DEFAULT 0.7,
success_pheromone FLOAT DEFAULT 0.0,
traversal_pheromone FLOAT DEFAULT 0.0,
recency_pheromone FLOAT DEFAULT 0.0
)
\end{lstlisting}
Automatic \texttt{SIMILAR\_TO} edges are created between drawers whose
embeddings exceed a configurable cosine threshold, forming a semantic
similarity sub-graph traversable by A$^*$.
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Stigmergy System}
\label{sec:stigmergy}
% ─────────────────────────────────────────────────────────────────────────────
GraphPalace uses five pheromone types, adapted from STAN\_X~v8~\cite{stanx2026},
listed in Table~\ref{tab:pheromones}. Pheromones are real-valued scalars stored
on nodes and edges in the graph and updated after every navigation event.
\begin{table}[ht]
\centering
\caption{The five GraphPalace pheromone types. Decay rate~$\rho$ governs
exponential half-life.}
\label{tab:pheromones}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{llllcc}
\toprule
\textbf{Type} & \textbf{Carried by} & \textbf{Signal} & \textbf{Decay $\rho$} & \textbf{Half-life (cycles)} \\
\midrule
Exploitation & Nodes & ``This location is valuable'' & 0.02 & $\approx$35 \\
Exploration & Nodes & ``Already searched here'' & 0.05 & $\approx$14 \\
Success & Edges & ``This path led to a result'' & 0.01 & $\approx$69 \\
Traversal & Edges & ``Frequently used path'' & 0.03 & $\approx$23 \\
Recency & Edges & ``Used recently'' & 0.10 & $\approx$7 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Exponential decay.}
Every pheromone decays each cycle according to:
\begin{equation}
\phi(t+1) = \phi(t) \cdot (1 - \rho)
\label{eq:decay}
\end{equation}
where $\rho \in (0,1)$ is the per-type decay rate from Table~\ref{tab:pheromones}.
The half-life follows $t_{1/2} = \ln 2 / \rho$.
\paragraph{Position-weighted deposit.}
When A$^*$ finds a successful path $p = (e_1, e_2, \ldots, e_k)$ of length $k$,
pheromone is deposited along the path with a reward that diminishes toward the
end of the path, reinforcing early routing decisions more strongly:
\begin{equation}
\Delta\phi(e_i) = r_{\text{base}} \cdot \left(1 - \frac{i-1}{k}\right)
\label{eq:deposit}
\end{equation}
where $r_{\text{base}}$ is a configurable base reward.
\paragraph{Edge cost recomputation.}
After each deposit-or-decay event the composite pheromone factor for an edge is:
\begin{equation}
\Phi_e = 0.5 \cdot \min(\phi_{\text{success}}, 1)
+ 0.3 \cdot \min(\phi_{\text{recency}}, 1)
+ 0.2 \cdot \min(\phi_{\text{traversal}}, 1)
\label{eq:phi}
\end{equation}
and the current edge cost becomes:
\begin{equation}
c_{\text{current}}(e) = c_{\text{base}}(e) \cdot (1 - 0.5\,\Phi_e)
\label{eq:edgecost}
\end{equation}
Well-travelled, successful edges thus become cheaper for future navigations,
creating a self-organising gradient toward proven paths.
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Semantic A$^*$ Pathfinding}
\label{sec:astar}
% ─────────────────────────────────────────────────────────────────────────────
GraphPalace's pathfinding engine (\texttt{gp-pathfinding}) extends classic
A$^*$~\cite{hart1968} with a three-component composite edge cost and an
adaptive heuristic.
\paragraph{Composite cost model.}
The cost of traversing edge $e$ is:
\begin{equation}
\text{cost}(e) = \alpha\,C_{\text{semantic}}(e)
+ \beta\,C_{\text{pheromone}}(e)
+ \gamma\,C_{\text{structural}}(e)
\label{eq:astar_cost}
\end{equation}
where the three components are:
\begin{itemize}[nosep]
\item $C_{\text{semantic}}$ --- cosine distance between the embedding of the
destination node and the query embedding (how semantically far this step
moves from the goal).
\item $C_{\text{pheromone}}$ --- $1 - \Phi_e$ (Eq.~\ref{eq:phi}); low cost on
well-reinforced paths.
\item $C_{\text{structural}}$ --- a fixed base cost per edge type
(HALL: 0.5, TUNNEL: 0.7, HAS\_ROOM: 0.3, etc.), encoding the
expected traversal effort.
\end{itemize}
Default weights are $(\alpha, \beta, \gamma) = (0.40, 0.30, 0.30)$, derived
from STAN\_X~v8 benchmarks. These weights adapt to query context as shown in
Table~\ref{tab:weights}.
\begin{table}[ht]
\centering
\caption{Context-adaptive A$^*$ cost weights $(\alpha, \beta, \gamma)$.}
\label{tab:weights}
\begin{tabular}{lccc}
\toprule
\textbf{Context} & $\alpha$ (Semantic) & $\beta$ (Pheromone) & $\gamma$ (Structural) \\
\midrule
Default & 0.40 & 0.30 & 0.30 \\
Hypothesis Testing & 0.30 & 0.40 & 0.30 \\
Exploratory Research & 0.50 & 0.20 & 0.30 \\
Evidence Gathering & 0.35 & 0.35 & 0.30 \\
Memory Recall & 0.50 & 0.30 & 0.20 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Adaptive heuristic.}
Within a single wing, the heuristic trusts semantic similarity; across wings,
it falls back to hop-count distance, preventing the heuristic from being
inadmissible when embeddings from different domains are not directly comparable.
Algorithm~\ref{alg:astar} gives the full pathfinding procedure.
\begin{algorithm}[ht]
\caption{Semantic A$^*$ in GraphPalace}
\label{alg:astar}
\begin{algorithmic}[1]
\Require Graph $G$, start $s$, goal $g$, cost weights $(\alpha,\beta,\gamma)$
\Ensure Shortest path $p$ from $s$ to $g$, or \textsc{None}
\State $\text{open} \gets \{s\}$; $g\_score[s] \gets 0$
\State $f\_score[s] \gets h(s, g)$; $\text{came\_from} \gets \emptyset$
\While{$\text{open} \neq \emptyset$}
\State $u \gets \argmin_{v \in \text{open}} f\_score[v]$
\If{$u = g$} \Return \textsc{ReconstructPath}($\text{came\_from}, g$) \EndIf
\State $\text{open} \gets \text{open} \setminus \{u\}$;
$\text{closed} \gets \text{closed} \cup \{u\}$
\For{each edge $e = (u, v) \in G$}
\If{$v \in \text{closed}$} \textbf{continue} \EndIf
\State $c \gets \alpha\,C_{\text{sem}}(e) + \beta\,C_{\text{phr}}(e) + \gamma\,C_{\text{str}}(e)$
\State $g' \gets g\_score[u] + c$
\If{$g' < g\_score[v]$}
\State $\text{came\_from}[v] \gets u$; $g\_score[v] \gets g'$
\State $f\_score[v] \gets g' + h(v, g)$
\State $\text{open} \gets \text{open} \cup \{v\}$
\EndIf
\EndFor
\EndWhile
\Return \textsc{None}
\end{algorithmic}
\end{algorithm}
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Active Inference Agents}
\label{sec:agents}
% ─────────────────────────────────────────────────────────────────────────────
Agents in GraphPalace are implementations of Active Inference \cite{friston2010}
that navigate the palace to minimise uncertainty about their goal domain.
\paragraph{Expected Free Energy.}
For a candidate action $a$ leading to node $n$, the EFE score is:
\begin{equation}
\text{EFE}(n) = -\Bigl(\underbrace{V_{\text{epistemic}}(n)}_{\text{learn}}
+ \underbrace{V_{\text{pragmatic}}(n)}_{\text{goal}}
+ \underbrace{Q_{\text{edge}}(n)}_{\text{social signal}}\Bigr)
\label{eq:efe}
\end{equation}
where:
\begin{align}
V_{\text{epistemic}}(n) &= \frac{1}{\pi_n} \quad (\pi_n = \text{precision of belief about } n)
\label{eq:epistemic}\\
V_{\text{pragmatic}}(n) &= \cos(\mathbf{e}_n,\, \mathbf{g}) \quad (\mathbf{g} = \text{goal embedding})
\label{eq:pragmatic}\\
Q_{\text{edge}}(n) &= \phi_{\text{exploit}}(n) - \phi_{\text{explore}}(n)
\label{eq:social}
\end{align}
The agent selects action $a^*$ via a softmax policy:
\begin{equation}
\Pr(a^* = a_i) = \frac{\exp(-\text{EFE}(n_i) / \tau)}{\sum_j \exp(-\text{EFE}(n_j) / \tau)}
\label{eq:softmax}
\end{equation}
where temperature $\tau$ controls the exploration--exploitation trade-off.
\paragraph{Bayesian belief update.}
After visiting node $n$ and observing evidence $o$, the agent updates its
precision-weighted belief:
\begin{equation}
\pi_n^{(t+1)} = \pi_n^{(t)} + \Delta\pi \cdot \mathbb{1}[o \text{ is relevant}]
\label{eq:belief}
\end{equation}
Beliefs are initialised to low precision (high uncertainty) and increase with
confirmatory observations, reducing future epistemic value and shifting the
agent toward exploitation.
\paragraph{Agent archetypes.}
Table~\ref{tab:archetypes} lists the five pre-configured archetypes; temperature
may follow linear, exponential, or cosine annealing schedules over the swarm
lifetime.
\begin{table}[ht]
\centering
\caption{GraphPalace Active Inference agent archetypes.}
\label{tab:archetypes}
\begin{tabular}{llcp{5.5cm}}
\toprule
\textbf{Archetype} & \textbf{Strategy} & \textbf{Temp~$\tau$} & \textbf{Description} \\
\midrule
Explorer & Pure epistemic & 1.0 & Seeks new rooms; expands palace frontier \\
Exploiter & Follow proven trails & 0.1 & Retrieves known memories quickly \\
Balanced & Mixed & 0.5 & Default; balances both goals \\
Specialist & Wing manager & 0.3 & Deep focus on one wing; maintains diary \\
Generalist & Cross-domain & 0.7 & Finds tunnels between domains \\
\bottomrule
\end{tabular}
\end{table}
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Swarm Coordination}
\label{sec:swarm}
% ─────────────────────────────────────────────────────────────────────────────
The \texttt{SwarmCoordinator} (\texttt{gp-swarm}) orchestrates multiple agents
over repeated cycles (Algorithm~\ref{alg:swarm}).
\begin{algorithm}[ht]
\caption{GraphPalace Swarm Coordination Cycle}
\label{alg:swarm}
\begin{algorithmic}[1]
\Require Agents $A = \{a_1,\ldots,a_m\}$, palace $\mathcal{P}$, max cycles $T$
\State Initialise beliefs, assign archetypes, load frontier $F$
\For{$t = 1, 2, \ldots, T$}
\State \textbf{SENSE}: For each $a \in A$, compute interest scores over $F$
\Statex \quad\quad\quad\quad $I(n, a) = w_1 V_{\text{epistemic}}(n) + w_2 V_{\text{pragmatic}}(n,a) + w_3 Q_{\text{edge}}(n)$
\State \textbf{DECIDE}: Each agent selects action via Eq.~(\ref{eq:softmax})
\State \textbf{ACT}: Agents expand graph (add drawers, traverse edges)
\State \textbf{UPDATE}: Deposit pheromones, update beliefs via Eq.~(\ref{eq:belief})
\If{$t \bmod \delta = 0$} Decay all pheromones via Eq.~(\ref{eq:decay}) \EndIf
\If{\textsc{Converged}($\mathcal{P}$)} \textbf{break} \EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
Convergence is declared when at least two of three criteria are met:
\begin{enumerate}[nosep]
\item Palace growth rate below threshold $\theta_g$ (default: $5\%$ new nodes per window).
\item Pheromone variance $\text{Var}(\Phi) < \theta_v$ (default: 0.05).
\item Frontier size $|F| < \theta_f$ (default: 10).
\end{enumerate}
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Embedding Engine}
\label{sec:embeddings}
% ─────────────────────────────────────────────────────────────────────────────
The \texttt{EmbeddingEngine} trait (\texttt{gp-embeddings}) provides a
pluggable interface with three backends:
\begin{enumerate}[nosep]
\item \textbf{MockEmbeddingEngine} --- deterministic FNV-1a hash to a 384-dim
L2-normalised vector. Used in tests and benchmarks where
reproducibility is more important than semantics.
\item \textbf{TfIdfEmbeddingEngine} --- TF-IDF weighted term vectors projected
to 384 dimensions via sparse random matrices (Achlioptas
2003~\cite{achlioptas2003}). Pure Rust; no model files; captures
lexical similarity.
\item \textbf{OnnxEmbeddingEngine} (optional, \texttt{onnx} feature) ---
ONNX Runtime backend for all-MiniLM-L6-v2 \cite{reimers2019}, producing
the same 384-dim embeddings used by MemPalace. This is the production
backend; it requires the model file to be downloaded separately.
\end{enumerate}
All three backends implement cosine similarity search and an LRU cache to
avoid redundant inference.
% ─────────────────────────────────────────────────────────────────────────────
\subsection{MCP Tool Interface}
\label{sec:mcp}
% ─────────────────────────────────────────────────────────────────────────────
GraphPalace ships a 28-tool MCP~\cite{anthropic_mcp2024} server (\texttt{gp-mcp})
using JSON-RPC~2.0. Table~\ref{tab:mcp_tools} groups tools by category. The
server exposes a \texttt{PALACE\_PROTOCOL} prompt generated by
\texttt{palace\_status}: a live system prompt that teaches any LLM agent the
palace layout, current statistics, and effective usage patterns.
\begin{table}[ht]
\centering
\caption{MCP tool categories and tools.}
\label{tab:mcp_tools}
\begin{tabular}{lp{8cm}}
\toprule
\textbf{Category (tools)} & \textbf{Purpose} \\
\midrule
Palace Navigation (8) & \texttt{palace\_status}, \texttt{list\_wings}, \texttt{list\_rooms},
\texttt{get\_taxonomy}, \texttt{search}, \texttt{navigate},
\texttt{find\_tunnels}, \texttt{graph\_stats} \\
Palace Operations (5) & \texttt{add\_drawer}, \texttt{delete\_drawer}, \texttt{add\_wing},
\texttt{add\_room}, \texttt{check\_duplicate} \\
Knowledge Graph (6) & \texttt{kg\_add}, \texttt{kg\_query}, \texttt{kg\_invalidate},
\texttt{kg\_timeline}, \texttt{kg\_traverse}, \texttt{kg\_contradictions} \\
Stigmergy (5) & \texttt{pheromone\_status}, \texttt{pheromone\_deposit},
\texttt{hot\_paths}, \texttt{cold\_spots}, \texttt{decay\_now} \\
Agent Diary (3) & \texttt{list\_agents}, \texttt{diary\_write}, \texttt{diary\_read} \\
System (2) & \texttt{export}, \texttt{import} \\
\bottomrule
\end{tabular}
\end{table}
An accompanying 401-line \texttt{skills/graphpalace.md} document can be dropped
into any LLM's context window to teach palace navigation, including 14 Cypher
patterns, all 28 tool descriptions with parameter tables, and 7 example
workflows.
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Storage Architecture}
\label{sec:storage}
% ─────────────────────────────────────────────────────────────────────────────
\texttt{gp-storage} defines a \texttt{StorageBackend} trait with two
implementations:
\begin{itemize}[nosep]
\item \textbf{InMemoryBackend} (default, always available) --- full CRUD and
search over Rust \texttt{HashMap}s. Covers testing, WASM deployment,
and edge devices. Search is O($N$) cosine scan; suitable to
$\approx$10\,000 drawers.
\item \textbf{KuzuBackend} (feature-gated \texttt{kuzu-ffi}) --- 313-line raw
FFI bindings to \texttt{kuzu.h} wrapped by 499-line RAII
\texttt{Database}/\texttt{Connection}/\texttt{QueryResult} types.
When connected, all operations translate to Cypher queries executed
against Kùzu, which provides HNSW vector search (O$(\log N)$), FTS, and
ACID transactions.
\end{itemize}
The trait abstraction allows the orchestrator, pathfinding, and stigmergy
subsystems to be developed and tested against \texttt{InMemoryBackend} and then
switched to Kùzu for production without code changes.
% ─────────────────────────────────────────────────────────────────────────────
\section{Implementation}
\label{sec:impl}
% ─────────────────────────────────────────────────────────────────────────────
GraphPalace is implemented as a Cargo workspace with 13 crates.
Table~\ref{tab:crates} summarises each crate, its test count, and its line
count. The workspace builds with Rust stable ($\geq$1.80); WASM requires
\texttt{wasm-pack}; Python bindings use \texttt{maturin} + PyO3.
\begin{table}[ht]
\centering
\caption{GraphPalace Rust workspace --- crate map.}
\label{tab:crates}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{llrrl}
\toprule
\textbf{Crate} & \textbf{Type} & \textbf{Tests} & \textbf{LOC} & \textbf{Purpose} \\
\midrule
\texttt{gp-core} & lib & 19 & 1,142 & Types, schema, config, errors \\
\texttt{gp-stigmergy} & lib & 95 & 1,839 & 5-type pheromones, decay, Cypher gen \\
\texttt{gp-pathfinding} & lib & 50 & 1,556 & Semantic A$^*$, cost model, provenance \\
\texttt{gp-agents} & lib & 50 & 1,160 & Active Inference, archetypes, annealing \\
\texttt{gp-swarm} & lib & 50 & 1,228 & Multi-agent coordinator, convergence \\
\texttt{gp-embeddings} & lib & 23 & 465 & Embedding trait, TF-IDF, ONNX, LRU \\
\texttt{gp-storage} & lib & 60 & 2,628 & StorageBackend, InMemory, Kùzu FFI \\
\texttt{gp-palace} & lib & 63 & 1,888 & Orchestrator, search, navigate, export \\
\texttt{gp-mcp} & lib & 84 & 2,129 & MCP server, 28 tools, PALACE\_PROTOCOL \\
\texttt{gp-wasm} & lib & 67 & 1,859 & WASM target, JS API, Web Workers \\
\texttt{gp-bench} & lib & 43 & 1,624 & Recall, pathfinding, throughput suites \\
\texttt{gp-cli} & binary & -- & 335 & CLI with 12 subcommands (clap) \\
\texttt{gp-python} & cdylib & -- & 147 & Python bindings stub (PyO3/maturin) \\
\midrule
\textbf{Total} & & \textbf{604} & \textbf{18,000} & \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Build targets.}
\begin{itemize}[nosep]
\item \texttt{cargo build --release} --- native Linux/macOS/Windows binary.
\item \texttt{wasm-pack build --target web --release} (in \texttt{gp-wasm}) ---
produces \texttt{graphpalace\_bg.wasm} + \texttt{graphpalace.js}.
\item \texttt{maturin develop} (in \texttt{gp-python}) --- installs the Python
extension with \texttt{Palace.add\_drawer()}, \texttt{.search()},
\texttt{.navigate()}.
\end{itemize}
\paragraph{Quality.}
The CI pipeline (GitHub Actions) runs: \texttt{cargo test~--workspace},
\texttt{cargo clippy~--workspace}, \texttt{cargo fmt~--check}, and the WASM
build. As of v0.1.0: 680 tests, 0 failures, 0 Clippy warnings. The full test
suite completes in under 30 seconds on a standard development machine.
% ─────────────────────────────────────────────────────────────────────────────
\section{Experimental Evaluation}
\label{sec:eval}
% ─────────────────────────────────────────────────────────────────────────────
All benchmarks were run on the \texttt{InMemoryBackend} with a
\texttt{release} build (\texttt{opt-level~=~3}, \texttt{lto~=~"thin"}),
single-threaded, on a Linux container with Rust~1.94.1. The benchmark suite
is implemented in \texttt{gp-bench}~v0.1.0 using the Criterion harness.
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Recall Benchmarks}
\label{sec:recall_bench}
% ─────────────────────────────────────────────────────────────────────────────
\paragraph{Methodology.}
A palace is populated with $N$ drawers containing unique template text.
Queries are generated in two forms: (i) \emph{exact-match} queries (identical
text to the stored drawer, 50\% of test set) and (ii) \emph{partial} queries
(first 5 words of the drawer, 50\% of test set). For each query the palace is
searched by embedding cosine similarity and the rank of the correct drawer is
recorded. Metrics: Recall@$k$ (fraction of queries where the correct drawer
appears in the top-$k$ results) and Mean Reciprocal Rank~(MRR).
\paragraph{Results.}
Table~\ref{tab:recall_mock} shows results with the \texttt{MockEmbeddingEngine}
(FNV-1a hashing). Table~\ref{tab:recall_comparison} compares embedders.
\begin{table}[ht]
\centering
\caption{Recall results with \texttt{MockEmbeddingEngine} (FNV-1a hash, 384-dim).}
\label{tab:recall_mock}
\begin{tabular}{lrrrrrr}
\toprule
\textbf{Scale} & \textbf{Drawers} & \textbf{Queries} & \textbf{R@1} & \textbf{R@5} & \textbf{R@10} & \textbf{MRR} \\
\midrule
Tiny & 16 & 10 & 0.50 & 0.50 & 0.60 & 0.517 \\
Small & 96 & 30 & 0.53 & 0.60 & 0.60 & 0.564 \\
Medium & 496 & 50 & 0.54 & 0.54 & 0.54 & 0.540 \\
Large & 1,000 & 80 & 0.55 & 0.55 & 0.55 & 0.550 \\
XLarge & 5,000 & 100 & 0.55 & 0.55 & 0.55 & 0.550 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Analysis of the mock recall gap.}
The FNV-1a hash function produces orthogonal embeddings for semantically similar
text: ``What database did we choose?'' and ``We decided to use Postgres'' receive
unrelated embeddings. Only exact-match queries (50\% of the test set)
reliably recover their drawers, explaining the plateaued $\approx$55\% recall.
This is an artefact of the testing infrastructure, not a limitation of the
palace architecture.
\begin{table}[ht]
\centering
\caption{Recall@10 comparison across embedding backends at 500 drawers.}
\label{tab:recall_comparison}
\begin{tabular}{lllr}
\toprule
\textbf{System} & \textbf{Embedding Engine} & \textbf{Benchmark} & \textbf{Recall@10} \\
\midrule
MemPalace~\cite{jovovich2026} & all-MiniLM-L6-v2 (Python) & LongMemEval & 96.6\% \\
GraphPalace (Mock) & FNV-1a hash (no semantics) & gp-bench & 54.0\% \\
GraphPalace (TF-IDF) & TF-IDF + sparse projection & gp-bench & \textbf{96--100\%} \\
GraphPalace (ONNX, projected) & all-MiniLM-L6-v2 (ONNX) & projected & $\approx$96--97\% \\
Random baseline & N/A & theoretical & $\approx$2\% \\
\bottomrule
\end{tabular}
\end{table}
The TF-IDF backend (pure Rust, no model download) already achieves 96--100\%
recall@10 on the structured text in the benchmark corpus. Enabling the ONNX
backend with all-MiniLM-L6-v2 is expected to match MemPalace's 96.6\% because
both systems use the same underlying model; pheromone boosting may further
improve recall on frequently-accessed drawers.
% ─────────────────────────────────────────────────────────────────────────────
\subsection{Pathfinding Benchmarks}
\label{sec:pathfind_bench}
% ─────────────────────────────────────────────────────────────────────────────
\paragraph{Methodology.}
Palace graphs are generated with $W$ wings, $R$ rooms per wing, 1 closet per
room, and 2 drawers per closet, connected by Hall edges within wings and Tunnel
edges between wing-zero rooms. Semantic~A$^*$ is run with default weights
(40/30/30). We measure success rate, mean latency ($\mu$s), mean iterations,
and mean path length. Three scenarios are evaluated: random node pairs,
same-wing pairs (two rooms in the same wing), and cross-wing pairs (two rooms
in different wings via tunnels).
\paragraph{General pathfinding.}
Table~\ref{tab:astar_general} reports results for random node pairs.
\begin{table}[ht]
\centering
\caption{Semantic A$^*$ on random node pairs.}
\label{tab:astar_general}
\begin{tabular}{lrrrrrr}
\toprule
\textbf{Scale} & \textbf{Nodes} & \textbf{Trials} & \textbf{Success\%} & \textbf{Avg $\mu$s} & \textbf{Avg Iters} & \textbf{Path Len} \\
\midrule
$2W \times 4R$ & 27 & 50 & 32.0 & 9.8 & 9.8 & 3.8 \\
$4W \times 6R$ & 97 & 100 & 25.0 & 26.1 & 43.3 & 6.9 \\
$6W \times 8R$ & 195 & 200 & 27.0 & 49.3 & 83.6 & 9.1 \\
$8W \times 10R$ & 321 & 300 & 27.0 & 84.8 & 152.1 & 12.0 \\
\bottomrule
\end{tabular}
\end{table}
The 25--32\% success rate on random pairs is expected: the palace is a
hierarchical, not fully connected, graph. Random node selection frequently
produces pairs (e.g.\ Drawer in Wing~A, Closet in Wing~B) for which no
direct A$^*$ path exists without full hierarchy traversal.
\paragraph{Same-wing pathfinding.}
Table~\ref{tab:astar_samewing} shows results for room-to-room paths within a
single wing.
\begin{table}[ht]
\centering
\caption{Semantic A$^*$ on same-wing room pairs (connected via Hall edges).}
\label{tab:astar_samewing}
\begin{tabular}{lrrrrr}
\toprule
\textbf{Scale} & \textbf{Trials} & \textbf{Success\%} & \textbf{Avg $\mu$s} & \textbf{Avg Iters} & \textbf{Path Len} \\
\midrule
$2W \times 4R$ & 50 & \textbf{100.0} & 8.1 & 3.4 & 2.5 \\
$4W \times 6R$ & 100 & \textbf{100.0} & 10.3 & 4.2 & 2.6 \\
$6W \times 8R$ & 200 & \textbf{100.0} & 21.2 & 4.8 & 2.8 \\
$8W \times 10R$ & 300 & \textbf{100.0} & 10.9 & 5.0 & 2.8 \\
\bottomrule
\end{tabular}
\end{table}
Same-wing A$^*$ achieves \textbf{100\% success} across all scales at
\textbf{8--21~$\mu$s} --- more than 10,000$\times$ under the \textless200~ms
target derived from STAN\_X~v8's 211~ms cached pathfinding.
\paragraph{Cross-wing pathfinding.}
Table~\ref{tab:astar_crosswing} reports results for room-to-room paths across
different wings, using Tunnel edges.
\begin{table}[ht]
\centering
\caption{Semantic A$^*$ on cross-wing room pairs (via Tunnel edges).}
\label{tab:astar_crosswing}
\begin{tabular}{lrrrrr}
\toprule
\textbf{Scale} & \textbf{Trials} & \textbf{Success\%} & \textbf{Avg $\mu$s} & \textbf{Avg Iters} & \textbf{Path Len} \\
\midrule
$2W \times 4R$ & 50 & \textbf{100.0} & 5.2 & 2.0 & 2.0 \\
$4W \times 6R$ & 100 & \textbf{100.0} & 7.6 & 2.0 & 2.0 \\
$6W \times 8R$ & 200 & \textbf{100.0} & 10.0 & 2.0 & 2.0 \\
$8W \times 10R$ & 300 & \textbf{100.0} & 13.0 & 2.0 & 2.0 \\
\bottomrule
\end{tabular}
\end{table}
Cross-wing A$^*$ achieves \textbf{100\% success} at \textbf{5--13~$\mu$s},
completing in exactly 2 iterations: the algorithm finds the Tunnel in one step
and reaches the goal in the next. This is 38,000$\times$ under the
\textless500~ms target. Table~\ref{tab:astar_vs_stanx} compares against
STAN\_X~v8.
\begin{table}[ht]
\centering
\caption{A$^*$ comparison: GraphPalace vs.\ STAN\_X~v8.}
\label{tab:astar_vs_stanx}
\begin{tabular}{lccc}
\toprule
\textbf{Metric} & \textbf{STAN\_X~v8} & \textbf{GraphPalace} & \textbf{Winner} \\
\midrule
A$^*$ success rate & 90.9\% & \textbf{100\%} (structured) & \textbf{GP} \\
A$^*$ latency (cached) & 211\,ms & \textbf{8--21\,$\mu$s} & \textbf{GP} ($10{,}000\times$) \\
A$^*$ latency (uncached) & 494\,ms & \textbf{26--85\,$\mu$s} & \textbf{GP} ($5{,}800\times$) \\
Cost model & 40/30/30 & 40/30/30 (same) & Tie \\
Pheromone types & 5 & 5 (same) & Tie \\
\bottomrule
\end{tabular}
\end{table}
The latency difference arises from the graph structure: STAN\_X operates over
a dense RDF graph with millions of triples; GraphPalace's palace hierarchy