This repository contains the practical implementation and the frameworks detailed in the academic paper: "Pragmatic Alignment of Large Language Models through Intensity-Weighted Feedback Markers".
Developed by Marcelo Omar Lancry Kamycki under the LANKAMAR brand, the Antigravity project focuses on building autonomous agents capable of maintaining pragmatic alignment through intensity markers and stratified memory.
- PIMs (Pragmatic Intensity Markers): implementation of high-fidelity feedback.
- Stratified Memory Architecture: management of the Academic, Pragmatic, and Research registers.
- Agentic Scripts: automation and workflows for intelligent agents.
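As a rough illustration of how intensity-weighted feedback markers could be represented, here is a minimal Python sketch. The `PIM` class, the marker vocabulary, and the weights are hypothetical placeholders for this README, not the repository's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical intensity scale: a marker is a string paired with a weight
# in [0, 1] indicating how strongly the user is signalling an error.
@dataclass(frozen=True)
class PIM:
    marker: str
    intensity: float  # 0.0 = mild nudge, 1.0 = maximum-salience correction

# Illustrative vocabulary only; the real marker set is defined in the paper.
DEFAULT_PIMS = [
    PIM("hmm, not quite", 0.3),
    PIM("no, that's wrong", 0.6),
    PIM("stop. reread the prompt", 0.9),
]

def strongest_marker(feedback: str, pims=DEFAULT_PIMS):
    """Return the highest-intensity marker found in a feedback turn, or None."""
    hits = [p for p in pims if p.marker.lower() in feedback.lower()]
    return max(hits, key=lambda p: p.intensity, default=None)
```

The weight attached to the detected marker could then drive how aggressively the agent recalibrates its next turn.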
Large language models (LLMs) exhibit systematic performance degradation in multi-turn interactions, particularly when engaging with expert users whose communicative patterns deviate from the "average user" distribution inherent in RLHF training data. This paper presents an empirical field study documenting a pragmatic alignment methodology developed through extensive in-context learning sessions with Google Gemini.
We introduce Pragmatic Intensity Markers (PIMs)—linguistic signals traditionally categorized as vulgar or non-standard—as high-fidelity feedback mechanisms for real-time error correction in LLM interactions. Through systematic observation across extended dialogue sessions (60+ turns), we document a persistent ~60% performance degradation attributable to intent mismatch, KV cache saturation, and regression to prior distributions.
The Antigravity framework is designed to run fully autonomously, free of charge, in the cloud using GitHub Actions plus the Google Gemini API, or locally.
- Fork or clone this repository.
- Go to Settings -> Secrets and variables -> Actions in your repository.
- Create a "New repository secret" named `GEMINI_API_KEY` and paste your Google AI Studio key.
- In the Actions tab, select the Antigravity Autonomous Agent workflow.
- Click Run workflow, type your prompt, choose the memory register (academic, pragmatic, or research), and you're done.
- Once it finishes (~30 seconds), the PIM-processed result can be downloaded as JSON from the Artifacts section.
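For a sense of what the downloaded artifact might contain, here is a sketch of a writer for such a JSON result. The field names (`prompt`, `register`, `response`, `timestamp`) and the file name are assumptions for illustration, not the repository's actual output schema.

```python
import json
import time

# Register names follow the CLI arguments shown in this README.
VALID_REGISTERS = {"academico", "pragmatico", "investigacion"}

def write_artifact(prompt: str, register: str, response: str,
                   path: str = "antigravity_output.json") -> str:
    """Write a PIM-processed result in a shape a CI artifact step could upload.

    The schema here is hypothetical; adapt it to the repo's real format.
    """
    if register not in VALID_REGISTERS:
        raise ValueError(f"unknown memory register: {register}")
    payload = {
        "prompt": prompt,
        "register": register,
        "response": response,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False, indent=2)
    return path
```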
- Clone this repository and enter the folder.
- Copy `.env.example` as `.env` and insert your `GEMINI_API_KEY`.
- Install the requirements: `pip install -r requirements.txt`
- Run the agent locally: `python antigravity_live_test.py` (quick test) or `python src/run_antigravity_agent.py "Your prompt" pragmatico`
- Query the vectorized memory: `python src/query_memory.py "PIMs alignment" pragmatico`
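A memory query like the one above presumably ranks stored entries by vector similarity. The following is a generic sketch of that idea; the in-memory storage layout and function names are invented for illustration and are not the code in `src/query_memory.py`.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, memory, k=3):
    """memory: list of (text, vector) pairs; return the k best-matching texts."""
    ranked = sorted(memory, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

In the real pipeline the vectors would come from an embedding model rather than being hand-written as here.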
Keywords: Large Language Models, Pragmatic Alignment, Intent Mismatch, In-Context Learning, RLHF Limitations, Independent Research
- 60% performance degradation in sessions exceeding 40-60 turns
- Intent mismatch disproportionately affects expert users at distribution tails
- Pragmatic Intensity Markers force attentional recalibration in real-time
- Stratified memory architecture (Academic, Pragmatic, Research registers) improves context retention
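The three-register design mentioned above can be pictured as a simple store keyed by register name. This toy class is an assumption about the shape of the architecture, not the paper's implementation.

```python
class StratifiedMemory:
    """Toy model of the Academic / Pragmatic / Research memory registers."""

    REGISTERS = ("academic", "pragmatic", "research")

    def __init__(self):
        # One independent entry list per register, so each register can be
        # queried or retained without contaminating the others.
        self._store = {r: [] for r in self.REGISTERS}

    def write(self, register: str, entry: str) -> None:
        if register not in self._store:
            raise ValueError(f"unknown register: {register}")
        self._store[register].append(entry)

    def read(self, register: str):
        return list(self._store[register])
```

Keeping the registers separate is the point: a pragmatic-session correction never overwrites academic notes, which is one plausible mechanism behind the context-retention improvement claimed above.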
- 📄 `/paper`: Full academic paper (PDF) and citations
- 📎 `/appendices`: Supplementary documentation (Google letter, replication guide, citation audit, rigor tools assessment)
- 📊 `/data`: Telemetry examples and degradation logs
- 📜 `LICENSE`: CC BY 4.0 (open academic use)
@article{kamycki2026pragmatic,
title={Pragmatic Alignment of Large Language Models through Intensity-Weighted Feedback Markers: An Empirical Field Study},
author={Kamycki, Marcelo Omar Lancry},
journal={Zenodo},
doi={10.5281/zenodo.18904438},
url={https://doi.org/10.5281/zenodo.18904438},
year={2026},
note={Independent Research, Buenos Aires, Argentina}
}

This research was conducted as an independent field study without institutional affiliation. We invite:
- Validation: Reproduce methodology with your own LLM interactions
- Refutation: Challenge findings with counter-evidence
- Collaboration: Extend research with controlled experiments
Issue tracker: Report bugs, suggest improvements, or request clarifications
This work is presented with epistemic humility. As an independent researcher without formal doctoral training, I approach this as empirical contribution rather than authoritative claim. The methodology prioritizes pragmatic observation over controlled experimentation, occupying a different epistemological position than traditional academic research.
This work is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
You are free to share and adapt with attribution.
- Google Gemini (research subject and collaborative tool)
- Perplexity Comet (agentic assistance in documentation)
- Open-source academic community
Links:
- Preprint DOI (Zenodo)
- Google Scholar Profile [Update after indexing]
- ORCID Profile