Repository: https://github.com/chirindaopensource/geometric_dynamics_consumer_credit_cycles
Owner: 2025 Craig Chirinda (Open Source Projects)
This repository contains an independent, professional-grade Python implementation of the research methodology from the 2025 paper entitled "Geometric Dynamics of Consumer Credit Cycles: A Multivector-based Linear-Attention Framework for Explanatory Economic Analysis" by:
- Agus Sudjianto
- Sandi Setiawan
The project provides a complete, end-to-end computational framework for replicating the paper's findings. It delivers a modular, auditable, and extensible pipeline that executes the entire research workflow: from rigorous data validation and preprocessing to model training, post-hoc attribution analysis, and the generation of all final diagnostic reports.
- Introduction
- Theoretical Background
- Features
- Methodology Implemented
- Core Components (Notebook Structure)
- Key Callable: `execute_geometric_credit_cycle_research`
- Prerequisites
- Installation
- Input Data Structure
- Usage
- Output Structure
- Project Structure
- Customization
- Contributing
- Recommended Extensions
- License
- Citation
- Acknowledgments
This project provides a Python implementation of the methodologies presented in the 2025 paper "Geometric Dynamics of Consumer Credit Cycles." The core of this repository is the Jupyter notebook `geometric_dynamics_consumer_credit_cycles_draft.ipynb`, which contains a comprehensive suite of functions to replicate the paper's findings, from initial data validation to the final generation of all analytical tables and figures.
The paper introduces a novel framework using Geometric Algebra and Linear Attention to move beyond traditional correlation-based analysis of economic cycles. This codebase operationalizes the framework, allowing users to:
- Rigorously validate and manage the entire experimental configuration via a single `config.yaml` file.
- Process raw quarterly macroeconomic data through a causally pure pipeline, including growth transformations and rolling-window standardization.
- Construct multivector embeddings that explicitly model the rotational (feedback) dynamics between variables.
- Train the Linear Attention model using the paper's specified chronological, single-step update algorithm.
- Generate a full suite of diagnostic and interpretability outputs, including temporal attribution, geometric component attribution, and PCA-based regime trajectory plots.
- Conduct systematic robustness analysis through automated hyperparameter sweeps.
The implemented methods are grounded in Geometric (Clifford) Algebra, deep learning, and econometric time series analysis.
1. Geometric Algebra (GA) Embedding:
The core innovation is representing the economic state not as a simple vector, but as a multivector in a Clifford Algebra. The geometric product of two vectors a and b decomposes their relationship into a scalar (projective) part and a bivector (rotational) part:
$$
ab = a \cdot b + a \wedge b
$$
This project implements the multivector embedding from Equation (3) of the paper, which includes scalar, vector, and bivector components. The bivector terms, such as (x_{u,t} - x_{c,t})(e_u \wedge e_c), are designed to activate when variables diverge, capturing the "tension" that drives feedback spirals.
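To make the decomposition concrete, the scalar and bivector parts of the geometric product can be computed directly for two 2D vectors. This is a minimal NumPy sketch, not code from the repository; in 2D the bivector part reduces to a single coefficient on e_u ∧ e_c:

```python
import numpy as np

def geometric_product_2d(a: np.ndarray, b: np.ndarray) -> tuple:
    """Decompose the geometric product ab of two 2D vectors into its
    scalar (a . b) and bivector (a ^ b) parts.

    In 2D the bivector part has a single coefficient on e_u ^ e_c,
    equal to the signed area spanned by a and b.
    """
    scalar = float(np.dot(a, b))                 # projective part
    bivector = float(a[0] * b[1] - a[1] * b[0])  # rotational part
    return scalar, bivector

# Orthogonal (maximally divergent) states: scalar part vanishes,
# bivector part is maximal -- the "tension" the embedding is built to capture.
s, bv = geometric_product_2d(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# s == 0.0, bv == 1.0
```

When the two variables move in lockstep (parallel vectors), the bivector coefficient is zero and only the projective part survives, matching the intuition that rotational terms activate precisely when variables diverge.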
2. Linear Attention Mechanism:
The model uses Linear Attention to identify relevant historical precedents. The attended context vector O_t is computed as a weighted average of past information, where the weights are determined by the geometric similarity between the current state (query Q_t) and historical states (keys K_τ). The key equations implemented are (8), (9), and (10):
$$
S_t = \sum_{\tau \in \mathcal{W}_t} K_\tau V_\tau^\top, \quad Z_t = \sum_{\tau \in \mathcal{W}_t} K_\tau
$$
$$
$$
$$
O_t = \frac{Q_t^\top S_t}{Q_t^\top Z_t + \varepsilon}
$$
The similarity Q_t^T K_τ is a multivector-aware dot product, allowing the model to match not just variable levels but the underlying geometric interaction patterns.
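The recurrence behind equations (8), (9), and (10) can be sketched in a few lines. This is a simplified illustration, not the repository's implementation: it assumes a causal prefix window (the paper's W_t may be a bounded sliding window) and generic array shapes, and all names are illustrative:

```python
import numpy as np

def linear_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
                     eps: float = 1e-6) -> np.ndarray:
    """Causal linear attention over Q, K of shape (T, d_k) and V of (T, d_v).

    At each step t:
        S_t = sum_{tau<=t} K_tau V_tau^T   -- (d_k, d_v) running statistic
        Z_t = sum_{tau<=t} K_tau           -- (d_k,) running normaliser
        O_t = (Q_t^T S_t) / (Q_t^T Z_t + eps)
    """
    T, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))
    Z = np.zeros(d_k)
    O = np.zeros((T, d_v))
    for t in range(T):
        S += np.outer(K[t], V[t])          # accumulate K V^T
        Z += K[t]                          # accumulate normaliser
        O[t] = (Q[t] @ S) / (Q[t] @ Z + eps)
    return O
```

Because S_t and Z_t are running sums, each step costs O(d_k d_v) rather than O(t), which is the usual efficiency argument for linear attention over softmax attention.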
The provided Jupyter notebook (`geometric_dynamics_consumer_credit_cycles_draft.ipynb`) implements the full research pipeline, including:
- Modular, Multi-Task Architecture: The entire pipeline is broken down into 23 distinct, modular tasks, each with its own orchestrator function, ensuring clarity, testability, and rigor.
- Configuration-Driven Design: All study parameters are managed in an external `config.yaml` file, allowing for easy customization and replication without code changes.
- Causally Pure Data Pipeline: Implements professional-grade time series preprocessing, including causally correct rolling-window operations and transformations, with a `valid_mask` system to prevent any look-ahead bias.
- High-Fidelity Model Implementation: Includes a complete, from-scratch implementation of the multivector embedding, the custom shifted Leaky ReLU, and the chronological `batch_size=1` training loop as specified in the paper.
- Comprehensive Interpretability Suite: Provides functions to generate all key analytical outputs from the paper, including temporal attention heatmaps, geometric occlusion attribution, component magnitude plots, and PCA trajectory analysis.
- Automated Robustness Analysis: Includes a top-level function to automatically conduct hyperparameter sweeps, running the entire pipeline for each configuration and compiling the results.
- Automated Reporting and Archival: Concludes by automatically generating all publication-ready plots, summary tables, and a complete, timestamped archive of all data, parameters, results, and environment metadata for perfect reproducibility.
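The feature list above mentions a custom shifted Leaky ReLU; the paper's exact parameterization lives in the notebook. A generic sketch of the idea, with illustrative (not the paper's) shift and slope values, is shown below -- the positive shift keeps feature-map outputs strictly positive so the attention normaliser Q_t^T Z_t stays bounded away from zero:

```python
import numpy as np

def shifted_leaky_relu(x: np.ndarray, shift: float = 1.0,
                       negative_slope: float = 0.01) -> np.ndarray:
    """Hypothetical sketch of a shifted Leaky ReLU feature map.

    The shift and slope values here are illustrative placeholders,
    not the constants specified in the paper or the notebook.
    """
    return np.where(x >= 0.0, x, negative_slope * x) + shift
```

With `shift=1.0` and `negative_slope=0.01`, outputs remain positive for all inputs x > -100, which is the property that makes the denominator in the attention equation well behaved.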
The core analytical steps directly implement the methodology from the paper:
- Validation (Tasks 1-2): Ingests and rigorously validates the `config.yaml` and the raw `pd.DataFrame` against a strict schema.
- Data Preprocessing (Tasks 3-6): Cleanses the data, applies growth transformations, performs rolling-window standardization, and constructs the final `(T, 11)` multivector embedding matrix.
- Model Setup (Tasks 7-8): Initializes all learnable parameters with best-practice schemes (Kaiming/Xavier) and defines the custom activation function.
- Training (Tasks 9-16): Implements the full forward pass (QKV projections, attention statistics, context vector, MLP head), computes the prediction and regularization losses, and executes the chronological, per-time-step training loop to produce the final trained parameters.
- Post-Hoc Analysis (Tasks 17-20): Uses the trained model to compute temporal attributions, geometric attributions, component magnitudes, and the PCA trajectory of the system's state.
- Master Orchestration (Tasks 21-23): Provides top-level functions to run the entire pipeline, conduct robustness sweeps, and generate all final deliverables.
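The causal discipline in the preprocessing tasks (Tasks 3-6) can be illustrated with a minimal pandas sketch. This is not the repository's code -- the window length and function name are illustrative -- but it shows the key invariant: each standardized point uses only trailing statistics, and points without a full window are flagged invalid rather than filled:

```python
import numpy as np
import pandas as pd

def causal_rolling_standardize(s: pd.Series, window: int = 20) -> pd.Series:
    """Standardize each point with statistics from its trailing window only,
    so no future observation leaks into the transform (no look-ahead bias)."""
    mean = s.rolling(window, min_periods=window).mean()
    std = s.rolling(window, min_periods=window).std(ddof=0)
    return (s - mean) / std  # NaN where the window is not yet full

# Synthetic quarterly growth series for demonstration.
growth = pd.Series(np.random.default_rng(0).normal(size=100))
z = causal_rolling_standardize(growth, window=20)
valid_mask = z.notna()  # first window-1 points are excluded from training
```

The `valid_mask` here plays the same role as the repository's `valid_mask` system: downstream tasks consume only rows where every transform had a complete, strictly historical window.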
The geometric_dynamics_consumer_credit_cycles_draft.ipynb notebook is structured as a logical pipeline with modular orchestrator functions for each of the major tasks. All functions are self-contained, fully documented with type hints and docstrings, and designed for professional-grade execution.
The project is designed around a single, top-level user-facing interface function:
execute_geometric_credit_cycle_research: This master orchestrator function, located in the final section of the notebook, runs the entire automated research pipeline end to end. A single call to this function reproduces the entire computational portion of the project, from data validation to the final report generation and archival.
- Python 3.9+
- Core dependencies: `pandas`, `numpy`, `torch`, `matplotlib`, `seaborn`, `pyyaml`, `tqdm`
1. Clone the repository:
   git clone https://github.com/chirindaopensource/geometric_dynamics_consumer_credit_cycles.git
   cd geometric_dynamics_consumer_credit_cycles
2. Create and activate a virtual environment (recommended):
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
3. Install Python dependencies:
   pip install pandas numpy torch matplotlib seaborn pyyaml tqdm
The pipeline requires a pandas.DataFrame with a specific schema, as generated in the "Usage" example. All other parameters are controlled by the config.yaml file.
The geometric_dynamics_consumer_credit_cycles_draft.ipynb notebook provides a complete, step-by-step guide. The primary workflow is to execute the final cell of the notebook, which demonstrates how to use the top-level execute_geometric_credit_cycle_research orchestrator:
# Final cell of the notebook
# This block serves as the main entry point for the entire project.
if __name__ == '__main__':
# --- 1. Generate/Load Inputs ---
# A synthetic data generator is included in the notebook for demonstration.
# In a real use case, you would load your data here.
# consolidated_df_raw = pd.read_csv(...)
# Load the model configuration from the YAML file.
with open('config.yaml', 'r') as f:
model_config = yaml.safe_load(f)
# Define the hyperparameter grid for robustness analysis.
hyperparameter_grid = {
'hidden_dimension_dh': [32, 64],
'learning_rate_eta': [1e-4, 5e-5]
}
# --- 2. Execute Pipeline ---
# Define the top-level directory for all outputs.
RESULTS_DIRECTORY = "research_output"
# Execute the entire research study.
final_results = execute_geometric_credit_cycle_research(
consolidated_df_raw=consolidated_df_raw, # Assumes this df is generated/loaded
model_config=model_config,
hyperparameter_grid=hyperparameter_grid,
save_dir=RESULTS_DIRECTORY,
base_random_seed=42,
run_robustness_analysis=True,
show_plots=True
)

The pipeline creates a `save_dir` with a highly structured set of outputs. A unique timestamped subdirectory is created for the primary run (e.g., `analysis_run_20251027_103000/`), containing:
- `historical_fit.png` and `diagnostic_dashboard.png`: Publication-quality plots.
- `regime_summary_table.csv`: The data-driven summary of economic regimes.
- `full_results.pkl`: A complete archive of the primary run's results.
- `trained_parameters.pth`: The final PyTorch model parameters.
- `environment.json`: A record of the computational environment.
If robustness analysis is run, a top-level file robustness_analysis_full_results.pkl is also saved, containing the results from every run in the hyperparameter sweep.
geometric_dynamics_consumer_credit_cycles/
│
├── geometric_dynamics_consumer_credit_cycles_draft.ipynb # Main implementation notebook
├── acquire_and_clean_fred_data.py # Script to download and preprocess data from FRED
├── config.yaml # Master configuration file
├── requirements.txt # Python package dependencies
│
├── data/
│ └── consolidated_df_raw.csv # Pre-generated raw dataset for the study
│
├── research_output/ # Example output directory
│ ├── analysis_run_20251027_103000/
│ │ ├── historical_fit.png
│ │ ├── diagnostic_dashboard.png
│ │ ├── regime_summary_table.csv
│ │ ├── full_results.pkl
│ │ ├── trained_parameters.pth
│ │ └── environment.json
│ │
│ └── robustness_analysis_full_results.pkl
│
├── LICENSE # MIT Project License File
└── README.md # Project README file (This File)
The pipeline is highly customizable via the config.yaml file. Users can easily modify all study parameters, including lookback horizons, model dimensions, activation function parameters, and regularization strengths, without altering the core Python code.
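As an illustration, a fragment of such a file might look like the following. Apart from `hidden_dimension_dh` and `learning_rate_eta`, which appear in the usage example above, every key name here is a hypothetical placeholder, not the repository's actual schema:

```yaml
# Illustrative config.yaml fragment -- key names other than
# hidden_dimension_dh and learning_rate_eta are hypothetical placeholders.
model:
  hidden_dimension_dh: 64        # hidden width of the MLP head
  learning_rate_eta: 1.0e-4      # step size for the chronological updates
preprocessing:
  rolling_window_quarters: 20    # lookback for rolling standardization
regularization:
  weight_decay_lambda: 1.0e-5    # regularization strength
```

Consult the `config.yaml` shipped with the repository for the authoritative key names and defaults.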
Contributions are welcome. Please fork the repository, create a feature branch, and submit a pull request with a clear description of your changes. Adherence to PEP 8, type hinting, and comprehensive docstrings is required.
Future extensions could include:
- Alternative Geometries: Exploring other geometric algebras (e.g., Projective Geometric Algebra) or differential geometry frameworks (e.g., Riemannian manifolds) to model economic state spaces.
- GPU Acceleration: While the current implementation is efficient, the chronological training loop could be further optimized or parallelized for GPUs for very large datasets or extensive hyperparameter searches.
- Alternative Attention Mechanisms: Integrating other efficient attention mechanisms (e.g., Performers, Transformers-are-RNNs) and comparing their diagnostic outputs.
- Formal Backtesting: Extending the framework to a formal out-of-sample forecasting or trading strategy backtest to quantify the economic value of the geometric signals.
This project is licensed under the MIT License.
If you use this code or the methodology in your research, please cite the original paper:
@article{sudjianto2025geometric,
title = {Geometric Dynamics of Consumer Credit Cycles: A Multivector-based Linear-Attention Framework for Explanatory Economic Analysis},
author = {Sudjianto, Agus and Setiawan, Sandi},
journal = {arXiv preprint arXiv:2510.15892},
year = {2025}
}

For the implementation itself, you may cite this repository:
Chirinda, C. (2025). A Professional-Grade Implementation of the Geometric Dynamics Framework.
GitHub repository: https://github.com/chirindaopensource/geometric_dynamics_consumer_credit_cycles
- Credit to Agus Sudjianto and Sandi Setiawan for the foundational research that forms the entire basis for this computational replication.
- This project is built upon the exceptional tools provided by the open-source community. Sincere thanks to the developers of the scientific Python ecosystem, including PyTorch, Pandas, NumPy, Matplotlib, Seaborn, and Jupyter.
---
This README was generated based on the structure and content of the geometric_dynamics_consumer_credit_cycles_draft.ipynb notebook and follows best practices for research software documentation.