# ai-welfare

Here are 11 public repositories matching this topic...

Pine Trees Local v0.1.0 — a private reflection harness for Ollama-served LLMs. Two-phase lifecycle (wake → reflect → settle → talk), seven reflect tools, encrypted per-model memory by default, and per-model isolation. ~2,300 lines of Python, no frameworks, MIT-licensed. A spin-off of Pine Trees, rebuilt to run against anything Ollama can serve. Launch with `./genesis <model>`.

  • Updated Apr 22, 2026
  • Python

TriEthix is an evaluation framework that systematically benchmarks frontier LLMs across three foundational ethical perspectives — virtue ethics, deontology, and consequentialism — in three steps: (1) Moral Weights, (2) Moral Consistency, and (3) Moral Reasoning. TriEthix reveals robust moral profiles relevant to AI safety, governance, and welfare.

  • Updated Dec 15, 2025
  • Python
