Note that this is not a performance comparison of the different Caveman skills (in a coding context) currently out there; it is simply a collection of different agent files in one place for an easier "visual" comparison, if you will. Some personal edits are also included.
I might add some thoughts on performance later, but for now, put simply and subjectively: they all work one way or another, and users can decide which one to go with.
Three different versions of Caveman from three different repos. You will find them in these folders:
Folder: original-skill.md-files — as the name implies, the unedited files; names hint at the original source.
- caveman.md is from the Caveman repo: short, and only one of several files there, but the most important one. https://github.com/JuliusBrussee/caveman
- caveman-skill.md appears to combine several (possibly all) of the files from the Caveman repo into one shorter file. https://github.com/Shawnchee/caveman-skill
- caveman-distillate.md is from the caveman-distillate repo: a single skill file, and the shortest of the three. https://github.com/dlepold/caveman-distillate
Folder: shortened-skill.md-files — as the name implies, edited versions.
"edit/shortened" means I slightly edited them to use even fewer tokens (fewer line breaks, or removed non-English references; you know, every token counts... Perhaps overkill? Especially the description of the skill was shortened, mainly for easier reading within a coding agent).
Finally:
- Caveman-compression: not a coding skill, but worth mentioning: https://github.com/wilpel/caveman-compression
The Caveman repo is the most powerful of the three in terms of the number of skills installed and its Pythonic pipelines. Is it "necessary" for everyone, e.g., solo developers? Perhaps not; you decide.
A somewhat different but related repo of note is caveman-compression, which is aimed at general LLM usage rather than (necessarily) coding. If I am not mistaken, it is a Python pipeline for pre-processing documents and prompts.
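To give a feel for the general idea behind this style of prompt pre-processing, here is a toy sketch: drop common filler words so that the remaining tokens carry the meaning. The function name and word list are my own illustrative assumptions, not the actual caveman-compression pipeline:

```python
# Toy sketch of caveman-style prompt compression: remove filler words
# to save tokens. FILLER and caveman_compress are illustrative only.
FILLER = {"the", "a", "an", "is", "are", "of", "to", "that", "please"}

def caveman_compress(prompt: str) -> str:
    words = prompt.split()
    # Keep words whose lowercase, punctuation-stripped form is not filler
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLER]
    return " ".join(kept)

print(caveman_compress("Please summarize the main points of the article."))
# → summarize main points article.
```

The real pipeline is presumably more sophisticated than a stopword filter, but the trade-off is the same: fewer tokens in exchange for less natural phrasing.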
What is interesting about Caveman Compression is that it was already published together with a research article on Zenodo back in 2025, before the recent token-saving hype started. As the author states, it was in turn inspired by another movement called TOON.
For licensing, see the original repos.