Start with the headline doctor · 50 endpoints behind it · Deterministic scoring in <50ms
Score content before you post. ContentForge is a deterministic headline scorer and before-publish quality gate that grades headlines, tweets, LinkedIn posts, and ad copy with an explainable A–F score, actionable suggestions, and a PASSED | REVIEW | FAILED verdict in under 50ms.
Think of it as a digital ruler for content quality. A ruler doesn't need a dataset to tell you something is 12 inches long; it just needs to be correctly calibrated. ContentForge's heuristic engine is that ruler: zero variance on the same input, zero hallucinations, fully auditable open-source logic. AI (Ollama or Gemini) kicks in only for generation endpoints like rewrites, hooks, and subject lines.
```python
import requests

HEADERS = {"X-RapidAPI-Key": "YOUR_KEY", "X-RapidAPI-Host": "contentforge1.p.rapidapi.com"}

r = requests.post("https://contentforge1.p.rapidapi.com/v1/score_tweet",
                  headers=HEADERS,
                  json={"text": "I'm working on a new project."})
# → {"score": 32, "grade": "C", "quality_gate": "FAILED", "suggestions": [...]}

r = requests.post("https://contentforge1.p.rapidapi.com/v1/score_tweet",
                  headers=HEADERS,
                  json={"text": "Got 100 signups in 24 hours 🚀 Here's the copy that converted: #buildinpublic"})
# → {"score": 91, "grade": "A", "quality_gate": "PASSED", "suggestions": []}
```

Start free on RapidAPI: no credit card required, 300 requests/month on BASIC.
| Component | Status | Notes |
|---|---|---|
| ContentForge API | ✅ Live | https://contentforge-api-lpp9.onrender.com |
| RapidAPI Listing | ✅ Public | 50 endpoints, 4-tier pricing |
| Keep-warm cron | ✅ Active | cron-job.org pings /v1/status every 10 min (no LLM call) |
| Gemini backend | ✅ Configured | gemini-2.0-flash on Render (1500 RPD free tier) |
| Ollama local | ✅ Running | Scoring uses zero AI calls (pure heuristics) |
| Twitter bots | ✅ Active | Multi-account state machine, health scoring |
| Legal docs | ✅ Done | docs/TERMS_OF_USE.md, docs/TERMS_AND_CONDITIONS.md |
ContentForge is deterministic today: the same input always produces the same score and audit trail.
Determinism is not the same as validation, though; the engine has not yet been fully validated against outcome data. Calibration is in progress, and the public log lives in docs/validation.md.
Current practical framing:
- Use it as an explainable pre-flight quality gate.
- Treat scores as heuristic guidance, not guaranteed performance prediction.
- Expect weights to keep improving as blind test data comes in.
Calibration and launch tooling lives in:
- docs/calibration_dataset_template.csv
- docs/calibration_dataset_template.json
- scripts/calibrate_content.py
- docs/chrome-extension-readiness.md
- docs/reddit-launch-notes.md
Quick start:
```shell
python scripts/calibrate_content.py \
  --input docs/calibration_dataset_template.csv \
  --report-json docs/calibration_report.json \
  --report-md docs/calibration_report.md \
  --examples-json docs/calibration_examples.json
```

Early launch feedback on Reddit pushed the positioning in a clearer direction:
- The calibration challenge got more useful engagement than the broad feature pitch.
- People understood "same input, same score" quickly, but trusted it more once the proof story was explicit.
- A before/after comparison is easier to believe than a giant endpoint list.
- The Chrome extension and headline workflow are easier to grasp than the full API surface on first contact.
The current landing page and docs now reflect that narrower first impression. Full notes live in docs/reddit-launch-notes.md.
Every LLM-based scorer has the same flaw: ask it to score the same tweet twice and you'll get two different answers. For a professional content workflow, that's not a tool; that's a vibe check.
ContentForge's scoring layer is pure Python heuristics. Same input → same output, every time. The logic is open source; you can read exactly why a post scored 74 and not 83. This is the Deterministic Advantage:
| | ContentForge | LLM-based scoring |
|---|---|---|
| Response time | <50ms | 1–5 seconds |
| Variance on same input | 0% | ~15–30% |
| Explainability | Full: every deduction itemised | Black box |
| Cost per call | Free (heuristics) | $0.001–0.01 per call |
| Self-hostable | ✅ (`python scripts/api_prototype.py`) | Depends on provider |
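To make "fully auditable" concrete, here is a toy version of an itemised heuristic scorer. The rules, weights, and grade cut-offs below are invented for illustration and are NOT ContentForge's actual logic (which lives in the open-source repo); the point is the shape: pure functions, itemised deductions, zero randomness.

```python
# Illustrative sketch of a deterministic, itemised heuristic scorer.
# Rule names, weights, and grade thresholds are invented for this example.

def score_tweet(text: str) -> dict:
    score = 100
    deductions = []
    if len(text) < 30:
        score -= 20
        deductions.append("too short (-20)")
    if "!" not in text and "?" not in text and not any(c.isdigit() for c in text):
        score -= 15
        deductions.append("no hook: no number, question, or exclamation (-15)")
    if "#" not in text:
        score -= 10
        deductions.append("no hashtag (-10)")
    grade = "A" if score >= 90 else "B" if score >= 75 else "C" if score >= 60 else "D"
    return {"score": score, "grade": grade, "deductions": deductions}

# Same input, same output: no temperature, no sampling, nothing to drift.
assert score_tweet("I'm working on a new project.") == score_tweet("I'm working on a new project.")
```

Because every deduction is a named rule, the score explains itself: the audit trail is just the list of rules that fired.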
| Endpoint | What It Does |
|---|---|
| `POST /v1/score_tweet` | Score a tweet 0–100 with grade + quality gate |
| `POST /v1/score_linkedin_post` | Score a LinkedIn post for professional engagement |
| `POST /v1/score_instagram` | Score an Instagram caption for saves and reach |
| `POST /v1/score_youtube_title` | Score a YouTube title for CTR and SEO |
| `POST /v1/score_youtube_description` | Score a YouTube description for watch time |
| `POST /v1/score_email_subject` | Score an email subject line for open rate |
| `POST /v1/score_readability` | Flesch–Kincaid + grade level + suggestions |
| `POST /v1/score_threads` | Score a Threads post |
| `POST /v1/score_facebook` | Score a Facebook post |
| `POST /v1/score_tiktok` | Score a TikTok caption |
| `POST /v1/score_pinterest` | Score a Pinterest pin description |
| `POST /v1/score_reddit` | Score a Reddit post/title |
| `POST /v1/analyze_headline` | Headline power word detection + CTR scoring |
| `POST /v1/analyze_hashtags` | Hashtag strategy audit across platforms |
| `POST /v1/score_content` | Single unified endpoint; pass a `platform` param |
| `GET /v1/analyze_headline` | GET variant for quick headline scoring |
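The unified endpoint lends itself to a small client wrapper. This is a sketch: the `platform` request field follows the table's description, but the accepted platform values (e.g. `"twitter"`) and the exact response keys are assumptions based on the quickstart example, not documented guarantees.

```python
import requests  # same client library as the quickstart example

API = "https://contentforge1.p.rapidapi.com"
HEADERS = {"X-RapidAPI-Key": "YOUR_KEY", "X-RapidAPI-Host": "contentforge1.p.rapidapi.com"}

def score_content(text: str, platform: str, session=requests) -> dict:
    """Call the unified scorer for any platform.

    `platform` values like "twitter" are assumed here; check the RapidAPI
    docs for the accepted list. `session` is injectable so the helper can
    be exercised without hitting the network.
    """
    r = session.post(f"{API}/v1/score_content",
                     headers=HEADERS,
                     json={"text": text, "platform": platform})
    return r.json()

# e.g. score_content("Got 100 signups in 24 hours", "twitter")
```

Injecting the session keeps the wrapper testable and makes it trivial to add retries or timeouts later.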
| Endpoint | What It Does |
|---|---|
| `POST /v1/score_multi` | Score one post across all platforms simultaneously |
| `POST /v1/ab_test` | Head-to-head score comparison of two drafts |
| Endpoint | What It Does |
|---|---|
| `POST /v1/auto_improve` | Score → if not PASSED → AI rewrite → re-score loop (up to 5 iterations); returns best version + full iteration history |
| `POST /v1/compose_assist` | Generate 2–5 rewrite variants, score each, return ranked with quality gates |
| `POST /v1/improve_headline` | Rewrite a weak headline N times, sorted by score |
| `POST /v1/generate_hooks` | Scroll-stopping openers for any topic/style |
| `POST /v1/rewrite` | Rewrite for Twitter, LinkedIn, email, or blog |
| `POST /v1/tweet_ideas` | Tweet ideas for a niche with hashtags |
| `POST /v1/content_calendar` | 7-day content calendar with ready-to-post drafts |
| `POST /v1/thread_outline` | Full Twitter thread: hook + body + CTA close |
| `POST /v1/generate_bio` | Optimised social bio, auto-trimmed to platform limits |
| `POST /v1/generate_ad_copy` | Google/Meta ad copy with CTA and compliance signals |
| `POST /v1/generate_subject_line` | AI email subject line with open-rate optimisation |
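The `/v1/auto_improve` flow in the table above reduces to a plain loop. The scorer and rewriter below are stand-ins (the real endpoint uses the heuristic engine plus an LLM rewrite); only the control flow mirrors the description: score, rewrite while not passed, re-score, keep the best, return the full history.

```python
def auto_improve(text, score_fn, rewrite_fn, threshold=80, max_iters=5):
    """Score → rewrite → re-score until the gate passes or iterations run out.

    Returns the best-scoring version plus the full iteration history.
    `threshold` is an assumed PASSED cut-off for this sketch.
    """
    history = [{"iteration": 0, "text": text, "score": score_fn(text)}]
    best_text, best_score = text, history[0]["score"]
    current = text
    for i in range(1, max_iters + 1):
        if best_score >= threshold:          # quality gate PASSED: stop early
            break
        current = rewrite_fn(current)
        s = score_fn(current)
        history.append({"iteration": i, "text": current, "score": s})
        if s > best_score:
            best_text, best_score = current, s
    return {"best": best_text, "best_score": best_score, "history": history}

# Stand-in functions for illustration only:
demo_score = lambda t: min(100, 40 + 10 * t.count("!"))
demo_rewrite = lambda t: t + "!"
```

Keeping the history makes every returned rewrite auditable in the same way as the scores themselves.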
| Endpoint | What It Does |
|---|---|
| `POST /v1/quality_gate` | Batch PASSED/REVIEW/FAILED verdict for up to 10 posts |
| `GET /v1/platform_friction` | Real-time platform state (rate limits, algo signals) |
| `POST /v1/proof_export` | Export scored posts + engagement delta as proof report |
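The batch gate is essentially a verdict function mapped over scores. The thresholds below (75/50) are illustrative assumptions, not the API's published cut-offs; the batch cap of 10 posts comes from the table above.

```python
def verdict(score: int) -> str:
    # Thresholds assumed for illustration; the real gate's cut-offs
    # live in the open-source heuristic engine.
    if score >= 75:
        return "PASSED"
    if score >= 50:
        return "REVIEW"
    return "FAILED"

def quality_gate(scores: list[int]) -> list[str]:
    if len(scores) > 10:  # the endpoint caps batches at 10 posts
        raise ValueError("quality_gate accepts at most 10 posts per batch")
    return [verdict(s) for s in scores]
```

A three-valued verdict (rather than pass/fail) lets a pipeline auto-publish PASSED drafts, queue REVIEW drafts for a human, and block only FAILED ones.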
| Endpoint | What It Does |
|---|---|
| `GET /health` | Service health: LLM backend, usage stats |
| `GET /v1/status` | Lightweight ping: version, endpoint count |
(Full 50-endpoint list with request/response schemas: RapidAPI docs)
ContentForge runs fully locally with Ollama. No external AI calls needed for scoring.
```shell
git clone https://github.com/CaptainFredric/ContentForge.git
cd ContentForge
pip install -r requirements.txt
python scripts/api_prototype.py
# → Listening on http://localhost:5000
```

What runs locally with zero external calls:
- All 12 platform scorers (deterministic, <50ms)
- Quality gate evaluation (PASSED / REVIEW / FAILED)
- Rate limiting and proof dashboard
What uses Ollama locally or falls back to Gemini:
- Hook generation, rewrites, bio generation, subject lines, ad copy
LLM chain: Ollama first → Gemini 2.5 Flash if Ollama unavailable → model rotation. If Ollama is running locally, nothing leaves your machine for AI calls.
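The fallback chain above can be sketched as an ordered list of providers tried in turn. The provider callables here are stand-ins (the real rotation logic lives in the API service); the stubs simulate Ollama being down so the call falls through to the hosted backend.

```python
def generate_with_fallback(prompt, providers):
    """Try each (name, fn) provider in order; return the first success.

    Mirrors the documented chain: Ollama first, then Gemini.
    """
    errors = []
    for name, fn in providers:
        try:
            return {"provider": name, "text": fn(prompt)}
        except Exception as e:  # provider down or unreachable
            errors.append(f"{name}: {e}")
    raise RuntimeError("all LLM providers failed: " + "; ".join(errors))

def ollama_stub(prompt):
    raise ConnectionError("Ollama not running")  # simulate local daemon down

def gemini_stub(prompt):
    return f"rewritten: {prompt}"                # simulate hosted fallback
```

Collecting per-provider errors means a total failure still reports exactly which links in the chain broke, in order.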
License: AGPL-3.0
| Plan | Price | AI calls/mo | Requests/mo |
|---|---|---|---|
| BASIC | Free | 50 | 300 |
| PRO | $9.99/mo | 750 | 1,000 |
| ULTRA | $29.99/mo | 3,000 | 4,000 |
| MEGA | $99/mo | 18,000 | 20,000 |
All plans include every endpoint. Heuristic scoring calls don't count against your AI quota.
```
scripts/
└── api_prototype.py          # ContentForge Flask API: all 50 endpoints (incl. /v1/auto_improve)
extension/
├── manifest.json             # Chrome extension (Manifest V3)
├── popup.html / popup.js     # Score, compare, rewrite from the toolbar
├── content.js / content.css  # Real-time scoring badge on X, LinkedIn, etc.
└── background.js             # Service worker: API calls + offline fallback
deploy/
├── render.yaml               # Render Blueprint
├── openapi.json              # OpenAPI 3.0.3 spec (50 paths)
└── Procfile                  # Gunicorn start command
docs/
├── ContentForge_API_Documentation.md
└── RapidAPI_GettingStarted.md
```
PRs against main. One feature/fix per PR. Open an issue first. See CONTRIBUTING.md.
GNU Affero General Public License v3.0. See LICENSE.
Early development scaffolding adapted from MoneyPrinterV2 by @DevBySami.