tvplot uses a third-party LLM — either Claude (Anthropic) or GPT (OpenAI), whichever you configure — to analyse TV-episode synopses. This page is the single source of truth for what that means in practice, who sees what, and which policies apply.
Publisher: Alpine Animation (Switzerland) · Contact: tvplot@alpineanimation.ch or open an issue at https://github.com/BirdInTheTree/tvplot/issues
The library and the standalone HTML viewer both run on the user's own machine. Neither talks to any server operated by Alpine Animation.
| What you do | What happens |
|---|---|
| Run `tvplot run my-show/` or the HTML viewer locally | Your episode synopses, your analysis-system choice, and your API key travel directly from your process (terminal or browser) to the LLM provider you picked |
| Results arrive | They are written to disk (CLI) or rendered in your browser (viewer); nothing is uploaded anywhere |
| You export JSON / CSV / FDX / PDF | Files are saved locally; nothing leaves your machine |
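The first row of the table can be sketched in code. This is an illustrative assumption, not tvplot's actual request code: it only shows that the key comes from the user's own environment and that the request would go straight to the provider's public endpoint, with no Alpine Animation server in between. Field names in the payload are hypothetical.

```python
import os

def build_provider_request(synopsis_text: str, provider: str = "anthropic") -> dict:
    """Sketch of a direct client-to-provider request (hypothetical shape).

    The API key is read from the user's own environment variable and
    never passes through any intermediary server.
    """
    key_var = "ANTHROPIC_API_KEY" if provider == "anthropic" else "OPENAI_API_KEY"
    api_key = os.environ.get(key_var, "<unset>")
    return {
        # Real public endpoints for each vendor's API:
        "url": ("https://api.anthropic.com/v1/messages"
                if provider == "anthropic"
                else "https://api.openai.com/v1/chat/completions"),
        "api_key": api_key,                  # stays on the client, sent only to the vendor
        "body": {"input": synopsis_text},    # illustrative field name, not tvplot's wire format
    }

req = build_provider_request("Ep 1: the crew discovers the map.")
```

The point of the sketch is the topology, not the schema: the only two parties to the request are your process and the vendor you configured.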
Alpine Animation collects no data at all. No analytics, no telemetry, no server-side logs — there is no server to log to. Your API key lives only in your own environment or in your browser's localStorage.
Whichever provider you pick, the LLM vendor handles the request under their terms:
- Anthropic — https://www.anthropic.com/privacy · https://www.anthropic.com/legal/aup
- OpenAI — https://openai.com/policies/privacy-policy · https://openai.com/policies/usage-policies/
Both providers are US-based; whichever you pick, the cross-border transfer of your input happens between your machine and that provider, not through us.
Outputs produced by tvplot are generated by large language models. They can contain factual errors, invented plot details that sound plausible, and misreadings of intent. Treat the results as a starting point for your own analysis, not a verified source.
Downstream applications that surface these outputs to other users are responsible for displaying an equivalent notice (EU AI Act Art. 50; Anthropic AUP; OpenAI Usage Policies). The HTML viewer shipped with this repo carries that notice in its footer.
Because Alpine Animation holds no personal data about you, the usual access / rectification / deletion requests don't have anything to act on here. If Anthropic or OpenAI hold data from your use of their API, those requests go to them, not to us. We'll still help route a request to the right place — write to tvplot@alpineanimation.ch (or open a GitHub issue) and we'll point you at the contact channel.
If you believe tvplot itself — the code, the prompts, the docs — handles data in a way this page doesn't describe, that's a bug: please open an issue so we can either fix the code or fix the page.
Use case: structural analysis of fictional TV-episode synopses — extracting plotlines, characters, narrative functions, arc shape. Output is structured JSON and a grid visualisation.
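To make the output description concrete, here is a hedged illustration of what one analysed episode could look like as structured JSON. The field names, enum values, and character names below are hypothetical; the real schema and enums are documented in methodology.md and formulas.md.

```python
import json

# Hypothetical record for one episode; field names and enum values are
# illustrative only, not tvplot's actual output schema.
episode_analysis = {
    "episode": "S01E01",
    "plotlines": [
        {
            "id": "A",
            "characters": ["Mira", "Juno"],   # invented names for illustration
            "narrative_function": "setup",    # illustrative enum value
            "arc_shape": "rising",            # illustrative enum value
        }
    ],
}

print(json.dumps(episode_analysis, indent=2))
```

A record like this is what the grid visualisation would render: one row per plotline, one column per episode.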
Three input paths are supported, all feeding the same pipeline:
- User-supplied `.txt` synopses (CLI: `tvplot run my-show/`).
- Wikipedia/Fandom-scraped raw episode descriptions rewritten into full synopses by the LLM (CLI: `tvplot write-synopses "Show Name"`).
- LLM-generated from its own training memory: the HTML viewer's "Analyze a series" flow asks the configured LLM to produce episode-by-episode synopses for a public show from what it already knows, then lets the user edit them before the pipeline runs.
In paths 2 and 3 the synopsis text itself is AI-generated. The viewer's footer makes the AI involvement explicit; users of path 2 (CLI) can inspect the generated `.txt` files before running analysis.
This use case falls under general content analysis and is not one of the prohibited categories listed in either vendor policy:
- Not weapons-related, not surveillance-of-individuals, not medical or legal advice, not CSAM, not political persuasion, not automated decisions with legal effect on individuals.
- Not deception about AI identity: the HTML viewer's footer names the model and Alpine Animation, and the CLI writes AI-generated synopses to `.txt` files the user reviews before analysis; users always know what came from the model.
- Covered under the Anthropic AUP's allowed categories (content creation, research, analysis).
- Covered under OpenAI Usage Policies for text analysis and content-authoring support.
Reviewed 2026-04-15 against both policies as linked above. Any user deploying this library in a high-risk setting (Annex III of the EU AI Act — law-enforcement, essential public services, employment decisions) must re-review; this analysis is intended for creative-industry and research use.
A plain-language description of the pipeline — what each LLM pass does, what enums and rules the prompts encode, and what the code computes deterministically — lives in `methodology.md` with links into the code. `formulas.md` has the full rule-by-rule breakdown.