
Commit 88ba266

feat: prompt and persist llm key locally

1 parent 780514b

8 files changed

Lines changed: 173 additions & 20 deletions


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ code-explainer/.ci-out/
 code-explainer/.audit_tmp/
 code-explainer/code-explainer-output/
 code-explainer/node_modules/
+code-explainer/.env
 
 # OS/editor
 .DS_Store

PUBLISHING.md

Lines changed: 2 additions & 0 deletions
@@ -56,6 +56,8 @@ Recommended for higher-fidelity diagram rendering:
 
 Even without live LLM access, the proof path still works through `CODE_EXPLAINER_MOCK_LLM=true`. The deterministic Excalidraw scene generator is now the default production path, so the repo does not need to ship the official bridge dependency in its default install surface. If a developer wants to experiment with the official bridge anyway, they can install it locally with `npm install --no-save @excalidraw/mermaid-to-excalidraw` and enable `--enable-official-excalidraw-bridge true`.
 
+For interactive production use, the skill now prompts for a missing LLM key and can persist it into `code-explainer/.env` when the user agrees.
+
 ## GitHub Distribution
 
 Suggested install paths for other developers:

README.md

Lines changed: 5 additions & 1 deletion
@@ -106,7 +106,8 @@ python scripts/analyze.py analyze \
   --enable-excalidraw-export true \
   --enable-official-excalidraw-bridge false \
   --ask-before-llm-use false \
-  --prompt-for-llm-key false \
+  --prompt-for-llm-key true \
+  --persist-llm-key ask \
   --enable-web-enrichment false
 ```
 
@@ -120,6 +121,7 @@ Useful controls:
 - `--audience nontech|mixed|engineering`
 - `--enable-excalidraw-export true|false`
 - `--enable-official-excalidraw-bridge true|false`
+- `--persist-llm-key ask|true|false`
 
 ## Diagram and Excalidraw Behavior
 
@@ -133,6 +135,8 @@ Useful controls:
 - Live model path: set `CODE_EXPLAINER_LLM_API_KEY` or `OPENAI_API_KEY`
 - Optional overrides: `CODE_EXPLAINER_LLM_BASE_URL`, `CODE_EXPLAINER_LLM_MODEL`
 - Offline proof path: set `CODE_EXPLAINER_MOCK_LLM=true`
+- If no key is available, the skill prompts for one by default in an interactive terminal.
+- The user can choose to persist that key into `code-explainer/.env` for future runs.
 
 If live LLM access is unavailable, the skill records that downgrade in `meta/llm_summary.json`.
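The lookup order the README bullets imply (process environment first, then the skill-local `.env`) can be sketched in a few lines. This is an illustrative helper, not code from the repo; `pick_llm_key` is a hypothetical name:

```python
# Sketch of the documented key precedence: environment variables win over
# values stored in code-explainer/.env, and CODE_EXPLAINER_LLM_API_KEY wins
# over OPENAI_API_KEY within each source.
from typing import Dict, Tuple

def pick_llm_key(env: Dict[str, str], local_env: Dict[str, str]) -> Tuple[str, str]:
    """Return (key, source) using the documented precedence, or ("", "")."""
    for source, table in (("environment", env), ("local-env", local_env)):
        for name in ("CODE_EXPLAINER_LLM_API_KEY", "OPENAI_API_KEY"):
            value = table.get(name, "").strip()
            if value:
                return value, f"{source}:{name}"
    return "", ""  # caller decides whether to prompt or fail

# Example: an exported OPENAI_API_KEY wins over a key stored in .env.
key, source = pick_llm_key(
    {"OPENAI_API_KEY": "sk-live"},
    {"CODE_EXPLAINER_LLM_API_KEY": "sk-old"},
)
```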

code-explainer/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+.env

code-explainer/SKILL.md

Lines changed: 8 additions & 3 deletions
@@ -55,6 +55,7 @@ python scripts/analyze.py analyze \
   --enable-official-excalidraw-bridge <true|false> \
   --ask-before-llm-use <true|false> \
   --prompt-for-llm-key <true|false> \
+  --persist-llm-key <ask|true|false> \
   --enable-web-enrichment <true|false>
 ```
 
@@ -69,15 +70,18 @@ Defaults:
 - `enable-excalidraw-export=true`
 - `enable-official-excalidraw-bridge=false`
 - `ask-before-llm-use=false`
-- `prompt-for-llm-key=false`
+- `prompt-for-llm-key=true`
+- `persist-llm-key=ask`
 - `enable-web-enrichment=true`
 
 ## LLM Behavior
 
 - The high-quality path is explanation-first and uses `scripts/llm_describe.py`.
+- The LLM path is the required production path for this skill.
 - If `CODE_EXPLAINER_LLM_API_KEY` or `OPENAI_API_KEY` is set, the skill can use a live model.
-- If live LLM access is unavailable, the pipeline falls back to grounded deterministic wording and records that downgrade in `meta/llm_summary.json`.
-- For proof runs and offline regression tests, set `CODE_EXPLAINER_MOCK_LLM=true` to exercise the full explanation pipeline without network access.
+- If no key is available, the skill should prompt for one when the terminal is interactive.
+- The user can choose to persist the provided key in a local `.env` file for future runs.
+- `CODE_EXPLAINER_MOCK_LLM=true` is only for explicit development or offline test scenarios and is not the normal production path.
 
 ## Workflow
 
@@ -130,6 +134,7 @@ bash ./scripts/install_runtime.sh
 ## Notes
 
 - This skill does not mutate the analyzed repository.
+- The skill may create or update a local `.env` file in the skill directory when the user chooses to persist the prompted LLM key.
 - If the explanation-quality score is below the rubric threshold, treat the output as failed even if files were produced.
 - If Excalidraw export is enabled, treat missing or partial editable scene generation as a real quality issue, not a cosmetic extra.
 - The deterministic local Excalidraw exporter is the canonical production path.
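The `persist-llm-key=ask|true|false` tri-state above can be read as a small decision function. This is a hedged sketch of the semantics only; `should_persist_key` is a hypothetical name, and the real skill answers the "ask" case via an interactive stdin prompt:

```python
# Illustrative tri-state: "false" never persists, "true" always persists
# (when a key exists), and "ask" defers to the user's interactive answer.
def should_persist_key(mode: str, have_key: bool, user_says_yes: bool) -> bool:
    mode = (mode or "ask").strip().lower()
    if not have_key or mode == "false":
        return False
    if mode == "true":
        return True
    return user_says_yes  # mode == "ask"

# Example: default "ask" mode with a user who consents at the prompt.
decision = should_persist_key("ask", have_key=True, user_says_yes=True)
```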

code-explainer/references/output-contract.md

Lines changed: 3 additions & 0 deletions
@@ -151,6 +151,9 @@
 - `caveats[]`
 - `confidence_notes[]`
 - `error`
+- `key_source`
+- `persisted_key`
+- `env_path`
 
 ## Fact-Check Schema
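A consumer of `meta/llm_summary.json` can use the three new contract fields like this. The `summary` dict below is a hand-written stand-in for illustration, not real pipeline output:

```python
# Sketch of reading the new llm_summary.json fields added by this commit:
# key_source, persisted_key, and env_path.
import json

summary = json.loads("""{
  "used": true,
  "error": "",
  "key_source": "local-env:.env",
  "persisted_key": true,
  "env_path": "code-explainer/.env"
}""")

if summary["persisted_key"]:
    note = f"LLM key ({summary['key_source']}) was saved to {summary['env_path']}"
else:
    note = "LLM key was not persisted"
```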

code-explainer/scripts/analyze.py

Lines changed: 17 additions & 2 deletions
@@ -2,6 +2,7 @@
 from __future__ import annotations
 
 import argparse
+import os
 import shutil
 import sys
 import tempfile
@@ -151,7 +152,8 @@ def run_pipeline(
     enable_excalidraw_export: bool,
     enable_official_excalidraw_bridge: bool = False,
     ask_before_llm_use: bool = False,
-    prompt_for_llm_key: bool = False,
+    prompt_for_llm_key: bool = True,
+    persist_llm_key: str = "ask",
     include_globs: List[str] | None = None,
     exclude_globs: List[str] | None = None,
     since: str = "2 weeks ago",
@@ -167,6 +169,15 @@ def run_pipeline(
     common.ensure_dir(output_root / "diagrams" / "png")
     common.ensure_dir(output_root / "html")
 
+    resolved_llm_runtime: Dict[str, Any] | None = None
+    use_mock_llm = common.bool_from_string(os.environ.get("CODE_EXPLAINER_MOCK_LLM", "false"))
+    if enable_llm_descriptions and not use_mock_llm:
+        resolved_llm_runtime = llm_describe.resolve_llm_runtime(
+            prompt_for_key=prompt_for_llm_key,
+            persist_key_mode=persist_llm_key,
+            require_key=True,
+        )
+
     repo_root, should_cleanup, cleanup_root = _resolve_source(source)
     try:
         include_globs = include_globs or []
@@ -239,6 +250,8 @@
         enabled=enable_llm_descriptions,
         ask_before_use=ask_before_llm_use,
         prompt_for_key=prompt_for_llm_key,
+        persist_key_mode=persist_llm_key,
+        resolved_runtime=resolved_llm_runtime,
     )
 
     diagram_manifest = build_diagrams.build_diagrams(
@@ -406,7 +419,8 @@ def _parse_args() -> argparse.Namespace:
     parser.add_argument("--enable-excalidraw-export", default="true")
     parser.add_argument("--enable-official-excalidraw-bridge", default="false")
     parser.add_argument("--ask-before-llm-use", default="false")
-    parser.add_argument("--prompt-for-llm-key", default="false")
+    parser.add_argument("--prompt-for-llm-key", default="true")
+    parser.add_argument("--persist-llm-key", default="ask", choices=["ask", "true", "false"])
     return parser.parse_args()
 
 
@@ -437,6 +451,7 @@ def main() -> int:
         enable_official_excalidraw_bridge=official_excalidraw_bridge_enabled,
         ask_before_llm_use=ask_before_llm_use,
         prompt_for_llm_key=prompt_for_llm_key,
+        persist_llm_key=args.persist_llm_key,
         include_globs=args.include_glob,
         exclude_globs=args.exclude_glob,
         since=args.since,
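The gate `run_pipeline` adds can be sketched on its own: resolve the LLM runtime up front only when LLM descriptions are on and the mock path is off, so key failures surface before any analysis work. This is a minimal stand-in; `early_llm_runtime` and `resolve` are hypothetical names, and `common.bool_from_string` is approximated by a truthy-string set:

```python
# Sketch of the early-resolution gate: mock mode and disabled LLM both skip
# runtime resolution; otherwise resolve() runs (and may prompt or raise).
import os
from typing import Any, Callable, Dict, Optional

def early_llm_runtime(
    enable_llm: bool,
    resolve: Callable[[], Dict[str, Any]],
    environ: Optional[Dict[str, str]] = None,
) -> Optional[Dict[str, Any]]:
    env = environ if environ is not None else dict(os.environ)
    use_mock = env.get("CODE_EXPLAINER_MOCK_LLM", "false").strip().lower() in {"1", "true", "yes"}
    if enable_llm and not use_mock:
        return resolve()
    return None

# Example: mock mode short-circuits resolution entirely.
runtime = early_llm_runtime(True, lambda: {"api_key": "sk-test"}, {"CODE_EXPLAINER_MOCK_LLM": "true"})
```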

code-explainer/scripts/llm_describe.py

Lines changed: 136 additions & 14 deletions
@@ -13,6 +13,8 @@
 from typing import Any, Dict, List, Tuple
 
 SCRIPT_DIR = Path(__file__).resolve().parent
+SKILL_DIR = SCRIPT_DIR.parent
+LOCAL_ENV_PATH = SKILL_DIR / ".env"
 if str(SCRIPT_DIR) not in sys.path:
     sys.path.insert(0, str(SCRIPT_DIR))
 
@@ -47,6 +49,112 @@ def _prompt_api_key() -> str:
     return ""
 
 
+def _confirm_persist_key(env_path: Path) -> bool:
+    answer = input(f"Save the LLM API key to {env_path.as_posix()} for future runs? [y/N]: ").strip().lower()
+    return answer in {"y", "yes"}
+
+
+def _read_local_env(path: Path) -> Dict[str, str]:
+    values: Dict[str, str] = {}
+    if not path.exists():
+        return values
+    for raw in path.read_text(encoding="utf-8", errors="ignore").splitlines():
+        line = raw.strip()
+        if not line or line.startswith("#") or "=" not in line:
+            continue
+        key, value = line.split("=", 1)
+        values[key.strip()] = value.strip().strip('"').strip("'")
+    return values
+
+
+def _write_local_env_key(path: Path, key: str, value: str) -> None:
+    existing_lines: List[str] = []
+    if path.exists():
+        existing_lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
+
+    updated = False
+    next_lines: List[str] = []
+    for raw in existing_lines:
+        if raw.strip().startswith(f"{key}="):
+            next_lines.append(f"{key}={value}")
+            updated = True
+        else:
+            next_lines.append(raw)
+    if not updated:
+        next_lines.append(f"{key}={value}")
+
+    path.write_text("\n".join(next_lines).rstrip() + "\n", encoding="utf-8")
+
+
+def resolve_llm_runtime(
+    prompt_for_key: bool = True,
+    persist_key_mode: str = "ask",
+    require_key: bool = True,
+) -> Dict[str, Any]:
+    model = os.environ.get("CODE_EXPLAINER_LLM_MODEL", DEFAULT_MODEL).strip() or DEFAULT_MODEL
+    base_url = _normalize_base_url(os.environ.get("CODE_EXPLAINER_LLM_BASE_URL", DEFAULT_BASE_URL))
+    local_env = _read_local_env(LOCAL_ENV_PATH)
+    interactive = _is_interactive_terminal()
+    prompted = False
+    persisted = False
+    key_source = ""
+
+    api_key = os.environ.get("CODE_EXPLAINER_LLM_API_KEY", "").strip()
+    if api_key:
+        key_source = "environment:CODE_EXPLAINER_LLM_API_KEY"
+    if not api_key:
+        api_key = os.environ.get("OPENAI_API_KEY", "").strip()
+        if api_key:
+            key_source = "environment:OPENAI_API_KEY"
+    if not api_key:
+        api_key = local_env.get("CODE_EXPLAINER_LLM_API_KEY", "").strip()
+        if api_key:
+            key_source = f"local-env:{LOCAL_ENV_PATH.name}"
+    if not api_key:
+        api_key = local_env.get("OPENAI_API_KEY", "").strip()
+        if api_key:
+            key_source = f"local-env:{LOCAL_ENV_PATH.name}"
+
+    if not api_key and prompt_for_key:
+        prompted = True
+        if not interactive:
+            raise RuntimeError(
+                "No LLM API key was found and this terminal cannot prompt. Set CODE_EXPLAINER_LLM_API_KEY or OPENAI_API_KEY, "
+                f"or add CODE_EXPLAINER_LLM_API_KEY to {LOCAL_ENV_PATH.as_posix()}."
+            )
+        api_key = _prompt_api_key()
+        if not api_key and require_key:
+            raise RuntimeError("LLM API key is required for this skill and was not provided.")
+        key_source = "prompt"
+
+    persist_mode = (persist_key_mode or "ask").strip().lower()
+    should_persist = False
+    if api_key:
+        if persist_mode == "true":
+            should_persist = True
+        elif persist_mode == "ask":
+            should_persist = _confirm_persist_key(LOCAL_ENV_PATH)
+        if should_persist:
+            _write_local_env_key(LOCAL_ENV_PATH, "CODE_EXPLAINER_LLM_API_KEY", api_key)
+            persisted = True
+            key_source = f"local-env:{LOCAL_ENV_PATH.name}"
+
+    if not api_key and require_key:
+        raise RuntimeError(
+            "No LLM API key found. Set CODE_EXPLAINER_LLM_API_KEY or OPENAI_API_KEY, "
+            f"or add CODE_EXPLAINER_LLM_API_KEY to {LOCAL_ENV_PATH.as_posix()}."
+        )
+
+    return {
+        "api_key": api_key,
+        "model": model,
+        "base_url": base_url,
+        "prompted_for_key": prompted,
+        "persisted_key": persisted,
+        "key_source": key_source,
+        "env_path": LOCAL_ENV_PATH.as_posix(),
+    }
+
+
 def _post_json(url: str, api_key: str, payload: Dict[str, Any], timeout: int = 90) -> Tuple[int, str]:
     data = json.dumps(payload).encode("utf-8")
     req = urllib.request.Request(
@@ -108,6 +216,9 @@ def _default_llm_payload(enabled: bool, model: str) -> Dict[str, Any]:
         "diagram_briefs": [],
         "caveats": [],
         "confidence_notes": [],
+        "key_source": "",
+        "persisted_key": False,
+        "env_path": LOCAL_ENV_PATH.as_posix(),
         "error": "",
     }
 
@@ -268,7 +379,9 @@ def generate_llm_descriptions(
     out_dir: Path,
     enabled: bool = True,
     ask_before_use: bool = False,
-    prompt_for_key: bool = False,
+    prompt_for_key: bool = True,
+    persist_key_mode: str = "ask",
+    resolved_runtime: Dict[str, Any] | None = None,
 ) -> Dict[str, Any]:
     del repo_root, index_payload, entry_payload, docs_payload
     model = os.environ.get("CODE_EXPLAINER_LLM_MODEL", DEFAULT_MODEL).strip() or DEFAULT_MODEL
@@ -308,27 +421,34 @@
         common.write_json(out_dir / "llm_summary.json", payload)
         return payload
 
-    api_key = (
-        os.environ.get("CODE_EXPLAINER_LLM_API_KEY", "").strip()
-        or os.environ.get("OPENAI_API_KEY", "").strip()
-    )
-    if not api_key and prompt_for_key:
-        payload["prompted_for_key"] = True
-        if not interactive:
-            payload["error"] = "Prompt-for-key requested but terminal is non-interactive and no key was found."
+    runtime = resolved_runtime
+    if runtime is None:
+        try:
+            runtime = resolve_llm_runtime(
+                prompt_for_key=prompt_for_key,
+                persist_key_mode=persist_key_mode,
+                require_key=True,
+            )
+        except RuntimeError as exc:
+            payload["error"] = str(exc)
             common.write_json(out_dir / "llm_summary.json", payload)
             return payload
-        api_key = _prompt_api_key()
+
+    api_key = str(runtime.get("api_key", "")).strip()
+    payload["prompted_for_key"] = bool(runtime.get("prompted_for_key", False))
+    payload["persisted_key"] = bool(runtime.get("persisted_key", False))
+    payload["key_source"] = str(runtime.get("key_source", "")).strip()
+    payload["env_path"] = str(runtime.get("env_path", LOCAL_ENV_PATH.as_posix()))
+    payload["model"] = str(runtime.get("model", model)).strip() or model
 
     if not api_key:
        payload["error"] = "No API key found (set CODE_EXPLAINER_LLM_API_KEY or OPENAI_API_KEY)."
        common.write_json(out_dir / "llm_summary.json", payload)
        return payload
 
-    base_url = _normalize_base_url(os.environ.get("CODE_EXPLAINER_LLM_BASE_URL", DEFAULT_BASE_URL))
-    endpoint = f"{base_url}/chat/completions"
+    endpoint = f"{str(runtime.get('base_url', DEFAULT_BASE_URL)).rstrip('/')}/chat/completions"
     request_payload = {
-        "model": model,
+        "model": payload["model"],
         "temperature": 0.15,
         "response_format": {"type": "json_object"},
         "messages": _request_messages(request_context),
@@ -394,7 +514,8 @@ def main() -> int:
     parser.add_argument("--output", required=True)
     parser.add_argument("--enabled", default="true")
     parser.add_argument("--ask-before-use", default="false")
-    parser.add_argument("--prompt-for-key", default="false")
+    parser.add_argument("--prompt-for-key", default="true")
+    parser.add_argument("--persist-key", default="ask")
     args = parser.parse_args()
 
     payload = generate_llm_descriptions(
@@ -415,6 +536,7 @@ def main() -> int:
         enabled=common.bool_from_string(args.enabled),
         ask_before_use=common.bool_from_string(args.ask_before_use),
         prompt_for_key=common.bool_from_string(args.prompt_for_key),
+        persist_key_mode=args.persist_key,
     )
     print(json.dumps({"used": payload.get("used", False), "provider": payload.get("provider", ""), "error": payload.get("error", "")}, indent=2))
     return 0
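The `.env` read/update helpers this commit adds can be exercised stand-alone. The sketch below mirrors the behavior of `_read_local_env` and `_write_local_env_key` against a temp file (update-in-place if the key exists, append otherwise); `read_env` and `write_env_key` are illustrative names, not the repo's:

```python
# Round-trip sketch of the .env helpers: write a key, overwrite it in place,
# then parse the file back into a dict (comments and quotes handled).
import tempfile
from pathlib import Path
from typing import Dict

def read_env(path: Path) -> Dict[str, str]:
    values: Dict[str, str] = {}
    if not path.exists():
        return values
    for raw in path.read_text(encoding="utf-8").splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        values[key.strip()] = value.strip().strip('"').strip("'")
    return values

def write_env_key(path: Path, key: str, value: str) -> None:
    lines = path.read_text(encoding="utf-8").splitlines() if path.exists() else []
    updated = False
    out = []
    for ln in lines:
        if ln.strip().startswith(f"{key}="):
            out.append(f"{key}={value}")  # overwrite existing entry in place
            updated = True
        else:
            out.append(ln)
    if not updated:
        out.append(f"{key}={value}")  # append when the key is new
    path.write_text("\n".join(out).rstrip() + "\n", encoding="utf-8")

with tempfile.TemporaryDirectory() as tmp:
    env_path = Path(tmp) / ".env"
    write_env_key(env_path, "CODE_EXPLAINER_LLM_API_KEY", "sk-first")
    write_env_key(env_path, "CODE_EXPLAINER_LLM_API_KEY", "sk-second")  # update, not duplicate
    stored = read_env(env_path)
```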
