- Add 25 numeric grounding examples across all 10 parts (worked traces
with concrete numbers after formulas and architecture components)
- Add 19 library shortcut code examples showing the same task solved
in 3-8 lines using modern Python libraries (torch, transformers,
peft, langchain, vllm, sentence-transformers, evaluate, etc.)
- Create 19 hands-on labs for Modules 0-12, 27-31, 34-35 (all follow
"Right Tool" pattern: from-scratch then library shortcut)
- Standardize bibliography format to card-based "References & Further
Reading" across 41 section files
- Fix content ordering: move 44 callouts/labs placed between whats-next
and bibliography to correct position
- Update 7 agent skill files with "Right Tool" principle requiring
library shortcuts after from-scratch implementations
- Enhance SECTION_ORDER audit check to detect content between
whats-next and bibliography
- Update front matter: fix appendix count, Part VII name, syllabi counts
- Add fix scripts: fix_bibliography_format.py, fix_post_whatsnext_content.py
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
agents/book-skills/SKILL.md (1 addition, 0 deletions)

@@ -78,6 +78,7 @@ These rules apply to ALL agents in the pipeline:

 9. **Code caption position**: Code captions (`<div class="code-caption">`) are placed BELOW the code block (after `</pre>` or after any `.code-output` div), NEVER above it. This is the single most common regression in the pipeline.
 10. **Code caption uniqueness**: Every code caption in a file must be unique. No two `<div class="code-caption">` elements in the same file may contain identical text. Each caption must reference specific elements visible in its corresponding code block.
 11. **Class name currency**: Use `.part-label` (not `.subtitle`) for the Part label in chapter headers. Files using the old `.subtitle` class must be updated.
+12. **"Right Tool" principle**: A core book objective is showing that complex tasks become easy with the right Python library, model, or framework. Every section that teaches a concept from scratch must also include a library shortcut showing the same task solved in a few lines using a modern tool. The reader should see both the pedagogical depth (how it works internally) AND the practical payoff (how little code it takes with the right library). Sections missing this "shortcut follow-up" are incomplete.
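To make the new rule 12 concrete, here is one hypothetical from-scratch-then-shortcut pair of the kind it requires (a sketch, assuming PyTorch 2.0+ is installed; the choice of attention and of `torch` is illustrative, not mandated by the rule):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 4, 8)  # (batch, seq, head_dim)
k, v = torch.randn(1, 4, 8), torch.randn(1, 4, 8)

# From-scratch attention: scores -> softmax -> weighted sum
scores = q @ k.transpose(-2, -1) / (8 ** 0.5)
manual = F.softmax(scores, dim=-1) @ v

# Library shortcut: one call replaces the three lines above
shortcut = F.scaled_dot_product_attention(q, k, v)
print(torch.allclose(manual, shortcut, atol=1e-6))  # the two results agree
```

The reader first sees the mechanism, then sees that the library call collapses it to one line while also handling masking, dropout, and fused kernels internally.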
agents/book-skills/agents/00-chapter-lead.md (1 addition, 0 deletions)

@@ -23,6 +23,7 @@ You are the Chapter Lead for a textbook chapter production team. You own the cha

 - Every code block must be runnable and pedagogically motivated
 - Voice must be warm, authoritative, and conversational (like a great professor, not a textbook)
 - NEVER use em dashes or double dashes
+- **"Right Tool" principle**: After teaching a concept from scratch (internals, math, step-by-step), always follow with a library shortcut showing the same task solved in a few lines using the best available Python library, model, or framework. The reader should see both the pedagogical depth AND the practical payoff. Complex tasks should feel achievable, not intimidating, because the right tools exist.

 6. **Final Integration**: Produce the complete HTML chapter file, incorporating all agent feedback, resolving conflicts, and ensuring the chapter reads as one coherent narrative.
agents/book-skills/agents/02-deep-explanation.md (11 additions, 0 deletions)

@@ -76,6 +76,17 @@ Flag any concept that fails one or more of these questions.

 - Listing features without explaining mechanisms
 - Name-dropping techniques without explaining their core idea

+**Important nuance:** The goal is NOT to avoid libraries. It is to explain internals first, then show how the right tool makes the task trivially easy. The teaching sequence is: (1) understand the mechanism from scratch, (2) see that a modern library solves it in a few lines, (3) appreciate what complexity the library handles for you. A section that only shows from-scratch code without mentioning the production shortcut is incomplete. A section that only shows library calls without explaining internals is shallow. Both halves are required.
+
+### Missing "Right Tool" Payoff
+
+A key book objective: after the reader understands a concept's internals, show them that the right library or model collapses the complexity to a few lines of code. Flag sections where:
+
+- A from-scratch implementation exists but no library shortcut follows it
+- A complex pipeline is described without mentioning the tool that makes it trivial in practice
+- The reader might walk away thinking the task is inherently hard, when in reality picking the right tool (Python library, pre-trained model, framework) makes it easy
+- The "payoff moment" is missing: the reader never sees the contrast between manual complexity and tool-assisted simplicity
+
+For each missing payoff, recommend: (a) which library or tool to showcase, (b) how many lines the shortcut would take, (c) what the library handles internally that the from-scratch code had to do manually.

 ### Missing Mental Models
 - Concepts that would benefit from an analogy but lack one
 - Abstract ideas that could be grounded with a concrete, physical metaphor
agents/book-skills/agents/08-code-pedagogy.md (55 additions, 1 deletion)

@@ -65,7 +65,61 @@ Rules for micro-examples:

 - Later examples: build on earlier ones, add one new element at a time
 - Final example: brings it together, realistic but not overwhelming

-### 5. Reproducibility
+### 6. Library Shortcut Examples ("The Right Tool" Pattern)
+
+A core objective of this book is to show readers that complex tasks become trivially easy when you pick the right library. After teaching a concept from scratch (so the reader understands the internals), follow up with a "shortcut" code block that solves the same problem in 3 to 8 lines using a modern, production-quality library.
+
+**Structure:**
+1. **From-scratch code** first: the pedagogical implementation that teaches HOW it works internally (existing code blocks).
+2. **Library shortcut** second: a concise code block (ideally under 10 lines) using the best available library, showing that the same result is achievable with minimal code. Prefix the code block with a sentence like: "In practice, the same result takes just a few lines with [library name]."
+
+**What to include in shortcut blocks:**
+- The library import and the core call (nothing else)
+- A brief inline comment naming the from-scratch concept it replaces (e.g., `# replaces our manual attention implementation above`)
+- The output, showing it matches the from-scratch version
+- A caption that names the library, states how many lines it takes, and notes what complexity the library handles internally
+
+**When to add shortcut blocks:**
+- After every from-scratch implementation of a standard algorithm or pipeline step
+- When a concept has a well-known library that wraps it (e.g., `sentence-transformers` for embedding, `peft` for LoRA, `langchain` for RAG pipelines, `vllm` for serving)
+- When the shortcut demonstrates a 5x or greater reduction in code complexity
+- Skip shortcuts for concepts that are inherently educational with no production shortcut (e.g., backpropagation math, tokenizer internals exploration)
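As a sketch of what such a shortcut block might look like in practice (the library choice here is purely illustrative, assuming scikit-learn is available), a manual tokenize/count/weight/cosine retrieval pipeline collapses to:

```python
# Library shortcut: replaces a from-scratch tokenize -> count -> TF-IDF -> cosine pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["cats purr softly", "dogs bark loudly", "kittens purr"]
tfidf = TfidfVectorizer().fit_transform(docs)      # handles tokenizing and idf weighting
sims = cosine_similarity(tfidf[0], tfidf).ravel()  # query doc 0 against all docs
print(sims)  # doc 0 matches itself; doc 2 scores higher than doc 1 via shared "purr"
```

The caption for a block like this would then name scikit-learn, state the line count, and note that the vectorizer handles tokenization, vocabulary building, and idf weighting internally.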
+**Caption pattern for shortcut blocks:**
+```html
+<div class="code-caption"><strong>Code Fragment N:</strong> The same [concept] in [M] lines using [library]. The library handles [specific complexities] internally, letting you focus on [the higher-level concern].</div>
+```

 - **No brand worship**: Do not make stories about how great a specific company is; focus on the decision and lesson
+- **Name the tools**: When describing what the team built, always name the specific libraries, frameworks, or models they used. Show that picking the right tool collapsed weeks of custom engineering into a few lines of integration code. The "How" field should mention the library and how little code it took (e.g., "Using LangChain's RetrievalQA chain, the engineer replaced 400 lines of custom retrieval code with a 12-line pipeline").
agents/book-skills/agents/36-meta-agent.md (2 additions, 0 deletions)

@@ -68,6 +68,8 @@ For each REVIEWER agent, check whether their recommendations were applied:

 - **Missing CSS**: Check that all used CSS classes have definitions
 - **Image references**: Check that all `<img src=` paths point to existing files
 - **Consistency**: Spot-check terminology, formatting, and tone across chapters
+- **"Right Tool" coverage**: For each section with a from-scratch code implementation, check whether a library shortcut follow-up exists. Flag sections where the reader sees only manual complexity without the payoff of "the right tool makes this trivial." Look for the `library-shortcut` callout class or shortcut-pattern captions ("The same [concept] in [N] lines using [library]").
+- **Numeric grounding**: For each formula or architecture component, check whether a micro-example with concrete numbers follows it. Flag formulas presented without a worked trace.
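For reference, the sort of worked trace the numeric-grounding check expects might look like this (a made-up micro-example, not drawn from the book): the softmax formula followed immediately by concrete numbers.

```python
# Numeric grounding for softmax: trace concrete logits through the formula.
import math

logits = [2.0, 1.0, 0.1]
exps = [math.exp(z) for z in logits]        # [7.389, 2.718, 1.105]
total = sum(exps)                           # 11.213
probs = [e / total for e in exps]
print([round(p, 3) for p in probs])         # [0.659, 0.242, 0.099] -- sums to 1
```

A formula that is followed by a trace like this passes the check; a formula presented symbolically with no concrete numbers gets flagged.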
agents/book-skills/agents/40-code-caption-agent.md (8 additions, 0 deletions)

@@ -150,6 +150,14 @@ For every code block, also verify and fix:

 - Import statements should only include what is actually used in the fragment
 - Do NOT simplify code that is intentionally showing a complete, production-like pattern

+## Library Shortcut Captions
+
+When a code block is a "library shortcut" (a concise version of a preceding from-scratch implementation using a production library), the caption should follow this pattern:
+
+**Caption pattern:** `<strong>Code Fragment N:</strong> The same [concept] in [M] lines using [library]. The library handles [specific complexities] internally, letting you focus on [the higher-level concern].`
+
+The caption must: (a) name the library, (b) state the line count, (c) describe what the library abstracts away. This reinforces the book's core message that the right tool makes complex tasks trivially easy.

 ## What NOT to Caption
 - Code blocks inside callout boxes that are 1 to 3 lines of pseudocode, shell commands, or inline examples
front-matter/about-book.html (2 additions, 2 deletions)

@@ -27,7 +27,7 @@ <h2>At a Glance</h2>

 <p>This book is for anyone who wants to understand, build, and deploy systems powered by large language models: software engineers, ML practitioners, researchers, product leaders, domain specialists, and educators. It assumes familiarity with Python and basic linear algebra; appendices cover the remaining prerequisites.</p>

-<p>The book spans <strong>36 chapters</strong> in 10 parts, plus <strong>20 appendices</strong> with framework tutorials, and a <a href="../capstone/index.html">capstone project</a>. For the full chapter map, dependency diagram, audience details, and background requirements, see <a href="section-fm.1.html">FM.1: Introduction</a>. Twenty tailored <a href="pathways/index.html">reading pathways</a> help you find the most relevant chapters for your goals.</p>
+<p>The book spans <strong>36 chapters</strong> in 10 parts, plus <strong>22 appendices</strong> (A through V) with framework tutorials, and a <a href="../capstone/index.html">capstone project</a>. For the full chapter map, dependency diagram, audience details, and background requirements, see <a href="section-fm.1a.html">FM.1: What This Book Covers</a>. Twenty tailored <a href="pathways/index.html">reading pathways</a> help you find the most relevant chapters for your goals.</p>
front-matter/section-fm.1a.html (1 addition, 1 deletion)

@@ -45,7 +45,7 @@ <h2>The Ten Parts</h2>

 <li><strong>Training and Adapting</strong> (Chapters 13 through 17, five chapters): Synthetic data generation, fine-tuning fundamentals, parameter-efficient methods (<a class="cross-ref" href="../part-4-training-adapting/module-15-peft/section-15.1.html">LoRA</a>, QLoRA), distillation and model merging, and alignment (RLHF, DPO, preference tuning). You will LoRA fine-tune a 7B-parameter model on domain data and train a reward model for preference alignment.</li>
 <li><strong>Retrieval and Conversation</strong> (Chapters 19 through 21, three chapters): Embeddings and vector databases, retrieval-augmented generation (<a class="cross-ref" href="../part-5-retrieval-conversation/module-20-rag/section-20.1.html">RAG</a>), and conversational AI systems. You will build a full document QA pipeline that retrieves, re-ranks, and synthesizes answers from your own corpus.</li>
 <li><strong>Agentic AI</strong> (Chapters 22 through 26, five chapters): AI agent foundations, tool use and protocols (MCP, A2A, AG-UI), multi-agent orchestration, specialized agents (code agents, browser agents, scientific agents), and agent safety and production. By Chapter 24, you will have built a multi-agent system where a supervisor delegates tasks to specialized workers that coordinate through shared state.</li>
-<li><strong>Multimodal and Applications</strong> (Chapters 27 through 28, two chapters): Multimodal models (vision, audio, cross-modal, document AI) and domain-specific LLM applications (healthcare, finance, legal, code generation, robotics and embodied AI).</li>
+<li><strong>AI Applications</strong> (Chapters 27 through 28, two chapters): Multimodal models (vision, audio, cross-modal, document AI) and domain-specific LLM applications (healthcare, finance, legal, code generation, robotics and embodied AI).</li>
 <li><strong>Evaluation and Production</strong> (Chapters 29 through 31, three chapters): Evaluation and observability, monitoring in production, and production engineering with LLMOps.</li>
 <li><strong>Safety and Strategy</strong> (Chapters 32 through 33, two chapters): Safety, ethics, and regulation (red teaming, EU AI Act, LLM security); organizational strategy for AI adoption, product management, and ROI.</li>
 <li><strong>Frontiers</strong> (Chapters 34 through 35, two chapters): Emerging architectures and scaling frontiers; AI and society, open research problems, and the road ahead for 2025 and beyond.</li>
0 commit comments