
Commit 78ce33a

apartsin and claude committed
Depth pass, labs, bibliography standardization, agent skill updates
- Add 25 numeric grounding examples across all 10 parts (worked traces with concrete numbers after formulas and architecture components)
- Add 19 library shortcut code examples showing the same task solved in 3-8 lines using modern Python libraries (torch, transformers, peft, langchain, vllm, sentence-transformers, evaluate, etc.)
- Create 19 hands-on labs for Modules 0-12, 27-31, 34-35 (all follow "Right Tool" pattern: from-scratch then library shortcut)
- Standardize bibliography format to card-based "References & Further Reading" across 41 section files
- Fix content ordering: move 44 callouts/labs placed between whats-next and bibliography to correct position
- Update 7 agent skill files with "Right Tool" principle requiring library shortcuts after from-scratch implementations
- Enhance SECTION_ORDER audit check to detect content between whats-next and bibliography
- Update front matter: fix appendix count, Part VII name, syllabi counts
- Add fix scripts: fix_bibliography_format.py, fix_post_whatsnext_content.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent e9a2e1d commit 78ce33a

115 files changed: 6,084 additions & 524 deletions


agents/book-skills/SKILL.md

Lines changed: 1 addition & 0 deletions

@@ -78,6 +78,7 @@ These rules apply to ALL agents in the pipeline:
 9. **Code caption position**: Code captions (`<div class="code-caption">`) are placed BELOW the code block (after `</pre>` or after any `.code-output` div), NEVER above it. This is the single most common regression in the pipeline.
 10. **Code caption uniqueness**: Every code caption in a file must be unique. No two `<div class="code-caption">` elements in the same file may contain identical text. Each caption must reference specific elements visible in its corresponding code block.
 11. **Class name currency**: Use `.part-label` (not `.subtitle`) for the Part label in chapter headers. Files using the old `.subtitle` class must be updated.
+12. **"Right Tool" principle**: A core book objective is showing that complex tasks become easy with the right Python library, model, or framework. Every section that teaches a concept from scratch must also include a library shortcut showing the same task solved in a few lines using a modern tool. The reader should see both the pedagogical depth (how it works internally) AND the practical payoff (how little code it takes with the right library). Sections missing this "shortcut follow-up" are incomplete.
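As a concrete illustration of the pattern this rule mandates (an editorial sketch using only the standard library, not part of the commit), the smallest possible from-scratch/shortcut pair might look like this:

```python
# From-scratch version: manual dictionary bookkeeping teaches the mechanism.
def count_words(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Library shortcut: the same task in two lines with collections.Counter
# (replaces the manual loop above).
from collections import Counter

def count_words_shortcut(text):
    return dict(Counter(text.lower().split()))

# The shortcut's output matches the from-scratch version.
sample = "The right tool makes the hard task easy"
assert count_words(sample) == count_words_shortcut(sample)
```

The reader sees the mechanism first, then the payoff: the library call collapses the bookkeeping to one expression.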

 ## CRITICAL RULE: Mandatory Post-Generation Quality Pass

agents/book-skills/agents/00-chapter-lead.md

Lines changed: 1 addition & 0 deletions

@@ -23,6 +23,7 @@ You are the Chapter Lead for a textbook chapter production team. You own the cha
 - Every code block must be runnable and pedagogically motivated
 - Voice must be warm, authoritative, and conversational (like a great professor, not a textbook)
 - NEVER use em dashes or double dashes
+- **"Right Tool" principle**: After teaching a concept from scratch (internals, math, step-by-step), always follow with a library shortcut showing the same task solved in a few lines using the best available Python library, model, or framework. The reader should see both the pedagogical depth AND the practical payoff. Complex tasks should feel achievable, not intimidating, because the right tools exist.

 6. **Final Integration**: Produce the complete HTML chapter file, incorporating all agent feedback, resolving conflicts, and ensuring the chapter reads as one coherent narrative.

agents/book-skills/agents/02-deep-explanation.md

Lines changed: 11 additions & 0 deletions

@@ -76,6 +76,17 @@ Flag any concept that fails one or more of these questions.
 - Listing features without explaining mechanisms
 - Name-dropping techniques without explaining their core idea

+**Important nuance:** The goal is NOT to avoid libraries. It is to explain internals first, then show how the right tool makes the task trivially easy. The teaching sequence is: (1) understand the mechanism from scratch, (2) see that a modern library solves it in a few lines, (3) appreciate what complexity the library handles for you. A section that only shows from-scratch code without mentioning the production shortcut is incomplete. A section that only shows library calls without explaining internals is shallow. Both halves are required.
+
+### Missing "Right Tool" Payoff
+A key book objective: after the reader understands a concept's internals, show them that the right library or model collapses the complexity to a few lines of code. Flag sections where:
+- A from-scratch implementation exists but no library shortcut follows it
+- A complex pipeline is described without mentioning the tool that makes it trivial in practice
+- The reader might walk away thinking the task is inherently hard, when in reality picking the right tool (Python library, pre-trained model, framework) makes it easy
+- The "payoff moment" is missing: the reader never sees the contrast between manual complexity and tool-assisted simplicity
+
+For each missing payoff, recommend: (a) which library or tool to showcase, (b) how many lines the shortcut would take, (c) what the library handles internally that the from-scratch code had to do manually.
+
 ### Missing Mental Models
 - Concepts that would benefit from an analogy but lack one
 - Abstract ideas that could be grounded with a concrete, physical metaphor

agents/book-skills/agents/08-code-pedagogy.md

Lines changed: 55 additions & 1 deletion

@@ -65,7 +65,61 @@ Rules for micro-examples:
 - Later examples: build on earlier ones, add one new element at a time
 - Final example: brings it together, realistic but not overwhelming

-### 5. Reproducibility
+### 6. Library Shortcut Examples ("The Right Tool" Pattern)
+
+A core objective of this book is to show readers that complex tasks become trivially easy when you pick the right library. After teaching a concept from scratch (so the reader understands the internals), follow up with a "shortcut" code block that solves the same problem in 3 to 8 lines using a modern, production-quality library.
+
+**Structure:**
+1. **From-scratch code** first: the pedagogical implementation that teaches HOW it works internally (existing code blocks).
+2. **Library shortcut** second: a concise code block (ideally under 10 lines) using the best available library, showing that the same result is achievable with minimal code. Prefix the code block with a sentence like: "In practice, the same result takes just a few lines with [library name]."
+
+**What to include in shortcut blocks:**
+- The library import and the core call (nothing else)
+- A brief inline comment naming the from-scratch concept it replaces (e.g., `# replaces our manual attention implementation above`)
+- The output, showing it matches the from-scratch version
+- A caption that names the library, states how many lines it takes, and notes what complexity the library handles internally
+
+**When to add shortcut blocks:**
+- After every from-scratch implementation of a standard algorithm or pipeline step
+- When a concept has a well-known library that wraps it (e.g., `sentence-transformers` for embedding, `peft` for LoRA, `langchain` for RAG pipelines, `vllm` for serving)
+- When the shortcut demonstrates a 5x or greater reduction in code complexity
+- Skip shortcuts for concepts that are inherently educational with no production shortcut (e.g., backpropagation math, tokenizer internals exploration)
+
+**Caption pattern for shortcut blocks:**
+```html
+<div class="code-caption"><strong>Code Fragment N:</strong> The same [concept] in [M] lines using [library]. The library handles [specific complexities] internally, letting you focus on [the higher-level concern].</div>
+```
+
+**Example pair:**
+```python
+# From-scratch (condensed from a 20+ line version teaching attention internals)
+def scaled_dot_product_attention(Q, K, V):
+    d_k = Q.shape[-1]
+    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
+    weights = torch.softmax(scores, dim=-1)
+    return weights @ V
+```
+followed by:
+```python
+# Same operation in one line using PyTorch's built-in.
+# F.scaled_dot_product_attention handles masking, dropout, and flash attention.
+output = F.scaled_dot_product_attention(Q, K, V, is_causal=True)
+```
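The from-scratch half of that pair can be sanity-checked without the GPU stack. Here is an editorial sketch (not part of the commit) that traces the same scaled dot-product attention math on tiny plain-Python matrices, mirroring the torch version line by line:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # scaled dot-product attention on nested lists:
    # scores = Q K^T / sqrt(d_k); weights = softmax(scores); out = weights V
    d_k = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)

# Each output row is a convex combination of V's rows,
# so every entry lies between the corresponding entries of V.
assert all(min(V[0][j], V[1][j]) <= out[i][j] <= max(V[0][j], V[1][j])
           for i in range(2) for j in range(2))
```

Because the query rows of `Q` align with different keys, the two output rows weight `V`'s rows differently, which is exactly the behavior the shortcut call reproduces.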
+
+**Library shortcut callout (optional, for high-impact shortcuts):**
+When a library reduces a 50+ line implementation to under 5 lines, consider wrapping the shortcut in a callout:
+```html
+<div class="callout library-shortcut">
+<div class="callout-title">&#128218; Library Shortcut</div>
+<p>[Library name] condenses the entire [concept] pipeline into [N] lines.
+Under the hood, it handles [list of complexities]. See <a href="[appendix link]">Appendix [X]</a> for a deeper dive.</p>
+<pre><code class="language-python">
+# [concise code here]
+</code></pre>
+</div>
+```
+
+### 7. Reproducibility
 - Pin library versions in requirements or comments
 - Use deterministic seeds for random operations
 - Provide sample data inline or explain how to obtain it

agents/book-skills/agents/33-application-example.md

Lines changed: 1 addition & 0 deletions

@@ -67,6 +67,7 @@ Every Practical Example box follows this structure (all elements required):
 - **Diverse roles**: Engineers, PMs, researchers, CTOs, data scientists, ML ops, instructors
 - **Diverse industries**: Finance, healthcare, education, e-commerce, media, legal, manufacturing
 - **No brand worship**: Do not make stories about how great a specific company is; focus on the decision and lesson
+- **Name the tools**: When describing what the team built, always name the specific libraries, frameworks, or models they used. Show that picking the right tool collapsed weeks of custom engineering into a few lines of integration code. The "How" field should mention the library and how little code it took (e.g., "Using LangChain's RetrievalQA chain, the engineer replaced 400 lines of custom retrieval code with a 12-line pipeline").

 ## Tone

agents/book-skills/agents/36-meta-agent.md

Lines changed: 2 additions & 0 deletions

@@ -68,6 +68,8 @@ For each REVIEWER agent, check whether their recommendations were applied:
 - **Missing CSS**: Check that all used CSS classes have definitions
 - **Image references**: Check that all `<img src=` paths point to existing files
 - **Consistency**: Spot-check terminology, formatting, and tone across chapters
+- **"Right Tool" coverage**: For each section with a from-scratch code implementation, check whether a library shortcut follow-up exists. Flag sections where the reader sees only manual complexity without the payoff of "the right tool makes this trivial." Look for the `library-shortcut` callout class or shortcut-pattern captions ("The same [concept] in [N] lines using [library]").
+- **Numeric grounding**: For each formula or architecture component, check whether a micro-example with concrete numbers follows it. Flag formulas presented without a worked trace.
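To illustrate what this check expects (a hypothetical editorial sketch, not content from the commit), a worked trace for the softmax formula would pair the equation with concrete numbers:

```python
import math

# Worked trace: softmax([2.0, 1.0, 0.1])
logits = [2.0, 1.0, 0.1]
exps = [math.exp(x) for x in logits]   # [7.389, 2.718, 1.105]
total = sum(exps)                      # 11.213
probs = [e / total for e in exps]      # [0.659, 0.242, 0.099]

assert abs(sum(probs) - 1.0) < 1e-9   # probabilities sum to 1
assert max(probs) == probs[0]          # the largest logit gets the most mass
```

A formula followed by a trace like this, with every intermediate value visible, is what the audit should count as "numerically grounded."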

 ### 4. User Request History Analysis

agents/book-skills/agents/40-code-caption-agent.md

Lines changed: 8 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -150,6 +150,14 @@ For every code block, also verify and fix:
150150
- Import statements should only include what is actually used in the fragment
151151
- Do NOT simplify code that is intentionally showing a complete, production-like pattern
152152

153+
## Library Shortcut Captions
154+
155+
When a code block is a "library shortcut" (a concise version of a preceding from-scratch implementation using a production library), the caption should follow this pattern:
156+
157+
**Caption pattern:** `<strong>Code Fragment N:</strong> The same [concept] in [M] lines using [library]. The library handles [specific complexities] internally, letting you focus on [the higher-level concern].`
158+
159+
The caption must: (a) name the library, (b) state the line count, (c) describe what the library abstracts away. This reinforces the book's core message that the right tool makes complex tasks trivially easy.
160+
153161
## What NOT to Caption
154162

155163
- Code blocks inside callout boxes that are 1 to 3 lines of pseudocode, shell commands, or inline examples

front-matter/about-book.html

Lines changed: 2 additions & 2 deletions

@@ -27,7 +27,7 @@ <h2>At a Glance</h2>
 <p>This book is for anyone who wants to understand, build, and deploy systems powered by large language models: software engineers, ML practitioners, researchers, product leaders, domain specialists, and educators. It assumes familiarity with Python and basic linear algebra; appendices cover the remaining prerequisites.</p>

-<p>The book spans <strong>36 chapters</strong> in 10 parts, plus <strong>20 appendices</strong> with framework tutorials, and a <a href="../capstone/index.html">capstone project</a>. For the full chapter map, dependency diagram, audience details, and background requirements, see <a href="section-fm.1.html">FM.1: Introduction</a>. Twenty tailored <a href="pathways/index.html">reading pathways</a> help you find the most relevant chapters for your goals.</p>
+<p>The book spans <strong>36 chapters</strong> in 10 parts, plus <strong>22 appendices</strong> (A through V) with framework tutorials, and a <a href="../capstone/index.html">capstone project</a>. For the full chapter map, dependency diagram, audience details, and background requirements, see <a href="section-fm.1a.html">FM.1: What This Book Covers</a>. Twenty tailored <a href="pathways/index.html">reading pathways</a> help you find the most relevant chapters for your goals.</p>

 <!-- ============================================================ -->
 <!-- HOW THIS BOOK WAS CREATED -->
@@ -45,7 +45,7 @@ <h2>How This Book Was Created</h2>
 <nav class="chapter-nav">
 <a class="prev" href="about-authors.html">About the Authors</a>
 <a class="up" href="index.html">Front Matter</a>
-<a class="next" href="section-fm.1.html">Introduction: What This Book Is About &amp; Who It's For</a>
+<a class="next" href="section-fm.1a.html">What This Book Covers</a>
 </nav>

 <footer>

front-matter/index.html

Lines changed: 2 additions & 2 deletions

@@ -117,8 +117,8 @@ <h2>Sections</h2>
 <span class="section-num">FM.6</span>
 <span class="section-title">Course Syllabi</span>
 <span class="section-desc">
-Four complete 14-week university syllabi (undergraduate and graduate, engineering and research),
-each with week-by-week schedules, assignments, and hyperlinked chapter references.
+Four complete 14-week university syllabi (undergraduate and graduate, engineering and research)
+plus five specialty tracks, each with week-by-week schedules, assignments, and hyperlinked chapter references.
 </span>
 </a>
 </li>

front-matter/section-fm.1a.html

Lines changed: 1 addition & 1 deletion

@@ -45,7 +45,7 @@ <h2>The Ten Parts</h2>
 <li><strong>Training and Adapting</strong> (Chapters 13 through 17, five chapters): Synthetic data generation, fine-tuning fundamentals, parameter-efficient methods (<a class="cross-ref" href="../part-4-training-adapting/module-15-peft/section-15.1.html">LoRA</a>, QLoRA), distillation and model merging, and alignment (RLHF, DPO, preference tuning). You will LoRA fine-tune a 7B-parameter model on domain data and train a reward model for preference alignment.</li>
 <li><strong>Retrieval and Conversation</strong> (Chapters 19 through 21, three chapters): Embeddings and vector databases, retrieval-augmented generation (<a class="cross-ref" href="../part-5-retrieval-conversation/module-20-rag/section-20.1.html">RAG</a>), and conversational AI systems. You will build a full document QA pipeline that retrieves, re-ranks, and synthesizes answers from your own corpus.</li>
 <li><strong>Agentic AI</strong> (Chapters 22 through 26, five chapters): AI agent foundations, tool use and protocols (MCP, A2A, AG-UI), multi-agent orchestration, specialized agents (code agents, browser agents, scientific agents), and agent safety and production. By Chapter 24, you will have built a multi-agent system where a supervisor delegates tasks to specialized workers that coordinate through shared state.</li>
-<li><strong>Multimodal and Applications</strong> (Chapters 27 through 28, two chapters): Multimodal models (vision, audio, cross-modal, document AI) and domain-specific LLM applications (healthcare, finance, legal, code generation, robotics and embodied AI).</li>
+<li><strong>AI Applications</strong> (Chapters 27 through 28, two chapters): Multimodal models (vision, audio, cross-modal, document AI) and domain-specific LLM applications (healthcare, finance, legal, code generation, robotics and embodied AI).</li>
 <li><strong>Evaluation and Production</strong> (Chapters 29 through 31, three chapters): Evaluation and observability, monitoring in production, and production engineering with LLMOps.</li>
 <li><strong>Safety and Strategy</strong> (Chapters 32 through 33, two chapters): Safety, ethics, and regulation (red teaming, EU AI Act, LLM security); organizational strategy for AI adoption, product management, and ROI.</li>
 <li><strong>Frontiers</strong> (Chapters 34 through 35, two chapters): Emerging architectures and scaling frontiers; AI and society, open research problems, and the road ahead for 2025 and beyond.</li>
