.ai/AGENTS.md (3 additions, 8 deletions)

@@ -10,10 +10,6 @@ Strive to write code as simple and explicit as possible.
 ---
-### Dependencies
-- No new mandatory dependency without discussion (e.g. `einops`)
-- Optional deps guarded with `is_X_available()` and a dummy in `utils/dummy_*.py`
-
 ## Code formatting
 - `make style` and `make fix-copies` should be run as the final step before opening a PR

@@ -23,11 +19,10 @@ Strive to write code as simple and explicit as possible.
 - Remove the header to intentionally break the link

 ### Models
-- All layer calls should be visible directly in `forward` — avoid helper functions that hide `nn.Module` calls.
-- Avoid graph breaks for `torch.compile` compatibility — do not insert NumPy operations, or any other patterns that can break `torch.compile` compatibility with `fullgraph=True`, into forward implementations.
-- See the **model-integration** skill for the attention pattern, pipeline rules, test setup instructions, and other important details.
+- See [models.md](models.md) for conventions, attention pattern, implementation rules, dependencies, and gotchas.
+- See the [model-integration](./skills/model-integration/SKILL.md) skill for the full integration workflow, file structure, test setup, and other details.

 ## Skills

 Task-specific guides live in `.ai/skills/` and are loaded on demand by AI agents.
-Available skills: **model-integration** (adding/converting pipelines), **parity-testing** (debugging numerical parity).
+Available skills: [model-integration](./skills/model-integration/SKILL.md) (adding/converting pipelines), [parity-testing](./skills/parity-testing/SKILL.md) (debugging numerical parity).

.ai/models.md (new file)

Shared reference for model-related conventions, patterns, and gotchas.
Linked from `AGENTS.md`, `skills/model-integration/SKILL.md`, and `review-rules.md`.

## Coding style

- All layer calls should be visible directly in `forward` — avoid helper functions that hide `nn.Module` calls.
- Avoid graph breaks for `torch.compile` compatibility — do not insert NumPy operations, or any other patterns that can break `torch.compile` compatibility with `fullgraph=True`, into forward implementations.
- No new mandatory dependency without discussion (e.g. `einops`). Optional deps are guarded with `is_X_available()` and a dummy in `utils/dummy_*.py`.

## Common diffusers conventions

- Models use `ModelMixin` with `register_to_config` for config serialization
- Pipelines inherit from `DiffusionPipeline`
- Schedulers use `SchedulerMixin` with `ConfigMixin`
- Use `@torch.no_grad()` on pipeline `__call__`
- Support `output_type="latent"` for skipping VAE decode
- Support a `generator` parameter for reproducibility
- Use `self.progress_bar(timesteps)` for progress tracking
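
Taken together, a minimal skeleton following these conventions might look like the sketch below. It is not a real diffusers pipeline: the class name, the component set, and the latent shape are assumptions for illustration, while `register_modules`, `progress_bar`, and the scheduler `step(...).prev_sample` API are the actual interfaces.

```python
from typing import Optional

import torch

from diffusers import DiffusionPipeline


class MyPipeline(DiffusionPipeline):  # hypothetical pipeline for illustration
    def __init__(self, transformer, scheduler, vae):
        super().__init__()
        # register_modules wires components into the config for save/load.
        self.register_modules(transformer=transformer, scheduler=scheduler, vae=vae)

    @torch.no_grad()  # inference only -- without this, autograd state accumulates and can OOM
    def __call__(
        self,
        num_inference_steps: int = 50,
        generator: Optional[torch.Generator] = None,  # reproducibility
        output_type: str = "pil",
    ):
        latents = torch.randn((1, 4, 64, 64), generator=generator)  # illustrative shape
        self.scheduler.set_timesteps(num_inference_steps)
        for t in self.progress_bar(self.scheduler.timesteps):  # built-in progress tracking
            noise_pred = self.transformer(latents, t)
            latents = self.scheduler.step(noise_pred, t, latents).prev_sample
        if output_type == "latent":
            return latents  # skip VAE decode
        return self.vae.decode(latents).sample  # real pipelines wrap this in an output class
```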

## Attention pattern

Attention must follow the diffusers pattern: both the `Attention` class and its processor are defined in the model file. The processor's `__call__` handles the actual compute and must use `dispatch_attention_fn` rather than calling `F.scaled_dot_product_attention` directly. The attention class inherits `AttentionModuleMixin` and declares `_default_processor_cls` and `_available_processors`. Consult the implementations in `src/diffusers/models/transformers/` if you need further references.
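
A rough sketch of the pattern is below. The class and processor names are hypothetical, and the import paths for `AttentionModuleMixin` and `dispatch_attention_fn` reflect the current layout as best remembered, so verify both against the source tree before copying.

```python
import torch
import torch.nn as nn

from diffusers.models.attention import AttentionModuleMixin
from diffusers.models.attention_dispatch import dispatch_attention_fn


class MyAttnProcessor:  # hypothetical processor; defined next to the attention class
    def __call__(self, attn: "MyAttention", hidden_states: torch.Tensor) -> torch.Tensor:
        # Project, then split heads into (batch, seq, heads, head_dim).
        query = attn.to_q(hidden_states).unflatten(-1, (attn.heads, -1))
        key = attn.to_k(hidden_states).unflatten(-1, (attn.heads, -1))
        value = attn.to_v(hidden_states).unflatten(-1, (attn.heads, -1))
        # dispatch_attention_fn routes to the configured attention backend
        # instead of calling F.scaled_dot_product_attention directly.
        out = dispatch_attention_fn(query, key, value)
        return attn.to_out(out.flatten(2, 3))


class MyAttention(nn.Module, AttentionModuleMixin):
    _default_processor_cls = MyAttnProcessor
    _available_processors = [MyAttnProcessor]

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)
        self.set_processor(self._default_processor_cls())

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.processor(self, hidden_states)
```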

## Implementation rules

1. **Don't combine structural changes with behavioral changes.** Restructuring code to fit diffusers APIs (`ModelMixin`, `ConfigMixin`, etc.) is unavoidable. But don't also "improve" the algorithm, refactor computation order, or rename internal variables for aesthetics. Keep numerical logic as close to the reference as possible, even if it looks unclean. For standard → modular conversions, this is stricter: copy loop logic verbatim and only restructure into blocks. Clean up in a separate commit after parity is confirmed.
2. **Pipelines must inherit from `DiffusionPipeline`.** Consult the implementations in `src/diffusers/pipelines` if you need references.
3. **Don't subclass an existing pipeline for a variant.** Do not subclass an existing pipeline class (e.g., `FluxPipeline`) to implement another pipeline (e.g., `FluxImg2ImgPipeline`) that will be part of the core codebase (`src`).
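
A sketch of what rule 3 forbids and allows; the class names reuse the rule's own example, and the bodies are elided:

```python
from diffusers import DiffusionPipeline


# Wrong for core (`src`): implementing a variant by subclassing another pipeline.
# class FluxImg2ImgPipeline(FluxPipeline):
#     ...


# Right: the variant is a standalone class inheriting only DiffusionPipeline.
class FluxImg2ImgPipeline(DiffusionPipeline):
    ...
```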

## Gotchas

1. **Forgetting `__init__.py` lazy imports.** Every new class must be registered in the appropriate `__init__.py` with lazy imports. Missing this causes an `ImportError` that only shows up when users try `from diffusers import YourNewClass`.
2. **Using `einops` or other non-PyTorch deps.** Reference implementations often use `einops.rearrange`. Always rewrite with native PyTorch (`reshape`, `permute`, `unflatten`); see the sketch after this list. Don't add the dependency. If a dependency is truly unavoidable, guard its import: `if is_my_dependency_available(): import my_dependency`.
3. **Missing `make fix-copies` after `# Copied from`.** If you add `# Copied from` annotations, you must run `make fix-copies` to propagate them. CI will fail otherwise.
4. **Wrong `_supports_cache_class` / `_no_split_modules`.** These class attributes control KV caching and device placement. Copy from a similar model and verify -- wrong values cause silent correctness bugs or OOM errors.
5. **Missing `@torch.no_grad()` on pipeline `__call__`.** Forgetting this causes GPU OOM from gradient accumulation during inference.
6. **Config serialization gaps.** Every `__init__` parameter in a `ModelMixin` subclass must be captured by `register_to_config`. If you add a new param but forget to register it, `from_pretrained` will silently use the default instead of the saved value (see the sketch after this list).
7. **Forgetting to update `_import_structure` and `_lazy_modules`.** The top-level `src/diffusers/__init__.py` has both -- missing either one causes partial import failures.
8. **Hardcoded dtype in model forward.** Don't hardcode `torch.float32` or `torch.bfloat16` in the model's forward pass. Use the dtype of the input tensors or `self.dtype` so the model works with any precision.
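
To ground gotchas 2 and 8, here is a minimal sketch. The module, shapes, and names are hypothetical; the point is the `einops.rearrange`-to-native-ops mapping and the dtype handling.

```python
import torch
import torch.nn as nn


class PatchifyExample(nn.Module):
    """Hypothetical module illustrating gotchas 2 and 8."""

    def __init__(self, channels: int = 4, patch_size: int = 2, dim: int = 64):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(channels * patch_size * patch_size, dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Reference used: einops.rearrange(x, "b c (h p) (w q) -> b (h w) (c p q)", p=p, q=p)
        b, c, height, width = hidden_states.shape
        p = self.patch_size
        x = hidden_states.unflatten(2, (height // p, p)).unflatten(4, (width // p, p))
        x = x.permute(0, 2, 4, 1, 3, 5)  # -> (b, h, w, c, p, q)
        x = x.reshape(b, (height // p) * (width // p), c * p * p)
        # Gotcha 8: follow the module's dtype rather than hardcoding torch.float32.
        return self.proj(x.to(self.proj.weight.dtype))
```

For gotcha 6, the registration pattern looks like this. The model is hypothetical; `register_to_config` itself is the real diffusers decorator.

```python
from diffusers import ModelMixin
from diffusers.configuration_utils import ConfigMixin, register_to_config


class MyTransformer2DModel(ModelMixin, ConfigMixin):
    # Every argument of the decorated __init__ is captured into config.json.
    @register_to_config
    def __init__(self, dim: int = 64, num_layers: int = 4, new_flag: bool = False):
        super().__init__()
        self.dim = dim
        self.num_layers = num_layers
        # A parameter added to __init__ but left out of the decorated signature
        # would silently fall back to its default in from_pretrained().
        self.new_flag = new_flag
```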

.ai/skills/model-integration/SKILL.md (2 additions, 76 deletions)

@@ -65,89 +65,15 @@ docs/source/en/api/
 - [ ] Run `make style` and `make quality`
 - [ ] Test parity with reference implementation (see `parity-testing` skill)

-### Attention pattern
-
-Attention must follow the diffusers pattern: both the `Attention` class and its processor are defined in the model file. The processor's `__call__` handles the actual compute and must use `dispatch_attention_fn` rather than calling `F.scaled_dot_product_attention` directly. The attention class inherits `AttentionModuleMixin` and declares `_default_processor_cls` and `_available_processors`.
-Consult the implementations in `src/diffusers/models/transformers/` if you need further references.
+### Model conventions, attention pattern, and implementation rules
-### Implementation rules
-
-1. **Don't combine structural changes with behavioral changes.** Restructuring code to fit diffusers APIs (ModelMixin, ConfigMixin, etc.) is unavoidable. But don't also "improve" the algorithm, refactor computation order, or rename internal variables for aesthetics. Keep numerical logic as close to the reference as possible, even if it looks unclean. For standard → modular, this is stricter: copy loop logic verbatim and only restructure into blocks. Clean up in a separate commit after parity is confirmed.
-2. **Pipelines must inherit from `DiffusionPipeline`.** Consult implementations in `src/diffusers/pipelines` in case you need references.
-3. **Don't subclass an existing pipeline for a variant.** DO NOT use an existing pipeline class (e.g., `FluxPipeline`) to override another pipeline (e.g., `FluxImg2ImgPipeline`) which will be a part of the core codebase (`src`).
+See [../../models.md](../../models.md) for the attention pattern, implementation rules, common conventions, dependencies, and gotchas. These apply to all model work.

 ### Test setup

 - Slow tests gated with `@slow` and `RUN_SLOW=1`
 - All model-level tests must initially be written using the `BaseModelTesterConfig`, `ModelTesterMixin`, `MemoryTesterMixin`, `AttentionTesterMixin`, `LoraTesterMixin`, and `TrainingTesterMixin` classes; any additional tests should be added after discussion with the maintainers. Use `tests/models/transformers/test_models_transformer_flux.py` as a reference.
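
The slow-test gating can be sketched as follows. The decorator and the `RUN_SLOW` switch are real diffusers test utilities; the test name and body are illustrative.

```python
from diffusers.utils.testing_utils import slow


@slow  # collected but skipped unless the suite runs with RUN_SLOW=1
def test_reference_checkpoint_parity():
    # Load the published checkpoint and compare outputs against the
    # reference implementation here (see the parity-testing skill).
    ...
```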
-### Common diffusers conventions
-
-- Pipelines inherit from `DiffusionPipeline`
-- Models use `ModelMixin` with `register_to_config` for config serialization
-- Schedulers use `SchedulerMixin` with `ConfigMixin`
-- Use `@torch.no_grad()` on pipeline `__call__`
-- Support `output_type="latent"` for skipping VAE decode
-- Support `generator` parameter for reproducibility
-- Use `self.progress_bar(timesteps)` for progress tracking
-
-## Gotchas
-
-1. **Forgetting `__init__.py` lazy imports.** Every new class must be registered in the appropriate `__init__.py` with lazy imports. Missing this causes `ImportError` that only shows up when users try `from diffusers import YourNewClass`.
-2. **Using `einops` or other non-PyTorch deps.** Reference implementations often use `einops.rearrange`. Always rewrite with native PyTorch (`reshape`, `permute`, `unflatten`). Don't add the dependency. If a dependency is truly unavoidable, guard its import: `if is_my_dependency_available(): import my_dependency`.
-3. **Missing `make fix-copies` after `# Copied from`.** If you add `# Copied from` annotations, you must run `make fix-copies` to propagate them. CI will fail otherwise.
-4. **Wrong `_supports_cache_class` / `_no_split_modules`.** These class attributes control KV cache and device placement. Copy from a similar model and verify -- wrong values cause silent correctness bugs or OOM errors.
-5. **Missing `@torch.no_grad()` on pipeline `__call__`.** Forgetting this causes GPU OOM from gradient accumulation during inference.
-6. **Config serialization gaps.** Every `__init__` parameter in a `ModelMixin` subclass must be captured by `register_to_config`. If you add a new param but forget to register it, `from_pretrained` will silently use the default instead of the saved value.
-7. **Forgetting to update `_import_structure` and `_lazy_modules`.** The top-level `src/diffusers/__init__.py` has both -- missing either one causes partial import failures.
-8. **Hardcoded dtype in model forward.** Don't hardcode `torch.float32` or `torch.bfloat16` in the model's forward pass. Use the dtype of the input tensors or `self.dtype` so the model works with any precision.