Commit 826ded7

Merge pull request #62 from AstroAI-Lab/all-contributors
update contributors and docs
2 parents bc55ae5 + 3037c0f commit 826ded7

3 files changed: 32 additions & 3 deletions

.all-contributorsrc

Lines changed: 28 additions & 0 deletions
@@ -48,6 +48,34 @@
         "talk",
         "mentoring"
       ]
+    },
+    {
+      "login": "lorenzobranca",
+      "name": "Lorenzo Branca",
+      "avatar_url": "https://avatars.githubusercontent.com/u/57775402?v=4",
+      "profile": "https://github.com/lorenzobranca",
+      "contributions": [
+        "data",
+        "ideas",
+        "research",
+        "test",
+        "talk"
+      ]
+    },
+    {
+      "login": "Immi000",
+      "name": "Immanuel Sulzer",
+      "avatar_url": "https://avatars.githubusercontent.com/u/100942429?v=4",
+      "profile": "https://github.com/Immi000",
+      "contributions": [
+        "code",
+        "content",
+        "data",
+        "design",
+        "doc",
+        "infra",
+        "userTesting"
+      ]
     }
   ]
 }

docs/source/guides/running-benchmarks/modalities.md

Lines changed: 3 additions & 2 deletions
@@ -37,7 +37,7 @@ Interpolation studies remove every _n_-th timestep, forcing the surrogate to rec
 align: center
 alt: Interpolation modality example
 ---
-Interpolation MAE over time for several interval widths. Wider gaps create bigger spikes but also highlight which surrogates remain stable.
+Interpolation MAE over time for several interval widths. When the distance between time steps increases, one can often observe error spikes between them.
 ```
 
 ## Extrapolation
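The interpolation modality described in the hunk above removes every _n_-th timestep before training. A minimal sketch of that thinning step (illustrative only; the function name and array layout are assumptions, not the CODES API):

```python
import numpy as np

def thin_timesteps(trajectory: np.ndarray, n: int) -> tuple[np.ndarray, np.ndarray]:
    """Keep every n-th timestep of a (timesteps, quantities) array.

    Returns the kept timestep indices and the thinned trajectory, so a
    surrogate can be trained on the coarse grid and its MAE evaluated
    on the removed points in between.
    """
    kept = np.arange(0, trajectory.shape[0], n)
    return kept, trajectory[kept]

# 100 timesteps, 4 quantities per timestep
traj = np.random.rand(100, 4)
kept_idx, sparse_traj = thin_timesteps(traj, n=5)
print(sparse_traj.shape)  # (20, 4)
```

Evaluating on the indices not in `kept_idx` is what produces the per-timestep error curves shown in the figure.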
@@ -61,11 +61,12 @@ Sparse training reduces the number of observations before fitting, emulating lim
 align: center
 alt: Sparse modality example
 ---
-Down-sampling trajectories shows how MAE changes with fewer observations; FCNN tends to degrade earlier than the latent models.
+Reducing the number of samples (sets of trajectories) shows how MAE changes with fewer observations, revealing how efficiently the model extracts information from the data and whether more training samples could improve the surrogate's performance.
 ```
 
 ## Batch scaling
 
 Batch scaling sweeps different batch sizes and records how accuracy/timing behave. This is useful to identify sweet spots for throughput without impacting convergence too heavily. Combine the results with the `timing` evaluation to compare throughput across surrogates.
 
 See the :doc:`configuration reference </reference/configuration>` for the exact YAML schema and defaults.
+
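The batch-scaling paragraph above sweeps batch sizes and records timing. A minimal sketch of such a sweep, using a matrix multiply as a stand-in for the surrogate's forward pass (everything here is illustrative; none of these names come from the CODES benchmark):

```python
import time
import numpy as np

def time_batch_sizes(n_samples: int, batch_sizes: list[int]) -> dict[int, float]:
    """Measure wall-clock time for one pass over the data at each batch size."""
    x = np.random.rand(n_samples, 64)
    w = np.random.rand(64, 64)
    results = {}
    for bs in batch_sizes:
        start = time.perf_counter()
        for i in range(0, n_samples, bs):
            _ = x[i:i + bs] @ w  # stand-in for a surrogate forward/backward pass
        results[bs] = time.perf_counter() - start
    return results

timings = time_batch_sizes(4096, [32, 128, 512])
print(timings)
```

In the real benchmark one would also track accuracy per batch size, since the throughput sweet spot only matters if convergence is not impacted too heavily.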

docs/source/index.rst

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ CODES Benchmark
 * **Extend the Stack** — :doc:`guides/extending-benchmark` shows how to add datasets or surrogates without rewriting orchestration glue.
 * **API Reference** — :doc:`api-reference` explains how the generated package docs are organized and links to each module.
 
-Looking for a bird’s-eye view first? Start with the **User Guide**. Already configuring experiments or integrating your own model? Skip ahead to the **API Reference**. Either way, the sidebar mirrors the sections below so you are one click away from the next step.
+Looking for a bird’s-eye view first? Start with the **User Guide**. Already configuring experiments or integrating your own model? Skip ahead to the **API Reference**. The sidebar mirrors the sections below so you are one click away from the next step.
 
 .. toctree::
    :maxdepth: 2
