
Commit 7e25f76

Merge pull request #55 from AstroAI-Lab/all-contributors
Add contributors section and update README with All Contributors badge
2 parents 3ff151a + 64b8d36 commit 7e25f76

3 files changed: 125 additions & 16 deletions


.all-contributorsrc

Lines changed: 4 additions & 0 deletions
```json
{
  "projectName": "CODES-Benchmark",
  "projectOwner": "AstroAI-Lab"
}
```
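For context, `.all-contributorsrc` is the configuration file read by the all-contributors tooling; as contributors are recorded, it typically gains a `contributors` array alongside the project fields above. A hypothetical example of a populated file (the `example-user` entry is illustrative only, not from this repository):

```json
{
  "projectName": "CODES-Benchmark",
  "projectOwner": "AstroAI-Lab",
  "contributors": [
    {
      "login": "example-user",
      "name": "Example User",
      "profile": "https://github.com/example-user",
      "contributions": ["code", "doc"]
    }
  ]
}
```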

CONTRIBUTING.md

Lines changed: 97 additions & 0 deletions
# Contributing to CODES Benchmark

Thanks for helping improve CODES Benchmark. Contributions of all sizes are welcome, including bug fixes, new features, documentation updates, and benchmark extensions.

## What to contribute

Common contribution types in this repository are:

- Adding a new dataset
- Adding a new surrogate model
- Adding or improving benchmark evaluations and modalities

For extension-specific implementation details, follow the dedicated guide:

- https://astroai-lab.de/CODES-Benchmark/guides/extending-benchmark.html

## Development setup

Use either `uv` (recommended) or a standard `venv` + `pip` setup.

### Option A: uv (recommended)

```bash
git clone https://github.com/robin-janssen/CODES-Benchmark.git
cd CODES-Benchmark
uv sync
source .venv/bin/activate
uv pip install --group dev
```
### Option B: venv + pip

```bash
git clone https://github.com/robin-janssen/CODES-Benchmark.git
cd CODES-Benchmark
python -m venv .venv
source .venv/bin/activate
pip install -e .
pip install -r requirements.txt
pip install black isort pytest sphinx
```
## Code style and formatting

This repository uses:

- [Black](https://black.readthedocs.io/en/stable/) for formatting
- [isort](https://pycqa.github.io/isort/) for import sorting

The CI pipeline auto-applies both tools on contributions. Running them locally is still recommended so you get fast feedback before opening or updating a pull request.

Run both locally with:

```bash
black .
isort .
```
If you use pre-commit, install the hooks once and run them across the repository:

```bash
pre-commit install
pre-commit run --all-files
```
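The repository's actual hook configuration is not shown in this diff; a minimal `.pre-commit-config.yaml` wiring up Black and isort might look like the following sketch (the `rev` values are placeholders — pin whatever versions the project actually uses):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0    # placeholder version
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2    # placeholder version
    hooks:
      - id: isort
```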
## Tests and documentation checks

Before submitting a PR, run:

```bash
pytest
sphinx-build -b html docs/source docs/_build/html
```

If your change affects user-facing behavior, update the relevant docs in `docs/source/`.
## Standard contribution workflow

1. Create an issue (bug report, feature request, or extension proposal), or choose an open issue to work on.
2. Create a branch from `main` with a clear name.
3. Implement your changes and include tests/docs updates where relevant.
4. Run formatting, tests, and the docs build locally.
5. Open a pull request and link the issue.
6. Address review feedback and keep your branch up to date.
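The steps above map onto a familiar git sequence. A sketch, assuming you already have a local clone and using placeholder branch and issue names:

```shell
# Inside your clone of CODES-Benchmark; branch/issue names are placeholders.
git switch main
git pull                          # start from up-to-date main
git switch -c add-my-dataset      # step 2: branch with a clear name
# ... implement changes, plus tests/docs (step 3) ...
black . && isort .                # step 4: formatting
pytest                            # step 4: tests
git add -A
git commit -m "Add my-dataset (closes #NN)"
git push -u origin add-my-dataset # then open a PR and link the issue (step 5)
```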
## Pull request expectations

Please keep PRs focused and easy to review:

- Explain what changed and why.
- Link related issues.
- Include tests for behavior changes.
- Include documentation updates for new datasets, surrogates, modalities, or config options.

## Need help?

If you are unsure where to start, open an issue with your proposal or question.

README.md

Lines changed: 24 additions & 16 deletions
````diff
@@ -1,10 +1,13 @@
 # CODES Benchmark
 
 [![codecov](https://codecov.io/github/robin-janssen/CODES-Benchmark/branch/main/graph/badge.svg?token=TNF9ISCAJK)](https://codecov.io/github/robin-janssen/CODES-Benchmark) ![Static Badge](https://img.shields.io/badge/license-GPLv3-blue) ![Static Badge](https://img.shields.io/badge/NeurIPS-2024-green)
+[![All Contributors](https://img.shields.io/github/all-contributors/AstroAI-Lab/CODES-Benchmark?color=ee8449&style=flat-square)](#contributors)
+[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
+[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
 
 🎉 Accepted to the ML4PS workshop @ NeurIPS 2024
 
-Benchmark coupled ODE surrogate models on curated datasets with reproducible training, evaluation, and visualization pipelines. CODES helps you answer: *Which surrogate architecture fits my data, accuracy target, and runtime budget?*
+Benchmark coupled ODE surrogate models on curated datasets with reproducible training, evaluation, and visualization pipelines. CODES helps you answer: _Which surrogate architecture fits my data, accuracy target, and runtime budget?_
 
 ## What you get
 
@@ -50,23 +53,28 @@ The GitHub Pages site now hosts the narrative guides, configuration reference, a
 
 ## Repository map
 
-| Path | Purpose |
-| --- | --- |
-| `configs/` | Ready-to-edit benchmark configs (`train_eval/`, `tuning/`, etc.) |
-| `datasets/` | Bundled datasets + download helper (`data_sources.yaml`) |
-| `codes/` | Python package with surrogates, training, tuning, and benchmarking utilities |
-| `run_training.py`, `run_eval.py`, `run_tuning.py` | CLI entry points for the main workflows |
-| `docs/` | Sphinx project powering the GitHub Pages site (guides, tutorials, API reference) |
-| `scripts/` | Convenience tooling (dataset downloads, analysis utilities) |
+| Path                                              | Purpose                                                                          |
+| ------------------------------------------------- | -------------------------------------------------------------------------------- |
+| `configs/`                                        | Ready-to-edit benchmark configs (`train_eval/`, `tuning/`, etc.)                 |
+| `datasets/`                                       | Bundled datasets + download helper (`data_sources.yaml`)                         |
+| `codes/`                                          | Python package with surrogates, training, tuning, and benchmarking utilities     |
+| `run_training.py`, `run_eval.py`, `run_tuning.py` | CLI entry points for the main workflows                                          |
+| `docs/`                                           | Sphinx project powering the GitHub Pages site (guides, tutorials, API reference) |
+| `scripts/`                                        | Convenience tooling (dataset downloads, analysis utilities)                      |
 
 ## Contributing
 
-Pull requests are welcome! Please include documentation updates, add or update tests when you touch executable code, and run:
+Contribution guidelines are documented in [CONTRIBUTING.md](CONTRIBUTING.md).
 
-```bash
-uv pip install --group dev
-pytest
-sphinx-build -b html docs/source/ docs/_build/html
-```
+In short: open or pick an issue, make your changes in a branch, and submit a pull request with tests/docs updates as needed.
+
+## Contributors
+
+<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
+<!-- prettier-ignore-start -->
+<!-- markdownlint-disable -->
+
+<!-- markdownlint-restore -->
+<!-- prettier-ignore-end -->
 
-If you publish a new surrogate or dataset, document it under `docs/guides` / `docs/reference` so users can adopt it quickly. For questions, open an issue on GitHub.
+<!-- ALL-CONTRIBUTORS-LIST:END -->
````
