Commit 4b41b5a

Add CONTRIBUTING.md (#85)
1 parent f988512 commit 4b41b5a

1 file changed: 80 additions & 0 deletions

File tree

CONTRIBUTING.md
## Contributing to python-libipld

This project is a small, single-file wrapper around Rust crates such as `cid`, `cbor4ii`, and `multibase`, exposing a Python API through `PyO3`. Despite its size, performance matters a lot.
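For orientation only: DAG-CBOR is a strict subset of CBOR (RFC 8949), and the actual encoding work here is done by `cbor4ii` in Rust. The sketch below is not code from this project, just a pure-Python illustration of CBOR's unsigned-integer encoding (major type 0) to show the kind of wire format involved:

```python
def encode_cbor_uint(n: int) -> bytes:
    """Encode an unsigned integer as CBOR major type 0 (RFC 8949, section 3)."""
    if n < 24:
        return bytes([n])                      # value fits in the initial byte
    if n < 0x100:
        return b"\x18" + n.to_bytes(1, "big")  # 1-byte argument
    if n < 0x10000:
        return b"\x19" + n.to_bytes(2, "big")  # 2-byte argument
    if n < 0x100000000:
        return b"\x1a" + n.to_bytes(4, "big")  # 4-byte argument
    return b"\x1b" + n.to_bytes(8, "big")      # 8-byte argument

# Matches the RFC 8949 examples: 500 encodes as 0x19 0x01 0xf4.
print(encode_cbor_uint(500).hex())
```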
The project uses the `uv` package manager. To install `uv`, see https://docs.astral.sh/uv/getting-started/installation/

Commands for a quick start:

```shell
# install dependencies
uv sync --group all

# compile and install using maturin directly (faster and better for development)
uv run maturin develop

# compile and install using pip with the maturin backend
uv pip install -v -e .

# run all tests
uv run pytest

# run the most important benchmarks
uv run pytest . -m benchmark_main --benchmark-enable

# run lint and formatting
cargo clippy && cargo fmt
```

### Performance

Two key points:

1. Python-side benchmarks

We use `pytest-benchmark` and run all benchmarks from the Python side. `CodSpeed` is used in CI/CD, but it relies on CPU simulation, so the most reliable comparison is always on your local machine.

First, capture a baseline from the `main` branch. This records performance relative to your hardware:
```shell
# clone the repo and check out the main branch
uv pip install -v -e .
# run the most important benchmarks and save the results under the name "main"
uv run pytest . -m benchmark_main --benchmark-enable --benchmark-save=main
```

Then, on your feature branch, run the same benchmarks but save them under a different name (the `--benchmark-save` argument):

```shell
# check out your feature branch
uv pip install -v -e .
uv run pytest . -m benchmark_main --benchmark-enable --benchmark-save=your_feature
```

Finally, compare the results:

```shell
uv run pytest-benchmark compare --group-by="name"
```

Notes:
- Benchmark data is stored under `.benchmarks`.
- You can delete old snapshots during local development.
2. Rust-side benchmarks

We also maintain Rust benchmarks, but they mainly exist for profiling and diagnosing performance issues. They work better with tools like flamegraph than when forced through the Python boundary. See the project's [Makefile](Makefile) for details.
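As a rough sketch of what such Makefile targets typically look like (the target and bench names here are hypothetical assumptions, not the project's actual Makefile):

```makefile
# Hypothetical targets -- consult the project's real Makefile for the actual ones.
bench:        ## run the Rust benchmarks natively
	cargo bench

flamegraph:   ## profile a benchmark (requires `cargo install flamegraph`)
	cargo flamegraph --bench decode
```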
### Testing

All tests target the Python-facing API, which is why the `pytest` directory exists.

Any segfault or Rust panic **must** be handled safely and must never crash the Python interpreter. Every error must be catchable at the Python layer.
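To make that contract concrete, the sketch below uses a pure-Python stub in place of the Rust extension (the function name and error type are assumptions for illustration): malformed input must surface as an ordinary, catchable Python exception, never as an interpreter crash.

```python
# Stub standing in for the Rust extension: the real decoder validates input
# in Rust and converts failures into Python exceptions via PyO3.
def decode_dag_cbor(data: bytes) -> bytes:
    if not data or data[0] == 0xFF:  # 0xff ("break") is never a valid item start
        raise ValueError("invalid DAG-CBOR input")
    return data

# The contract: this must raise, never segfault or abort the interpreter.
try:
    decode_dag_cbor(b"\xff\xff")
except ValueError as exc:
    print(f"caught at the Python layer: {exc}")
```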
### Style

Use `cargo fmt` and `cargo clippy`. CI will block your PR if formatting or linting fails.
### Things to care about

This library is used in:
- DAG-CBOR benchmarks for Python: https://github.com/DavidBuchanan314/dag-cbor-benchmark
- DASL Testing: https://hyphacoop.github.io/dasl-testing/

Keep these in mind and consider running their test suites against your feature branch locally.
