Commits

44 commits
2043710
explain the API
alexlib Mar 15, 2026
17214bb
created doc page with explanation and differences in native vs compil…
alexlib Mar 15, 2026
8360ec5
Update docs workflow and build artifacts
alexlib Mar 15, 2026
d91c308
Potential fix for pull request finding
alexlib Mar 15, 2026
8fe2e68
major shift/rename/move
alexlib Mar 17, 2026
df01c30
Merge branch 'integrate_pyptv' of https://github.com/alexlib/openptv-…
alexlib Mar 17, 2026
10012a6
updating api
alexlib Mar 18, 2026
cbbac56
Update pyptv pointer
alexlib Mar 18, 2026
4d01bc4
Update pyptv pointer
alexlib Mar 18, 2026
df6bfb2
Update pyptv backend shim
alexlib Mar 18, 2026
17bbf5e
Flatten pyptv into subfolder
alexlib Mar 18, 2026
b7049b8
Normalize package layout and test tree
alexlib Mar 18, 2026
9431c95
updating api
alexlib Mar 18, 2026
a4a299c
updated api
alexlib Mar 18, 2026
86b13bf
Reorganize pyptv imports and test data layout
alexlib Mar 18, 2026
4c48ec6
Consolidate test fixtures under testing_folder
alexlib Mar 18, 2026
bf6a4dd
removed symlinks
alexlib Mar 18, 2026
e032909
moved all the test folders to one directory
alexlib Mar 18, 2026
bfa9e47
updated path to testing_folder
alexlib Mar 18, 2026
75faca5
updated path in tests/pyptv
alexlib Mar 18, 2026
47cd323
should remove temporary files after tests
alexlib Mar 18, 2026
412e332
updated optv installation by default
alexlib Mar 18, 2026
37b8fb0
added .backend
alexlib Mar 20, 2026
370f4f7
updated versioning system
alexlib Mar 20, 2026
7001a08
updated engine selection
alexlib Mar 20, 2026
dc82e19
updated calibration parameters, but still tests do not pass well
alexlib Mar 20, 2026
76d9d27
updated backend compatibility
alexlib Mar 20, 2026
c5bc2fd
updating compatability
alexlib Mar 20, 2026
5ecbf8d
improving compatibility
alexlib Mar 20, 2026
e12d029
updated targets in tests
alexlib Mar 20, 2026
8fa0adf
updated tests pass
alexlib Mar 20, 2026
a3de54b
improving compatibility
alexlib Mar 20, 2026
0eb6af8
tests should use testing_folder
alexlib Mar 20, 2026
3e4598d
fixed compatibility
alexlib Mar 20, 2026
2ce5206
added time test
alexlib Mar 20, 2026
facaeda
updated more of optv - see native_backend_unification.md
alexlib Mar 20, 2026
512b76c
adjusting to optv
alexlib Mar 20, 2026
dd87e1d
should align with optv
alexlib Mar 20, 2026
685a166
trying to fix
alexlib Mar 20, 2026
5da454c
fixing tests
alexlib Mar 20, 2026
a8d297c
more tests pass. working on optv
alexlib Mar 20, 2026
4998562
most tests pass
alexlib Mar 20, 2026
3772e1a
small %d bug
alexlib Mar 21, 2026
d8af22b
fixing tests
alexlib Mar 21, 2026
1 change: 1 addition & 0 deletions .gitignore
@@ -4,6 +4,7 @@ version.py
# Sphinx automatic generation of API
docs/README.md
docs/_api/
docs/_generated/

# Combined environments
ci/combined-environment-*.yml
9 changes: 8 additions & 1 deletion Makefile
@@ -1,6 +1,8 @@
PROJECT := openptv_python
CONDA := conda
CONDAFLAGS :=
UV := uv
UVFLAGS := --extra dev --upgrade
COV_REPORT := html
PYTHON ?= python

@@ -15,11 +17,16 @@ unit-tests:
type-check:
$(PYTHON) -m mypy .

env-update: uv-env-update

conda-env-update:
$(CONDA) install -y -c conda-forge conda-merge
$(CONDA) run conda-merge environment.yml ci/environment-ci.yml > ci/combined-environment-ci.yml
$(CONDA) env update $(CONDAFLAGS) -f ci/combined-environment-ci.yml

uv-env-update:
$(UV) sync $(UVFLAGS)

docker-build:
docker build -t $(PROJECT) .

@@ -30,6 +37,6 @@ template-update:
pre-commit run --all-files cruft -c .pre-commit-config-cruft.yaml

docs-build:
cp README.md docs/. && cd docs && rm -fr _api && make clean && make html
cp README.md docs/. && $(PYTHON) docs/render_native_stress_demo_include.py && cd docs && rm -fr _api && make clean && make html

# DO NOT EDIT ABOVE THIS LINE, ADD COMMANDS BELOW
15 changes: 13 additions & 2 deletions README.md
@@ -184,17 +184,28 @@ OPENPTV_SKIP_STRESS_BENCHMARKS=1 uv run make

### Workflow for developers/contributors

For the best experience create a new conda environment (e.g. DEVELOP) with Python 3.12:
Recommended contributor workflow with `uv`:

```bash
uv venv
source .venv/bin/activate
make env-update
```

This keeps the local environment synced to the locked developer dependency set.

If you prefer the conda workflow instead:

```bash
conda create -n openptv-python -c conda-forge python=3.12
conda activate openptv-python
make conda-env-update
```

Before pushing to GitHub, use the developer install above and then run the
following commands:

1. Update conda environment: `make conda-env-update` or `uv venv` and `source .venv/bin/activate` followed by `uv sync --extra dev --upgrade`
1. Update the environment: `make env-update` by default, or `make conda-env-update` if you are using the conda workflow
1. If you are using pip instead of uv, install the editable developer environment: `pip install -e ".[dev]"`
1. Sync with the latest [template](https://github.com/ecmwf-projects/cookiecutter-conda-package) (optional): `make template-update`
1. Run quality assurance checks: `make qa`
84 changes: 84 additions & 0 deletions docs/_static/native-stress-demo.json
@@ -0,0 +1,84 @@
{
"generated_at": "2026-03-15T12:45:22.894671+00:00",
"python": "/home/user/Documents/GitHub/openptv-python/.venv/bin/python",
"command": [
"/home/user/Documents/GitHub/openptv-python/.venv/bin/python",
"-m",
"pytest",
"-q",
"-s",
"tests/test_native_stress_performance.py"
],
"results": [
{
"key": "sequence",
"label": "Sequence params",
"python_label": "python",
"python_seconds": 2.8e-05,
"native_seconds": 1e-05,
"speedup": 2.86,
"legacy_seconds": null,
"compiled_vs_legacy": null
},
{
"key": "preprocess",
"label": "Preprocess image",
"python_label": "python",
"python_seconds": 2.190105,
"native_seconds": 0.004607,
"speedup": 475.37,
"legacy_seconds": null,
"compiled_vs_legacy": null
},
{
"key": "segmentation",
"label": "Segmentation",
"python_label": "python+numba",
"python_seconds": 0.005055,
"native_seconds": 0.001317,
"speedup": 3.84,
"legacy_seconds": null,
"compiled_vs_legacy": null
},
{
"key": "stereomatching",
"label": "Stereo matching",
"python_label": "python",
"python_seconds": 14.192568,
"native_seconds": 0.002493,
"speedup": 5693.37,
"legacy_seconds": null,
"compiled_vs_legacy": null
},
{
"key": "reconstruction",
"label": "Reconstruction",
"python_label": "compiled-python",
"python_seconds": 0.011528,
"native_seconds": 0.004852,
"speedup": 2.38,
"legacy_seconds": 0.649974,
"compiled_vs_legacy": 56.38
},
{
"key": "multilayer_reconstruction",
"label": "Multilayer reconstruction",
"python_label": "compiled-python",
"python_seconds": 0.009571,
"native_seconds": 0.004759,
"speedup": 2.01,
"legacy_seconds": 0.533542,
"compiled_vs_legacy": 55.74
},
{
"key": "tracking",
"label": "Tracking",
"python_label": "python",
"python_seconds": 1.300061,
"native_seconds": 0.144059,
"speedup": 9.02,
"legacy_seconds": null,
"compiled_vs_legacy": null
}
]
}
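
As a minimal sketch of how this machine-specific artifact can be consumed (illustrative only, not part of the diff), the snippet below reads the JSON and prints the recorded speedups as a Markdown table; the path and field names follow the file shown above:

```python
import json
from pathlib import Path

# Illustrative only: path and field names follow native-stress-demo.json above.
data = json.loads(
    Path("docs/_static/native-stress-demo.json").read_text(encoding="utf-8")
)

print("| Workload | Python (s) | Native (s) | Speedup |")
print("| --- | --- | --- | --- |")
for result in data["results"]:
    print(
        f"| {result['label']} | {result['python_seconds']:.6f} "
        f"| {result['native_seconds']:.6f} | {result['speedup']:.2f}x |"
    )
```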
207 changes: 207 additions & 0 deletions docs/generate_native_stress_demo.py
@@ -0,0 +1,207 @@
"""Regenerate the native stress benchmark data assets for the docs.

This script reruns tests/test_native_stress_performance.py, parses the timing
summary lines printed by the test module, and writes machine-specific demo
artifacts under docs/_static/.
"""

from __future__ import annotations

import argparse
import json
import re
import subprocess
import sys
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class BenchmarkResult:
"""Structured benchmark summary for one workload."""

key: str
label: str
python_label: str
python_seconds: float
native_seconds: float
speedup: float
legacy_seconds: float | None = None
compiled_vs_legacy: float | None = None


LINE_PATTERNS = {
"sequence": re.compile(
r"^sequence stress benchmark: python: median=(?P<python>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?speedup=(?P<speedup>[0-9.]+)x"
),
"preprocess": re.compile(
r"^preprocess stress benchmark: python: median=(?P<python>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?speedup=(?P<speedup>[0-9.]+)x"
),
"segmentation": re.compile(
r"^segmentation stress benchmark: python\+numba: median=(?P<python>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?speedup=(?P<speedup>[0-9.]+)x"
),
"stereomatching": re.compile(
r"^stereomatching stress benchmark: python: median=(?P<python>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?speedup=(?P<speedup>[0-9.]+)x"
),
"reconstruction": re.compile(
r"^reconstruction stress benchmark: compiled-python: median=(?P<python>[0-9.]+)s .*?"
r"legacy-python: median=(?P<legacy>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?"
r"compiled-vs-legacy=(?P<compiled_vs_legacy>[0-9.]+)x; "
r"compiled-vs-native=(?P<speedup>[0-9.]+)x"
),
"multilayer_reconstruction": re.compile(
r"^multilayer reconstruction stress benchmark: compiled-python: median=(?P<python>[0-9.]+)s .*?"
r"legacy-python: median=(?P<legacy>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?"
r"compiled-vs-legacy=(?P<compiled_vs_legacy>[0-9.]+)x; "
r"compiled-vs-native=(?P<speedup>[0-9.]+)x"
),
"tracking": re.compile(
r"^tracking stress benchmark: python: median=(?P<python>[0-9.]+)s .*?"
r"native: median=(?P<native>[0-9.]+)s .*?speedup=(?P<speedup>[0-9.]+)x"
),
}


WORKLOAD_METADATA = {
"sequence": ("Sequence params", "python"),
"preprocess": ("Preprocess image", "python"),
"segmentation": ("Segmentation", "python+numba"),
"stereomatching": ("Stereo matching", "python"),
"reconstruction": ("Reconstruction", "compiled-python"),
"multilayer_reconstruction": ("Multilayer reconstruction", "compiled-python"),
"tracking": ("Tracking", "python"),
}


def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--repo-root",
type=Path,
default=Path(__file__).resolve().parent.parent,
help="Repository root containing tests/test_native_stress_performance.py",
)
parser.add_argument(
"--json-output",
type=Path,
default=Path(__file__).resolve().parent / "_static" / "native-stress-demo.json",
help="Output path for the parsed benchmark JSON",
)
parser.add_argument(
"--log-output",
type=Path,
default=Path(__file__).resolve().parent / "_static" / "native-stress-demo.log",
help="Output path for the raw benchmark log",
)
parser.add_argument(
"--python",
default=sys.executable,
help="Python interpreter used to run pytest",
)
return parser.parse_args()


def run_benchmarks(repo_root: Path, python_executable: str) -> str:
command = [
python_executable,
"-m",
"pytest",
"-q",
"-s",
"tests/test_native_stress_performance.py",
]
completed = subprocess.run(
command,
cwd=repo_root,
capture_output=True,
text=True,
check=False,
)
output = completed.stdout + completed.stderr
if completed.returncode != 0:
raise RuntimeError(
"stress benchmark run failed with exit code "
f"{completed.returncode}\n\n{output}"
)
return output


def parse_benchmarks(output: str) -> list[BenchmarkResult]:
result_map: dict[str, BenchmarkResult] = {}
for raw_line in output.splitlines():
line = re.sub(r"\s+", " ", raw_line.strip())
line = line.lstrip(".").strip()
for key, pattern in LINE_PATTERNS.items():
match = pattern.match(line)
if match is None:
continue
label, python_label = WORKLOAD_METADATA[key]
groups = match.groupdict()
result_map[key] = BenchmarkResult(
key=key,
label=label,
python_label=python_label,
python_seconds=float(groups["python"]),
native_seconds=float(groups["native"]),
speedup=float(groups["speedup"]),
legacy_seconds=(
float(groups["legacy"]) if groups.get("legacy") else None
),
compiled_vs_legacy=(
float(groups["compiled_vs_legacy"])
if groups.get("compiled_vs_legacy")
else None
),
)
break

expected_order = list(WORKLOAD_METADATA)
missing = [key for key in expected_order if key not in result_map]
if missing:
raise RuntimeError(
"failed to parse benchmark summaries for: " + ", ".join(missing)
)
return [result_map[key] for key in expected_order]


def main() -> int:
args = parse_args()
repo_root = args.repo_root.resolve()
output = run_benchmarks(repo_root, args.python)
results = parse_benchmarks(output)
generated_at = datetime.now(timezone.utc)

args.json_output.parent.mkdir(parents=True, exist_ok=True)
args.json_output.write_text(
json.dumps(
{
"generated_at": generated_at.isoformat(),
"python": args.python,
"command": [
args.python,
"-m",
"pytest",
"-q",
"-s",
"tests/test_native_stress_performance.py",
],
"results": [asdict(result) for result in results],
},
indent=2,
)
+ "\n",
encoding="utf-8",
)
args.log_output.write_text(output, encoding="utf-8")
return 0


if __name__ == "__main__":
raise SystemExit(main())
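
To see the parsing contract in isolation, here is a small self-contained sketch that exercises one of the summary-line patterns against a synthetic line (the pattern is the `tracking` entry from `LINE_PATTERNS` above; the sample numbers are taken from the demo JSON earlier in this diff):

```python
import re

# The pattern is the "tracking" entry from LINE_PATTERNS; the line is a
# synthetic example reusing the numbers from the demo JSON above.
pattern = re.compile(
    r"^tracking stress benchmark: python: median=(?P<python>[0-9.]+)s .*?"
    r"native: median=(?P<native>[0-9.]+)s .*?speedup=(?P<speedup>[0-9.]+)x"
)
line = (
    "tracking stress benchmark: python: median=1.300061s "
    "native: median=0.144059s speedup=9.02x"
)
match = pattern.match(line)
assert match is not None
print(match.groupdict())
# {'python': '1.300061', 'native': '0.144059', 'speedup': '9.02'}
```

Running the script directly (for example `python docs/generate_native_stress_demo.py`) refreshes both the JSON and log assets under `docs/_static/` using the argparse defaults above.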
1 change: 1 addition & 0 deletions docs/index.md
@@ -18,6 +18,7 @@ version is:

README.md
bundle_adjustment.md
native_backend_unification.md
```

# Indices and tables