This file provides guidance to AI agents (including Claude Code, Cursor, and other LLM-powered tools) when working with code in this repository.
- ALL tests MUST pass for code to be considered complete and working
- Never describe code as "working as expected" if there are ANY failing tests
- Even if specific feature tests pass, failing tests elsewhere indicate broken functionality
- Changes that break existing tests must be fixed before considering implementation complete
- A successful implementation must pass linting, type checking, AND all existing tests
gp-libs is a Python library providing internal utilities and extensions for git-pull projects. It focuses on extending Sphinx documentation and pytest functionality with support for docutils-compatible markup formats.
Key features:
- doctest_docutils: Reimplementation of Python's doctest with support for reStructuredText and Markdown
- pytest_doctest_docutils: pytest plugin for running doctests in documentation files
- linkify_issues: Sphinx extension that converts issue references (e.g., `#123`) to hyperlinks
- Supports testing doctest examples in `.rst` and `.md` files
- Powers documentation testing across the git-pull ecosystem
This project uses:
- Python 3.10+
- just for command running (see also https://just.systems/)
- uv for dependency management
- ruff for linting and formatting
- mypy for type checking
- pytest for testing
- pytest-watcher for continuous testing
```shell
# Install dependencies
uv pip install --editable .
uv pip sync

# Install with development dependencies
uv pip install --editable . -G dev
```

```shell
# Run all tests
just test
# or directly with pytest
uv run pytest

# Run a single test file
uv run pytest tests/test_doctest_docutils.py

# Run a specific test
uv run pytest tests/test_doctest_docutils.py::test_function_name

# Run tests with test watcher
just start
# or
uv run ptw .

# Run tests with doctests
uv run ptw . --now --doctest-modules
```

```shell
# Run ruff for linting
just ruff
# or directly
uv run ruff check .

# Format code with ruff
just ruff-format
# or directly
uv run ruff format .

# Run ruff linting with auto-fixes
uv run ruff check . --fix --show-fixes

# Run mypy for type checking
just mypy
# or directly
uv run mypy src tests

# Watch mode for linting (using entr)
just watch-ruff
just watch-mypy
```

Follow this workflow for code changes (see .cursor/rules/dev-loop.mdc):
- Format First: `uv run ruff format .`
- Run Tests: `uv run pytest`
- Run Linting: `uv run ruff check . --fix --show-fixes`
- Check Types: `uv run mypy`
- Verify Tests Again: `uv run pytest`
```shell
# Build documentation
just build-docs

# Start documentation server with auto-reload
just start-docs

# Update documentation CSS/JS
just design-docs
```

gp-libs provides utilities for documentation testing and Sphinx extensions:
```
src/
├── doctest_docutils.py          # Core doctest reimplementation
├── pytest_doctest_docutils.py   # pytest plugin
├── linkify_issues.py            # Sphinx extension
├── docutils_compat.py           # Compatibility layer
└── gp_libs.py                   # Package metadata
```
- **doctest_docutils** (`src/doctest_docutils.py`)
  - Reimplementation of Python's standard library `doctest` module
  - Supports docutils-compatible markup (reStructuredText and Markdown)
  - Handles `doctest_block`, the `.. doctest::` directive, and `` ```{doctest} `` code blocks
  - PEP-440 version specifier support for conditional tests
  - Can be run directly: `python -m doctest_docutils README.md -v`
- **pytest_doctest_docutils** (`src/pytest_doctest_docutils.py`)
  - pytest plugin integrating doctest_docutils with pytest
  - Collects and runs doctests from `.rst` and `.md` files
  - Full pytest fixture and conftest.py support
  - Registered as a `pytest11` entry point
- **linkify_issues** (`src/linkify_issues.py`)
  - Sphinx extension for automatic issue linking
  - Converts `#123` references to clickable hyperlinks
  - Configured via `issue_url_tpl` in Sphinx conf.py
- **docutils_compat** (`src/docutils_compat.py`)
  - Compatibility layer for cross-version docutils support
  - Provides a `findall()` abstraction across docutils versions
- **gp_libs** (`src/gp_libs.py`)
  - Package metadata (version, title, author, URLs)
gp-libs uses pytest for testing with custom fixtures. The test suite includes:
- Unit tests for doctest parsing and execution
- Integration tests for pytest plugin functionality
- Sphinx app factory for testing extensions
```
tests/
├── test_doctest_docutils.py          # Tests for doctest module
├── test_pytest_doctest_docutils.py   # Tests for pytest plugin
├── test_linkify_issues.py            # Tests for linkify extension
├── conftest.py                       # Fixtures and sphinx app factory
└── regressions/                      # Regression tests
```
- **Use existing fixtures over mocks** (see .cursor/rules/dev-loop.mdc)
  - Use fixtures from conftest.py instead of `monkeypatch` and `MagicMock` when available
  - Document in test docstrings why standard fixtures weren't used for exceptional cases
- **Preferred pytest patterns**
  - Use the `tmp_path` (`pathlib.Path`) fixture over Python's `tempfile`
  - Use the `monkeypatch` fixture over `unittest.mock`
- **Running tests continuously**
  - Use pytest-watcher during development: `uv run ptw .`
  - For doctests: `uv run ptw . --now --doctest-modules`
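A minimal sketch of the preferred pattern — a hypothetical test that uses pytest's built-in `tmp_path` fixture instead of `tempfile` (the test name and file contents are illustrative):

```python
import pathlib


def test_roundtrip_doc(tmp_path: pathlib.Path) -> None:
    # tmp_path is pytest's per-test temporary directory fixture;
    # no tempfile bookkeeping or manual cleanup is needed.
    doc = tmp_path / "example.rst"
    doc.write_text(">>> 1 + 1\n2")
    assert doc.read_text() == ">>> 1 + 1\n2"
```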
For detailed coding standards, refer to .cursor/rules/dev-loop.mdc. Key highlights:
- Use namespace imports for stdlib: `import enum` instead of `from enum import Enum`; third-party packages may use `from X import Y`
- For typing, use `import typing as t` and access via namespace: `t.NamedTuple`, etc.
- Use `from __future__ import annotations` at the top of all Python files
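A small file following these conventions might look like this (the `Markup` and `DocFile` names are illustrative, not gp-libs API):

```python
from __future__ import annotations

import enum
import typing as t


class Markup(enum.Enum):
    # Stdlib accessed via namespace import (enum.Enum), per the convention above
    RST = ".rst"
    MD = ".md"


class DocFile(t.NamedTuple):
    # typing accessed as t.<name> rather than "from typing import NamedTuple"
    path: str
    markup: Markup


doc = DocFile(path="README.md", markup=Markup.MD)
print(doc.markup.value)  # → .md
```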
Follow NumPy docstring style for all functions and methods (see .cursor/rules/dev-loop.mdc):
"""Short description of the function or class.
Detailed description using reStructuredText format.
Parameters
----------
param1 : type
Description of param1
param2 : type
Description of param2
Returns
-------
type
Description of return value
"""All functions and methods MUST have working doctests. Doctests serve as both documentation and tests.
CRITICAL RULES:
- Doctests MUST actually execute - never comment out function calls like `asyncio.run()` or use placeholder output
- Doctests MUST NOT be converted to `.. code-block::` as a workaround (code-blocks don't run)
- If you cannot create a working doctest, STOP and ask for help

Available tools for doctests:

- `doctest_namespace` fixtures: `tmp_path` (add more via `conftest.py`)
- Ellipsis for variable output: `# doctest: +ELLIPSIS`
- PEP-440 version specifiers via `is_allowed_version()` for version-conditional tests

`# doctest: +SKIP` is NOT permitted - it's just another workaround that doesn't test anything. Use the fixtures and ellipsis patterns properly.
Simple doctest example:
```python
>>> is_allowed_version('3.3', '<=3.5')
True
>>> is_allowed_version('3.3', '>3.2, <4.0')
True
```
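`is_allowed_version()` is provided by `doctest_docutils`; its behavior for simple specifiers can be approximated in stdlib-only Python. This is a rough sketch of the semantics, not the actual implementation (which should defer to a real PEP-440 library such as `packaging`):

```python
import operator
import re

# Minimal PEP-440-ish checker (sketch only; production code should use
# packaging.specifiers.SpecifierSet for full PEP-440 semantics).
_OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq,
        "<": operator.lt, ">": operator.gt}


def _key(version: str) -> tuple[int, ...]:
    # Turn "3.10" into (3, 10) so comparison is numeric, not lexicographic
    return tuple(int(part) for part in version.split("."))


def is_allowed_version(version: str, spec: str) -> bool:
    """Return True if version satisfies every clause of the specifier set."""
    for clause in spec.split(","):
        op, wanted = re.match(r"\s*(<=|>=|==|<|>)\s*(.+?)\s*$", clause).groups()
        if not _OPS[op](_key(version), _key(wanted)):
            return False
    return True
```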
**Async doctest pattern (top-level await):**
```python
>>> import asyncio
>>> await asyncio.sleep(0)  # Top-level await works directly
>>> async def example():
...     return 42
>>> await example()
42
```

Using fixtures in doctests:

```python
>>> from pathlib import Path
>>> doc_path = tmp_path / "example.rst"  # tmp_path from doctest_namespace
>>> doc_path.write_text(">>> 1 + 1\\n2")
...
```

When output varies, use ellipsis:

```python
>>> parse_document(content)  # doctest: +ELLIPSIS
<docutils.nodes.document ...>
```

Additional guidelines:
- Use narrative descriptions for test sections rather than inline comments
- Move complex examples to dedicated test files at `tests/examples/<path_to_module>/test_<example>.py`
- Keep doctests simple and focused on demonstrating usage
- Add blank lines between test sections for improved readability
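Snippets like the ones above are ultimately parsed and executed by Python's stdlib `doctest` machinery, which `doctest_docutils` builds on. A minimal stdlib-only illustration (not gp-libs code):

```python
import doctest

# Parse a doctest snippet and run it with the stdlib runner -- the same
# underlying machinery used for examples found in .rst/.md files.
parser = doctest.DocTestParser()
test = parser.get_doctest(">>> 1 + 1\n2\n", {}, "example", "example.rst", 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test)
print(results.failed, results.attempted)  # → 0 1
```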
See .cursor/rules/git-commits.mdc for detailed commit message standards.
Format commit messages as:
```
Component/File(commit-type[Subcomponent/method]): Concise description

why: Explanation of necessity or impact.

what:
- Specific technical changes made
- Focused on a single topic
```
Common commit types:
- feat: New features or enhancements
- fix: Bug fixes
- refactor: Code restructuring without functional change
- docs: Documentation updates
- chore: Maintenance (dependencies, tooling, config)
- test: Test-related updates
- style: Code style and formatting
- py(deps): Dependencies
- py(deps[dev]): Dev Dependencies
- ai(rules[LLM type]): AI Rule Updates
Example:
````
doctest_docutils(feat[parse]): Add support for myst-parser code blocks

why: Enable doctest execution in Markdown documentation files

what:
- Add detection for ```{doctest} fence syntax
- Register myst directives automatically
- Add tests for Markdown doctest parsing
````
For multi-line commits, use heredoc to preserve formatting:
```shell
git commit -m "$(cat <<'EOF'
Component(feat[method]): Add feature description

why: Explanation of the change.

what:
- First change
- Second change
EOF
)"
```

See .cursor/rules/avoid-debug-loops.mdc for detailed debugging guidance.
When stuck in debugging loops:
- Pause and acknowledge the loop
- Minimize to MVP: Remove all debugging cruft and experimental code
- Document the issue comprehensively for a fresh approach
- Format for portability (using quadruple backticks)
Always use communicate() for subprocess I/O:

```python
proc = await asyncio.create_subprocess_shell(...)
stdout, stderr = await proc.communicate()  # Prevents deadlocks
```

Use asyncio.timeout() for timeouts:

```python
async with asyncio.timeout(300):
    stdout, stderr = await proc.communicate()
```

Handle BrokenPipeError gracefully:

```python
try:
    proc.stdin.write(data)
    await proc.stdin.drain()
except BrokenPipeError:
    pass  # Process already exited - expected behavior
```

- Class naming: Use an `Async` prefix: `AsyncDocTestRunner`
- Callbacks: Async APIs accept only async callbacks (no union types)
- Shared logic: Extract argument-building to sync functions, share with async
```python
# Shared argument building (sync)
def build_test_args(verbose: bool = False) -> dict[str, t.Any]:
    args = {"verbose": verbose}
    return args


# Async method uses shared logic
async def run_tests(self, verbose: bool = False) -> TestResults:
    args = build_test_args(verbose)
    return await self._run(**args)
```

pytest configuration:

```toml
[tool.pytest.ini_options]
asyncio_mode = "strict"
asyncio_default_fixture_loop_scope = "function"
```

Async fixture pattern:

```python
@pytest_asyncio.fixture(loop_scope="function")
async def async_doc_runner(tmp_path: Path) -> t.AsyncGenerator[AsyncDocTestRunner, None]:
    runner = AsyncDocTestRunner(path=tmp_path)
    yield runner
```

Parametrized async tests:
```python
class DocTestFixture(t.NamedTuple):
    test_id: str
    doc_content: str
    expected: list[str]


DOC_FIXTURES = [
    DocTestFixture("basic", ">>> 1 + 1\n2", ["pass"]),
    DocTestFixture("failure", ">>> 1 + 1\n3", ["fail"]),
]


@pytest.mark.parametrize(
    list(DocTestFixture._fields),
    DOC_FIXTURES,
    ids=[f.test_id for f in DOC_FIXTURES],
)
@pytest.mark.asyncio
async def test_doctest(test_id: str, doc_content: str, expected: list[str]) -> None:
    ...
```

DON'T poll returncode:

```python
# WRONG
while proc.returncode is None:
    await asyncio.sleep(0.1)

# RIGHT
await proc.wait()
```

DON'T mix blocking calls in async code:

```python
# WRONG
async def bad():
    subprocess.run(["python", "-m", "doctest", file])  # Blocks event loop!

# RIGHT
async def good():
    proc = await asyncio.create_subprocess_shell(...)
    await proc.wait()
```

DON'T close the event loop in tests:

```python
# WRONG - breaks pytest-asyncio cleanup
loop = asyncio.get_running_loop()
loop.close()
```

- Use `_ensure_directives_registered()` to auto-register required directives
- Supports myst-parser directives (`{doctest}`, `{tab}`)
- Handles both reStructuredText and Markdown syntax
- Uses docutils for parsing `.rst` files
- Uses myst-parser for parsing `.md` files
- Both formats support doctest blocks
In your Sphinx conf.py:

```python
extensions = ["linkify_issues"]
issue_url_tpl = 'https://github.com/git-pull/gp-libs/issues/{issue_id}'
```

- Documentation: https://gp-libs.git-pull.com/
- GitHub: https://github.com/git-pull/gp-libs
- PyPI: https://pypi.org/project/gp-libs/