
feat: Add pixel size estimation from registration #20

Open
Alpaca233 wants to merge 7 commits into Cephla-Lab:main from Alpaca233:pixel-size-estimation-upstream

Conversation

@Alpaca233
Collaborator

Summary

  • Add estimate_pixel_size() method to TileFusion that estimates true pixel size from registration results
  • Compare expected shifts (from stage positions) vs measured shifts (from cross-correlation)
  • Use median ratio across all pairs to filter outliers
  • Add "Use estimated pixel size" checkbox in GUI
  • Log estimated pixel size and deviation percentage after registration

How it works

For each registered tile pair:

  • Expected shift (pixels) = stage_distance / metadata_pixel_size
  • Measured shift (pixels) = cross-correlation result
  • Ratio = expected / measured

Estimated pixel size = metadata_pixel_size × median(ratios)
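The arithmetic above can be sketched with hypothetical numbers (the tile spacing, measured shifts, and pixel size below are invented for illustration, not taken from the PR):

```python
import numpy as np

# Hypothetical inputs: stage says adjacent tiles are ~90 um apart, metadata
# claims a 1.0 um pixel, and cross-correlation measured ~82 px shifts.
metadata_pixel_size = 1.0               # um/px, from acquisition metadata
stage_distances = [90.0, 90.0, 91.0]    # um, one per registered tile pair
measured_shifts = [82.0, 83.0, 82.5]    # px, from cross-correlation

# Expected shift (px) = stage_distance / metadata_pixel_size;
# ratio = expected / measured, one ratio per pair.
ratios = [
    (d / metadata_pixel_size) / m
    for d, m in zip(stage_distances, measured_shifts)
]

# Median across pairs filters outliers; scale the metadata pixel size by it.
median_ratio = float(np.median(ratios))
estimated = metadata_pixel_size * median_ratio
deviation_percent = (median_ratio - 1.0) * 100.0
print(round(estimated, 4), round(deviation_percent, 2))  # → 1.0976 9.76
```

With these made-up numbers the true pixel is ~10% larger than the metadata claims, so the estimate comes out near 1.0976 um.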

Test plan

  • Unit tests for estimate_pixel_size() (3 tests)
  • All tests pass (71/71)
  • Linting passes
  • Manual testing with real dataset

🤖 Generated with Claude Code

Alpaca233 and others added 7 commits February 20, 2026 00:39
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix silent failure in PreviewWorker: log error instead of pass
- Add bounds check for expected shifts in ratio calculation
- Add sanity check before applying estimated pixel size (within 50%)
- Remove unused self.estimated_pixel_size variable

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add docstring entries for registration_z and registration_t parameters
- Fix type hints: z_level and time_idx now Optional[int] instead of int
- Update read_zarr_tile and read_zarr_region to accept z_level and time_idx
- Zarr reads now respect z-level and timepoint selection instead of hardcoding

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Extract duplicated arr[np.newaxis] to single return statement in read_zarr_region
- Extract pixel size application logic to helper method _apply_estimated_pixel_size

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Contributor

Copilot AI left a comment


Pull request overview

This PR introduces an estimated pixel-size feature derived from registration results, and extends Zarr tile/region reads to support explicit Z-level and timepoint selection. It also exposes the new pixel-size estimation in the GUI via logging and an optional “Use estimated pixel size” checkbox.

Changes:

  • Add TileFusion.estimate_pixel_size() and unit tests covering pixel-size estimation behavior.
  • Extend Zarr I/O helpers to read specific z_level and time_idx (or fall back to max-projection for 3D).
  • Add GUI wiring to display the estimated pixel size and optionally apply it for stitching.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

Summary per file:

  • tests/test_core_pixel_estimation.py — Adds unit tests for pixel-size estimation.
  • src/tilefusion/io/zarr.py — Adds z_level / time_idx support for Zarr tile and region reads.
  • src/tilefusion/core.py — Adds estimate_pixel_size() and threads z/t parameters through internal Zarr reads.
  • gui/app.py — Logs estimated pixel size after registration and adds a checkbox to optionally apply it.
  • scripts/view_in_napari.py — Minor formatting-only change.


Comment on lines +837 to +866
```python
ratios = []

for (i, j), (dy_measured, dx_measured, score) in self.pairwise_metrics.items():
    # Get stage positions
    pos_i = np.array(self._tile_positions[i])
    pos_j = np.array(self._tile_positions[j])

    # Expected shift in pixels = stage_distance / pixel_size
    stage_diff = pos_j - pos_i  # (dy, dx) in physical units
    expected_dy = stage_diff[0] / self._pixel_size[0]
    expected_dx = stage_diff[1] / self._pixel_size[1]

    # Compute ratio for non-zero shifts (both expected and measured must be significant)
    if abs(dx_measured) > 5 and abs(expected_dx) > 5:  # Horizontal shift
        ratio = expected_dx / dx_measured
        ratios.append(ratio)
    if abs(dy_measured) > 5 and abs(expected_dy) > 5:  # Vertical shift
        ratio = expected_dy / dy_measured
        ratios.append(ratio)

if not ratios:
    raise ValueError("No valid shift measurements for pixel size estimation.")

# Use median to filter outliers
median_ratio = float(np.median(ratios))

# Estimated pixel size (assume isotropic)
estimated = self._pixel_size[0] * median_ratio
deviation_percent = (median_ratio - 1.0) * 100.0
```


Copilot AI Apr 7, 2026


estimate_pixel_size() appears to treat pairwise_metrics[(i,j)] = (dy, dx, score) as the absolute inter-tile shift, but those values are the residual shifts measured between overlap patches after the expected stage-based offset has already been applied (see compute_pair_bounds() usage in refine_tile_positions_with_cross_correlation). As a result, dividing the full stage-based expected_dx/expected_dy by these residuals will generally produce nonsensical ratios (often huge or sign-flipped), and the returned pixel size will be incorrect.

Consider reconstructing the total measured shift in pixels as the stage-predicted integer offset (same rounding as find_adjacent_pairs: np.round(stage_diff / pixel_size).astype(int)) plus the measured residual (dx_measured, dy_measured), then compute pixel size from abs(stage_diff_axis) / abs(total_measured_shift_axis), and take a robust statistic (median) across pairs/axes. Also ensure sign conventions can’t produce negative ratios (use magnitudes or otherwise normalize signs).

Suggested change

Before:

```python
ratios = []
for (i, j), (dy_measured, dx_measured, score) in self.pairwise_metrics.items():
    # Get stage positions
    pos_i = np.array(self._tile_positions[i])
    pos_j = np.array(self._tile_positions[j])
    # Expected shift in pixels = stage_distance / pixel_size
    stage_diff = pos_j - pos_i  # (dy, dx) in physical units
    expected_dy = stage_diff[0] / self._pixel_size[0]
    expected_dx = stage_diff[1] / self._pixel_size[1]
    # Compute ratio for non-zero shifts (both expected and measured must be significant)
    if abs(dx_measured) > 5 and abs(expected_dx) > 5:  # Horizontal shift
        ratio = expected_dx / dx_measured
        ratios.append(ratio)
    if abs(dy_measured) > 5 and abs(expected_dy) > 5:  # Vertical shift
        ratio = expected_dy / dy_measured
        ratios.append(ratio)
if not ratios:
    raise ValueError("No valid shift measurements for pixel size estimation.")
# Use median to filter outliers
median_ratio = float(np.median(ratios))
# Estimated pixel size (assume isotropic)
estimated = self._pixel_size[0] * median_ratio
deviation_percent = (median_ratio - 1.0) * 100.0
```

After:

```python
estimates = []
for (i, j), (dy_measured, dx_measured, score) in self.pairwise_metrics.items():
    # Get stage positions
    pos_i = np.array(self._tile_positions[i])
    pos_j = np.array(self._tile_positions[j])
    # Physical stage displacement between tiles, in (dy, dx).
    stage_diff = pos_j - pos_i
    # Reconstruct the total measured shift in pixels as the stage-predicted
    # integer offset plus the residual shift reported by registration.
    expected_shift_px = np.round(stage_diff / np.array(self._pixel_size)).astype(int)
    total_measured_dy = float(expected_shift_px[0] + dy_measured)
    total_measured_dx = float(expected_shift_px[1] + dx_measured)
    # Estimate pixel size per axis from magnitudes to avoid sign-convention issues.
    if abs(total_measured_dx) > 5 and abs(stage_diff[1]) > 0:
        estimates.append(abs(stage_diff[1]) / abs(total_measured_dx))
    if abs(total_measured_dy) > 5 and abs(stage_diff[0]) > 0:
        estimates.append(abs(stage_diff[0]) / abs(total_measured_dy))
if not estimates:
    raise ValueError("No valid shift measurements for pixel size estimation.")
# Use median to filter outliers
estimated = float(np.median(estimates))
deviation_percent = ((estimated / self._pixel_size[0]) - 1.0) * 100.0
```

Comment on lines +863 to +866
```python
# Estimated pixel size (assume isotropic)
estimated = self._pixel_size[0] * median_ratio
deviation_percent = (median_ratio - 1.0) * 100.0
```


Copilot AI Apr 7, 2026


The implementation claims to estimate a single “pixel size” but uses only self._pixel_size[0] to compute estimated, while the expected shift uses both self._pixel_size[0] and [1]. If pixel_size is anisotropic (py != px), this will silently return a y-derived estimate and overwrite x semantics. Either (a) return separate (py, px) estimates, or (b) explicitly document/enforce isotropy and use a consistent scalar baseline (e.g., mean of (py, px) or require they match).

Comment on lines 99 to 105
```python
def read_zarr_tile(
    zarr_ts: ts.TensorStore,
    tile_idx: int,
    is_3d: bool = False,
    z_level: int = None,
    time_idx: int = 0,
) -> np.ndarray:
```

Copilot AI Apr 7, 2026


Type hints use z_level: int = None (and similarly in read_zarr_region), which is not a valid int annotation and will confuse type checkers. Consider importing Optional and annotating as Optional[int] = None for z_level (and any other nullable parameters) to match the typing style used elsewhere (e.g., TileFusion._read_tile).
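A minimal sketch of the corrected annotation (the real function lives in src/tilefusion/io/zarr.py and takes a ts.TensorStore; an untyped placeholder and stub body are used here for illustration):

```python
from typing import Optional

def read_zarr_tile(
    zarr_ts,                        # ts.TensorStore in the real code
    tile_idx: int,
    is_3d: bool = False,
    z_level: Optional[int] = None,  # was `z_level: int = None`, invalid for type checkers
    time_idx: int = 0,
):
    ...
```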

Comment on lines +377 to +379
```python
ratio = estimated_px / tf._pixel_size[0]
if 0.5 < ratio < 2.0:
    tf._pixel_size = (estimated_px, estimated_px)
```

Copilot AI Apr 7, 2026


_apply_estimated_pixel_size reads/writes tf._pixel_size directly. Since TileFusion exposes a pixel_size property with validation, this should use tf.pixel_size (both for reading the original and setting the new value) to avoid bypassing invariants and to keep GUI code resilient to internal refactors.

Suggested change

Before:

```python
ratio = estimated_px / tf._pixel_size[0]
if 0.5 < ratio < 2.0:
    tf._pixel_size = (estimated_px, estimated_px)
```

After:

```python
original_pixel_size = tf.pixel_size
ratio = estimated_px / original_pixel_size[0]
if 0.5 < ratio < 2.0:
    tf.pixel_size = (estimated_px, estimated_px)
```

Comment on lines +24 to +52
```python
def test_perfect_calibration(self):
    """When measured shifts match expected, deviation should be ~0%."""
    tile_positions = [(0, 0), (0, 90), (90, 0), (90, 90)]
    pixel_size = (1.0, 1.0)
    pairwise_metrics = {
        (0, 1): (0, 90, 0.95),
        (0, 2): (90, 0, 0.95),
        (1, 3): (90, 0, 0.95),
        (2, 3): (0, 90, 0.95),
    }

    tf = self._create_mock_tilefusion(tile_positions, pixel_size, pairwise_metrics)
    estimated, deviation = tf.estimate_pixel_size()

    assert abs(estimated - 1.0) < 0.01
    assert abs(deviation) < 1.0

def test_pixel_size_underestimated(self):
    """When metadata pixel size is too small, estimated should be larger."""
    tile_positions = [(0, 0), (0, 90)]
    pixel_size = (1.0, 1.0)
    pairwise_metrics = {(0, 1): (0, 82, 0.95)}

    tf = self._create_mock_tilefusion(tile_positions, pixel_size, pairwise_metrics)
    estimated, deviation = tf.estimate_pixel_size()

    assert 1.05 < estimated < 1.15
    assert deviation > 5.0
```


Copilot AI Apr 7, 2026


These tests assume pairwise_metrics[(i,j)] contains the absolute stage displacement in pixels (e.g., 90px), but in the current registration pipeline pairwise_metrics stores the residual shift between overlap patches after applying the stage-predicted offset (so “perfect calibration” would typically yield shifts near 0). Once estimate_pixel_size() is corrected to account for this, these test fixtures/expectations will need to be updated accordingly (and should include a case that uses a non-zero residual derived from a known pixel-size mismatch).
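A worked example of the distinction (all numbers hypothetical): when registration returns the residual after the stage-predicted offset has been applied, the total measured shift is offset plus residual, not the residual alone:

```python
import numpy as np

# Hypothetical pair: tiles 90.0 um apart, metadata claims a 1.0 um pixel.
stage_diff_um = 90.0
metadata_px = 1.0

# Suppose the true pixel is ~1.0976 um, so the tiles really line up at
# 90.0 / 1.0976 ~= 82 px. The stage-predicted offset is 90 px, so the
# residual reported by registration would be 82 - 90 = -8 px.
predicted_offset_px = int(np.round(stage_diff_um / metadata_px))  # 90
residual_px = -8                                                  # from registration

# Correct: reconstruct the total shift, then divide the stage distance by it.
total_measured_px = predicted_offset_px + residual_px             # 82
estimated_px = abs(stage_diff_um) / abs(total_measured_px)        # ~1.0976 um/px

# Incorrect (current behavior): dividing the full expected shift by the
# residual alone produces a nonsensical, sign-flipped ratio.
naive_ratio = (stage_diff_um / metadata_px) / residual_px         # -11.25
print(round(estimated_px, 4), naive_ratio)
```

This is why the "perfect calibration" fixture would really carry residuals near 0, and a pixel-size-mismatch fixture would carry a small non-zero residual like the -8 px above.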

@@ -0,0 +1,58 @@
```python
"""Tests for pixel size estimation."""

import numpy as np
```

Copilot AI Apr 7, 2026


numpy is imported as np but not used in this test module, which can fail linting/format checks depending on configuration. Remove the unused import if it’s not needed.

Suggested change (delete the line):

```python
import numpy as np
```
