This repository contains the physically calibrated pseudo-noisy generator used in the thesis to support ESPI denoising experiments under data scarcity. Its role in the broader workflow is to generate synthetic supervision that is closer to the real single-shot acquisition regime, helping reduce the synthetic-real gap during denoising training.
The repository should be understood as the public pseudo-noisy generation component of the thesis, not as a standalone end-to-end solution for denoising or classification.
Within the full thesis workflow, this repository supports the denoising stage by providing a calibrated synthetic-noise model when matched real training pairs are limited or incomplete.
Its purpose is to:
- generate pseudo-noisy ESPI samples from cleaner reference images,
- support denoising model development under limited real supervision,
- study how synthetic supervision behaves relative to real-aligned supervision,
- document the calibration and development history behind the pseudo-noisy generation process.
It should not be interpreted as a general-purpose augmentation toolkit, and it should not be read as implying that this repository alone delivers the final downstream classification performance reported in the thesis.
The generator is built around a physically motivated cascade designed to approximate ESPI acquisition noise:
- Multiplicative speckle (Gamma) as the primary coherent-noise component
- Poisson shot noise to reflect photon-counting effects
- Gaussian floor noise to approximate electronic noise and quantization
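The three-stage cascade can be sketched as follows. This is a minimal illustration of the Gamma → Poisson → Gaussian ordering described above, not the repository's actual implementation; the function name and all parameter names (`speckle_shape`, `photon_scale`, `gauss_sigma`) are assumptions chosen for clarity.

```python
import numpy as np

def pseudo_noisy(clean, speckle_shape=4.0, photon_scale=255.0,
                 gauss_sigma=2.0, rng=None):
    """Illustrative Gamma (speckle) -> Poisson (shot) -> Gaussian (floor) cascade.

    `clean` is a float image in [0, 1]. Parameter names are hypothetical,
    not the repository's API.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = np.clip(np.asarray(clean, dtype=np.float64), 0.0, 1.0)

    # 1) Multiplicative speckle: unit-mean Gamma noise with variance 1/shape.
    speckle = rng.gamma(shape=speckle_shape, scale=1.0 / speckle_shape,
                        size=img.shape)
    img = img * speckle

    # 2) Poisson shot noise: scale to an assumed photon count, sample, rescale.
    img = rng.poisson(np.clip(img, 0.0, None) * photon_scale) / photon_scale

    # 3) Additive Gaussian floor (electronic noise + quantization), in 8-bit units.
    img = img + rng.normal(0.0, gauss_sigma / 255.0, size=img.shape)

    return np.clip(img, 0.0, 1.0)
```

Note the ordering matters: speckle multiplies the signal before photon counting, so the shot noise acts on the already speckled intensity, mirroring the physical acquisition chain.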
On top of this baseline cascade, the repository includes calibration and realism-oriented mechanisms such as:
- calibration from real single-shot vs averaged reference pairs,
- specimen-aware and material-aware parameter defaults,
- frequency- and amplitude-dependent motion blur,
- optional spatial variation and temporal correlation,
- matched pair generation for denoising experiments.
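To make the first of these mechanisms concrete, one simple way to calibrate noise parameters from a real single-shot image and its multi-frame average is to treat the bright pixels (where multiplicative speckle dominates) and the dark pixels (where the additive floor dominates) separately. The sketch below illustrates that idea only; the function name, quantile thresholds, and returned keys are assumptions, not the repository's calibration routine.

```python
import numpy as np

def calibrate_from_pair(single_shot, averaged, bright_q=0.75, dark_q=0.10):
    """Rough noise calibration from a real single-shot / averaged pair.

    Illustrative only: estimates a Gamma shape from the relative residual
    in bright regions (speckle contrast ~ 1/sqrt(shape)) and a Gaussian
    floor sigma from the absolute residual in dark regions.
    """
    s = np.asarray(single_shot, dtype=np.float64)
    a = np.asarray(averaged, dtype=np.float64)
    resid = s - a  # the averaged frame approximates the noise-free signal

    bright = a >= np.quantile(a, bright_q)
    dark = a <= np.quantile(a, dark_q)

    # Relative fluctuation in bright regions ~ speckle contrast = 1/sqrt(shape).
    contrast = np.std(resid[bright] / np.maximum(a[bright], 1e-6))
    gamma_shape = 1.0 / max(contrast, 1e-6) ** 2

    # Absolute fluctuation in dark regions ~ Gaussian floor sigma.
    gauss_sigma = float(np.std(resid[dark]))

    return {"gamma_shape": float(gamma_shape), "gauss_sigma": gauss_sigma}
```

The recovered parameters can then seed the synthetic cascade so that generated pseudo-noisy samples track the statistics of the real single-shot regime rather than a generic noise model.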
- `make_pseudo_noisy_plus.py`: Main calibrated generator with the full thesis-oriented noise cascade and calibration options.
- `make_pseudo_noisy_matched.py`: Utility for generating matched clean/noisy pairs for denoising workflows.
- `generate_pseudo_noisy.py`: Lightweight batch-style wrapper for pseudo-noisy generation.
- `make_pseudo_noisy_v3.py`: Earlier generator variant retained for historical development context.
Install the dependencies:

```shell
pip install -r requirements.txt
```

Example run of the calibrated generator:

```shell
python make_pseudo_noisy_plus.py \
  --clean-dir /path/to/clean/images \
  --out-dir /path/to/output \
  --material wood \
  --frequency 180 \
  --amplitude 0.5
```

The thesis conclusion is regime-dependent, not generator-only:
- physically calibrated pseudo-noisy supervision is useful when real denoising supervision is scarce,
- reducing the synthetic-real gap matters more than simply increasing synthetic quantity,
- downstream benefit depends on how closely the generated supervision matches the real acquisition regime,
- the final thesis conclusions about denoising and downstream classification must be interpreted together with the separate denoising and classification repositories.
This repository currently contains the public generator scripts and thesis-supporting notes in the repository root:
- `README.md`
- `RESEARCH_SUMMARY.md`
- `COMPREHENSIVE_ESPI_PSEUDONOISY_DATA.md`
- `make_pseudo_noisy_plus.py`
- `make_pseudo_noisy_matched.py`
- `generate_pseudo_noisy.py`
- `make_pseudo_noisy_v3.py`
- `requirements.txt`
- `CITATION.cff`
The thesis codebase is split across three public code components:
- Pseudo-noisy generation (this repository): https://github.com/GeorgeSpy/ESPI-pseydonoisy-generator
- DnCNN-ECA denoising: https://github.com/GeorgeSpy/ESPI-DnCNN-ECA
- Classification and evaluation: https://github.com/GeorgeSpy/espi-classification-models_2
If you use this repository, please cite the software metadata in `CITATION.cff`. A repository-level BibTeX example is:

```bibtex
@software{spyridakis2025espi_pseudonoisy,
  title  = {ESPI-PseudoNoisy: Physically Calibrated Pseudo-Noisy Generation for ESPI},
  author = {Spyridakis, Georgios},
  year   = {2025},
  url    = {https://github.com/GeorgeSpy/ESPI-pseydonoisy-generator}
}
```

MIT License. See LICENSE for details.