Physics World Model — Modality Catalog
170 imaging modalities with descriptions, experimental setups, and reconstruction guidance.
3D Gaussian Splatting
Description
3D Gaussian splatting represents scenes as a collection of learnable 3D Gaussian primitives, each parameterized by position, covariance (anisotropic 3D extent), opacity, and spherical harmonic color coefficients. Rendering rasterizes the Gaussians by projecting them to 2D screen space, sorting by depth, and alpha-compositing with a tile-based differentiable rasterizer. Training optimizes Gaussian parameters via gradient descent with adaptive density control (splitting, cloning, pruning). This achieves real-time (30+ fps) rendering at quality comparable to NeRF, from SfM point cloud initialization (COLMAP).
Principle
3-D Gaussian Splatting represents a scene as a set of anisotropic 3-D Gaussians, each with position, covariance, opacity, and spherical-harmonic color coefficients. Novel views are rendered by projecting (splatting) these Gaussians onto the image plane and alpha-compositing them in depth order. Unlike NeRF, rendering is rasterization-based and achieves real-time frame rates (30+ fps, often exceeding 100 fps) with high visual quality.
How to Build the System
Start with the same multi-view image dataset as NeRF (50-200 posed images via COLMAP). Initialize 3-D Gaussians from the SfM point cloud. Train by differentiable rasterization: project Gaussians to each training view, compute photometric loss (L1 + SSIM), and optimize positions, covariances, colors, and opacities via Adam. Adaptive densification (splitting/cloning Gaussians) and pruning runs periodically during training. Training takes ~15-30 minutes on a modern GPU.
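The projection-and-composite rendering step can be sketched in miniature. This is a toy, assuming the Gaussians have already been projected to screen space with isotropic 2D footprints; the real rasterizer uses anisotropic 2D covariances, tile-based sorting, and spherical-harmonic view-dependent colors.

```python
import numpy as np

def splat(means2d, depths, sigmas, colors, alphas, H=32, W=32):
    """Render already-projected 2D Gaussians by depth-sorted
    front-to-back alpha compositing (isotropic footprints for brevity)."""
    ys, xs = np.mgrid[0:H, 0:W]
    order = np.argsort(depths)            # nearest Gaussians first
    img = np.zeros((H, W, 3))
    T = np.ones((H, W))                   # per-pixel transmittance
    for i in order:
        cx, cy = means2d[i]
        g = np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigmas[i]**2))
        a = np.clip(alphas[i] * g, 0, 0.999)
        img += (T * a)[..., None] * colors[i]   # accumulate color
        T *= 1 - a                              # attenuate transmittance
    return img

rng = np.random.default_rng(0)
n = 5
img = splat(rng.uniform(4, 28, (n, 2)), rng.uniform(1, 10, n),
            rng.uniform(1, 3, n), rng.uniform(0, 1, (n, 3)),
            rng.uniform(0.3, 0.9, n))
```

In the full method every operation here is differentiable, so gradients of the photometric loss flow back to all Gaussian parameters.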
Common Reconstruction Algorithms
- 3D Gaussian Splatting (original, Kerbl et al. 2023)
- Mip-Splatting (anti-aliased multi-scale Gaussian splatting)
- SuGaR (Surface-Aligned Gaussian Splatting for mesh extraction)
- Dynamic 3D Gaussians (for dynamic scenes / video)
- Compact-3DGS (compressed Gaussian representations)
Common Mistakes
- Insufficient initial SfM points causing sparse reconstruction
- Too few training views creating holes or floater artifacts in novel views
- Excessive Gaussian count (millions) consuming too much GPU memory
- Not using adaptive densification, leaving under-reconstructed regions
- Ignoring exposure variation between training images
How to Avoid Mistakes
- Use dense SfM initialization; increase COLMAP matching thoroughness if sparse
- Capture more views, especially in regions that are under-represented
- Apply periodic pruning of low-opacity Gaussians to control memory
- Enable adaptive densification and set proper gradient thresholds for splitting
- Apply per-image exposure compensation or normalize images before training
Forward-Model Mismatch Cases
- The widefield fallback processes a single 2D (64,64) image, but Gaussian splatting renders multi-view images from a set of 3D Gaussian primitives — output shape (n_views, H, W) encodes view-dependent appearance
- Gaussian splatting is a nonlinear rendering process (alpha-compositing of projected 3D Gaussians sorted by depth) — the widefield linear blur cannot model 3D-to-2D projection, depth ordering, or view-dependent effects
How to Correct the Mismatch
- Use the Gaussian splatting operator that projects 3D Gaussian primitives onto each camera plane via differentiable rasterization with alpha compositing
- Optimize Gaussian parameters (position, covariance, opacity, color SH coefficients) to minimize rendering loss across training views using the correct splatting forward model
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Kerbl et al., '3D Gaussian Splatting for Real-Time Radiance Field Rendering', SIGGRAPH 2023
Canonical Datasets
- Mip-NeRF 360 (9 scenes)
- Tanks & Temples (Knapitsch et al.)
- Deep Blending (Hedman et al.)
4D-STEM Electron Diffraction
Description
4D-STEM acquires a full 2D convergent-beam electron diffraction (CBED) pattern at each probe position during a 2D STEM scan, yielding a 4D dataset (2 real-space + 2 reciprocal-space dimensions). This enables simultaneous mapping of strain, orientation, electric fields, and thickness with nanometer spatial resolution. Phase retrieval from the 4D dataset (electron ptychography) can achieve sub-angstrom resolution. High data rates (>1 GB/s) from fast pixelated detectors create computational challenges.
Principle
4D-STEM electron diffraction scans a convergent electron beam across the specimen and records a full 2-D diffraction pattern (convergent beam electron diffraction, CBED) at each scan position. The resulting 4-D dataset (2-D scan × 2-D diffraction) enables mapping of crystal structure, orientation, strain, electric fields, and charge density with nanometer spatial resolution.
How to Build the System
Use a STEM equipped with a fast pixelated detector (Medipix3, EMPAD, or Dectris ARINA) capable of recording diffraction patterns at >1000 fps. Set a small convergence semi-angle (1-5 mrad) for nanobeam diffraction or large (20-30 mrad) for CBED. The scan step should be comparable to the probe size. Data volumes are large (tens of GB per scan), requiring efficient data pipeline and storage.
Common Reconstruction Algorithms
- Virtual detector imaging (synthesized BF, DF, iDPC from 4D data)
- Center-of-mass (COM) analysis for electric field mapping
- Ptychographic reconstruction from 4D-STEM data
- Orientation mapping (template matching against simulated patterns)
- Strain mapping via disk position analysis
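Virtual-detector imaging and COM analysis reduce to weighted sums over the diffraction dimensions. A minimal sketch on a synthetic datacube (shapes, masks, and radii are illustrative, not instrument values):

```python
import numpy as np

# Synthetic 4D-STEM datacube: (scan_y, scan_x, k_y, k_x)
rng = np.random.default_rng(1)
data = rng.poisson(5.0, size=(8, 8, 16, 16)).astype(float)

ky, kx = np.mgrid[0:16, 0:16]
r = np.hypot(ky - 7.5, kx - 7.5)       # radius from pattern center

bf_mask = (r <= 3).astype(float)               # virtual bright-field disk
adf_mask = ((r > 5) & (r <= 8)).astype(float)  # virtual annular dark-field

# Each virtual image = sum of the CBED pattern weighted by the mask
vBF  = np.tensordot(data, bf_mask,  axes=([2, 3], [0, 1]))
vADF = np.tensordot(data, adf_mask, axes=([2, 3], [0, 1]))

# Center of mass of each CBED pattern: proportional to the average
# momentum transfer, used for electric-field (DPC) mapping
tot = data.sum(axis=(2, 3))
com_y = np.tensordot(data, ky - 7.5, axes=([2, 3], [0, 1])) / tot
com_x = np.tensordot(data, kx - 7.5, axes=([2, 3], [0, 1])) / tot
```

The same reductions scale to real datacubes; streaming them per scan row keeps memory bounded at high detector rates.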
Common Mistakes
- Detector dynamic range insufficient for simultaneous central beam and weak diffraction
- Scan step too large relative to probe size, under-sampling the specimen
- Not accounting for specimen thickness variation in diffraction pattern interpretation
- Excessive electron dose for beam-sensitive materials (organics, 2D materials)
- Misindexing diffraction patterns due to double diffraction or overlapping grains
How to Avoid Mistakes
- Use counting-mode detectors (Medipix) with high dynamic range or electron counting
- Match scan step to probe size for complete spatial sampling
- Simulate diffraction patterns at the measured thickness for accurate interpretation
- Use low-dose 4D-STEM protocols with fast detectors to minimize beam damage
- Carefully index patterns considering multiple scattering; compare with simulations
Forward-Model Mismatch Cases
- The widefield fallback produces a real-space blurred image, but electron diffraction records the far-field diffraction pattern (reciprocal space) — Bragg spots encode crystal structure, lattice spacings, and symmetry, which bear no resemblance to a blurred image
- The diffraction pattern intensity I(k) = |F{V(r) * P(r)}|^2 encodes the Fourier transform of the projected crystal potential — the widefield real-space blur cannot access reciprocal-space crystallographic information
How to Correct the Mismatch
- Use the electron diffraction operator that models kinematic or dynamical scattering from the crystal lattice, producing far-field diffraction patterns with Bragg peaks at reciprocal lattice positions
- Index diffraction patterns to determine crystal structure and orientation; use dynamical simulation (Bloch wave or multislice) for accurate intensity matching and structure refinement
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Ophus, 'Four-dimensional scanning transmission electron microscopy (4D-STEM): from scanning nanodiffraction to ptychography and beyond', Microscopy and Microanalysis 25, 563-582 (2019)
- Jiang et al., 'Electron ptychography of 2D materials to deep sub-angstrom resolution', Nature 559, 343 (2018)
Canonical Datasets
- 4D-STEM benchmark datasets (Ophus group, NCEM)
Acoustic Emission Testing (AE)
Active Thermography (IR)
Adaptive Optics (AO) Imaging
Arterial Spin Labeling (ASL) MRI
Atom Probe Tomography (APT)
Atomic Force Microscopy (AFM)
Bioluminescence Tomography (BLT)
Brachytherapy Imaging
Brillouin Microscopy
Cathodoluminescence (CL) Imaging
CEST MRI
Coded Aperture Compressive Temporal Imaging (CACTI)
Description
CACTI captures multiple video frames in a single camera exposure by modulating the scene with a shifting binary mask during the integration period. Each temporal frame sees a different mask pattern, and the detector integrates all modulated frames into a single 2D measurement. The forward model is y = sum_t M_t * x_t + n where M_t is the mask at time t. Typical compression ratios are 8-48 frames per snapshot. Reconstruction exploits temporal correlation via GAP-TV, PnP-FFDNet, or deep unfolding networks (STFormer, EfficientSCI).
Principle
Coded Aperture Compressive Temporal Imaging (CACTI) compresses multiple high-speed video frames into a single sensor exposure by modulating the scene with a dynamic coded aperture (shifting mask) during the integration time. The sensor accumulates a coded sum of B consecutive frames, and computational algorithms recover all B frames from the single compressed measurement using video sparsity priors.
How to Build the System
Build a relay optical system with a physical translating mask or use a DMD as the coded aperture at an intermediate image plane. The mask shifts by one pixel per sub-frame interval during the camera integration time, effectively encoding B temporal frames. Use a standard camera at normal frame rate (e.g., 30 fps) to capture the compressed measurement. Calibrate the mask pattern and its motion precisely.
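The forward model y = sum_t M_t * x_t can be sketched directly. The one-pixel-per-sub-frame mask shift is modeled here with np.roll; a real system calibrates the measured mask position at each sub-frame instead.

```python
import numpy as np

def cacti_forward(x, base_mask):
    """y = sum_b M_b * x_b with a mask that shifts one pixel per sub-frame.
    x: (H, W, B) video block; base_mask: (H, W) binary pattern."""
    H, W, B = x.shape
    y = np.zeros((H, W))
    for b in range(B):
        M_b = np.roll(base_mask, shift=b, axis=0)  # vertical mask translation
        y += M_b * x[:, :, b]                      # detector integrates
    return y

rng = np.random.default_rng(0)
B = 8
x = rng.random((64, 64, B))                        # B temporal frames
mask = (rng.random((64, 64)) > 0.5).astype(float)  # 50% binary mask
y = cacti_forward(x, mask)                         # one (64, 64) snapshot
```

Reconstruction algorithms (GAP-TV, PnP, deep unfolding) all invert exactly this summation using the calibrated per-sub-frame masks.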
Common Reconstruction Algorithms
- GAP-TV (Generalized Alternating Projection with Total Variation)
- DeSCI (Decompress Snapshot Compressive Imaging, GMM prior)
- PnP-FFDNet (Plug-and-Play with FFDNet denoiser)
- Deep unfolding: BIRNAT, RevSCI, EfficientSCI
- E2E-trained networks: STFormer, CST (transformer-based)
Common Mistakes
- Mask calibration error causing temporal frame misalignment in reconstruction
- Compression ratio too high (too many sub-frames per snapshot) for the scene motion
- Motion blur within individual sub-frame intervals when scene moves fast
- Non-uniform mask illumination creating brightness gradients in recovered frames
- Choosing masks with poor conditioning (high mutual coherence between rows)
How to Avoid Mistakes
- Calibrate mask position precisely using a static known pattern before experiments
- Limit the compression ratio (B ≈ 8-10 for complex natural scenes; up to B ≈ 24-48 for simpler scenes)
- Ensure sub-frame exposure is short enough that intra-frame motion is negligible
- Flatfield-correct the mask modulation using a uniform target calibration
- Simulate reconstruction quality with candidate mask patterns before hardware fabrication
Forward-Model Mismatch Cases
- The widefield fallback processes a single 2D (64,64) frame, but CACTI compresses B temporal frames into a single 2D coded snapshot using a shifting binary mask — the temporal dimension (64,64,B) is entirely lost
- Without the time-varying coded exposure pattern, individual video frames cannot be separated from the compressed measurement — temporal super-resolution from the fallback is impossible
How to Correct the Mismatch
- Use the CACTI operator that applies frame-wise binary masks and sums the coded frames: y = sum_b(M_b * x_b), compressing B frames into one measurement
- Reconstruct the video sequence using PnP-SCI (plug-and-play with FastDVDnet), ELP-Unfolding, or GAP-TV that model the temporal compression and recover B frames from the single snapshot
Experimental Setup — Signal Chain
Experimental Setup — Details
Benchmark Variants
Key References
- Llull et al., 'Coded aperture compressive temporal imaging', Optics Express 21, 10526-10545 (2013)
- Yuan, 'Generalized alternating projection based total variation minimization for compressive sensing (GAP-TV)', IEEE ICIP 2016
- Wang et al., 'Spatial-Temporal Transformer for Video Snapshot Compressive Imaging (STFormer)', ECCV 2022
Canonical Datasets
- Kobe, Runner, Drop, Traffic (grayscale SCI benchmarks)
- DAVIS 2017 (adapted for SCI simulation)
Coded Aperture Snapshot Spectral Imaging (CASSI)
Description
CASSI captures a 3D hyperspectral data cube (2 spatial + 1 spectral dimension) in a single 2D camera exposure. The scene is modulated by a binary coded aperture mask, spectrally dispersed by a prism, and integrated onto a 2D detector. The forward model is y = H*x + n where H encodes both coded-aperture modulation and spectral-dispersion shift. Compression ratios equal the number of spectral bands (e.g. 28:1). Reconstruction exploits spectral correlation via GAP-TV, MST, or CST.
Principle
Coded Aperture Snapshot Spectral Imaging (CASSI) captures a full 3-D spectral datacube (x, y, λ) in a single 2-D snapshot by encoding the scene with a binary coded aperture and spectrally dispersing it with a prism onto the detector. Different spectral channels are shifted and superimposed on the sensor, creating a compressed measurement. Computational algorithms recover the full datacube from this single measurement using sparsity priors.
How to Build the System
Build an optical relay with an objective lens, place a binary coded aperture (lithographic chrome-on-glass mask or DMD) at an intermediate image plane, disperse with an Amici or double-Amici prism, and re-image onto a high-resolution detector (2048×2048 pixels or more). Precisely calibrate the spectral dispersion curve (nm/pixel). The coded aperture pattern should have ~50% transmittance and good conditioning.
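The single-disperser forward model can be sketched as a mask-then-shift-then-sum operation, assuming a one-pixel-per-band dispersion shift along x (real dispersion curves are nonlinear in wavelength and are calibrated):

```python
import numpy as np

def cassi_forward(cube, mask, step=1):
    """Single-disperser CASSI: mask the cube, shift each spectral channel
    by `step` pixels along x (prism dispersion), and sum on the detector."""
    H, W, L = cube.shape
    y = np.zeros((H, W + step * (L - 1)))     # detector is wider than scene
    coded = cube * mask[:, :, None]           # same mask for every band
    for l in range(L):
        y[:, step * l: step * l + W] += coded[:, :, l]
    return y

rng = np.random.default_rng(0)
L = 28
cube = rng.random((64, 64, L))                     # (x, y, lambda) datacube
mask = (rng.random((64, 64)) > 0.5).astype(float)  # ~50% binary aperture
y = cassi_forward(cube, mask)                      # (64, 64 + 27) measurement
```

Because each band lands at a different lateral offset, the mask pattern decorrelates the superimposed channels, which is what makes compressive recovery of the datacube possible.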
Common Reconstruction Algorithms
- TwIST (Two-step Iterative Shrinkage/Thresholding)
- GAP-TV (Generalized Alternating Projection with Total Variation)
- ADMM with sparsity in DCT or wavelet domain
- Deep unfolding networks (DGSMP, TSA-Net, BIRNAT)
- Plug-and-Play ADMM with learned denoisers
Common Mistakes
- Poor spectral calibration causing wavelength assignment errors across the datacube
- Coded aperture not precisely at the image plane, blurring the code modulation
- Insufficient detector resolution relative to the number of spectral bands
- Ignoring optical aberrations in the dispersive relay that vary with wavelength
- Using a random mask without checking its sensing matrix condition number
How to Avoid Mistakes
- Calibrate spectral mapping with monochromatic sources at known wavelengths
- Mount coded aperture on a precision z-stage and focus to maximize modulation contrast
- Ensure detector pixel count > (spatial pixels × spectral bands) for adequate compression ratio
- Design the relay optics for uniform imaging quality across the spectral range
- Optimize or simulate the mask pattern for low coherence (good RIP) before fabrication
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) grayscale image, but CASSI compresses a 3D spectral datacube (64,64,L wavelengths) into a single 2D coded snapshot via a binary mask and dispersive prism — the spectral dimension is entirely absent
- Without the coded aperture mask and spectral dispersion, the measurement does not encode wavelength-dependent information — spectral unmixing or hyperspectral reconstruction from the fallback output is impossible
How to Correct the Mismatch
- Use the CASSI operator that applies the binary coded aperture mask followed by spectral dispersion (prism/grating shift), producing a 2D coded measurement that encodes the full 3D spectral datacube
- Reconstruct the (x,y,lambda) datacube using compressive sensing (TwIST, GAP-TV) or deep unfolding networks (TSA-Net, MST) that exploit the spatio-spectral structure encoded by the CASSI forward model
Experimental Setup — Signal Chain
Experimental Setup — Details
Benchmark Variants
Key References
- Wagadarikar et al., 'Single disperser design for coded aperture snapshot spectral imaging', Applied Optics 47, B44-B51 (2008)
- Cai et al., 'Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction (MST)', CVPR 2022
Canonical Datasets
- CAVE (Columbia, 32 scenes, 512x512x31)
- KAIST (30 scenes, 2704x3376x28)
- ARAD_1K (1000 hyperspectral images)
Coded Exposure / Flutter Shutter
Coherent Anti-Stokes Raman (CARS) Microscopy
Coherent Diffractive Imaging / Phase Retrieval
Description
Coherent diffractive imaging (CDI) recovers the complex-valued exit wave from a coherent scattering experiment where only the diffraction intensity |F{O}|^2 is measured (the phase is lost). Phase retrieval algorithms (HIO + ER, Fienup) iteratively enforce constraints in both real space (finite support, non-negativity) and reciprocal space (measured intensity). The oversampling condition (sampling at least 2x the Nyquist rate) ensures sufficient information for unique phase recovery. CDI achieves diffraction-limited resolution without imaging optics. Applications include imaging of nanocrystals, viruses, and materials at X-ray and electron wavelengths.
Principle
Coherent Diffractive Imaging (CDI) records the far-field diffraction pattern of an isolated object illuminated by a coherent beam. Only intensity (not phase) is measured on the detector. Phase retrieval algorithms iteratively recover the lost phase by enforcing known constraints: the measured Fourier modulus and the finite support of the object in real space. CDI achieves diffraction-limited resolution without any imaging lens.
How to Build the System
Illuminate an isolated object (nanocrystal, cell, virus particle) with a coherent, quasi-plane-wave beam (X-ray from synchrotron or XFEL, or visible laser). Record the continuous diffraction pattern on a pixel detector (Eiger, Jungfrau for X-ray; CMOS for visible) placed far enough for adequate oversampling (oversampling ratio ≥ 2 in each dimension). Remove the direct beam with a beam stop. Ensure the object is isolated (no other scatterers in the beam).
Common Reconstruction Algorithms
- Hybrid Input-Output (HIO) algorithm
- Error Reduction (ER) algorithm
- Shrink-Wrap (adaptive support HIO)
- Relaxed Averaged Alternating Reflections (RAAR)
- Deep-learning phase retrieval (e.g., prDeep, learned proximal operators)
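The ER iteration listed above can be sketched in a few lines on a toy object with a known loose support (HIO differs only in how the region violating the constraints is updated):

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """ER phase retrieval: alternate between the measured Fourier
    modulus and the real-space support/non-negativity constraints."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # impose |F{x}| = measured
        x = np.real(np.fft.ifft2(X))
        x = np.where(support & (x > 0), x, 0.0)    # support + positivity
    return x

# Toy object: a bright block inside a known support region
obj = np.zeros((32, 32)); obj[12:20, 10:18] = 1.0
support = np.zeros((32, 32), dtype=bool); support[8:24, 6:22] = True
mag = np.abs(np.fft.fft2(obj))          # sqrt of the measured intensity
rec = error_reduction(mag, support)
```

In practice HIO is used to escape stagnation and ER for final refinement, with Shrink-Wrap adapting the support during the run.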
Common Mistakes
- Insufficient oversampling (detector pixels too coarse or too close to sample)
- Object not truly isolated, violating the support constraint
- Missing low-frequency data due to beam stop causing artifacts
- Stagnation in reconstruction (trapped in local minimum) without proper initialization
- Ignoring partial coherence effects from finite source size or bandwidth
How to Avoid Mistakes
- Ensure oversampling ratio ≥ 2× (linear) in each dimension; use a large detector
- Isolate the object on a thin membrane or in free space; verify no neighbor scattering
- Use low-frequency intensity constraints or a semi-transparent beam stop
- Run multiple random starts and use HIO-ER hybrid strategies to escape local minima
- Model partial coherence in the forward model or select sufficiently coherent beams
Forward-Model Mismatch Cases
- The widefield fallback is a linear operator, but phase retrieval measures only the intensity of the Fourier transform: y = |F{x}|^2 — this is a fundamentally nonlinear (quadratic) measurement that makes reconstruction non-convex
- The fallback preserves the spatial structure of the input, but phase retrieval destroys the phase of the Fourier transform — recovering the original signal from magnitude-only Fourier measurements is a fundamentally different (and harder) inverse problem
How to Correct the Mismatch
- Use the phase retrieval operator implementing y = |FFT(x)|^2 (or |F{x * support}|^2 with known support constraint), producing real-valued intensity measurements of the Fourier magnitude
- Reconstruct using iterative phase retrieval algorithms (Gerchberg-Saxton, HIO, ER) or gradient descent on the non-convex loss, which require the correct quadratic forward model
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Miao et al., 'Extending the methodology of X-ray crystallography to non-crystalline specimens', Nature 400, 342-344 (1999)
- Fienup, 'Phase retrieval algorithms: a comparison', Applied Optics 21, 2758-2769 (1982)
Canonical Datasets
- CXIDB (Coherent X-ray Imaging Data Bank)
- Simulated CDI benchmark (Marchesini et al.)
Compressed Ultrafast Photography (CUP)
Cone-Beam Computed Tomography
Description
Cone-beam CT (CBCT) uses a divergent cone-shaped X-ray beam and a flat-panel 2D detector to acquire volumetric data in a single rotation, unlike fan-beam CT which acquires slice-by-slice. The 3D Feldkamp-Davis-Kress (FDK) algorithm performs approximate filtered back-projection for cone geometry. CBCT is widely used in dental, ENT, and image-guided radiation therapy. Primary artifacts include cone-beam artifacts at large cone angles, scatter, and truncation. Sparse-view CBCT reduces scan time and dose but introduces streak artifacts.
Principle
Cone-Beam CT uses a divergent cone-shaped X-ray beam and a 2-D flat-panel detector to acquire a volumetric CT dataset in a single rotation. Unlike multi-slice CT with a narrow fan beam, CBCT covers the full volume simultaneously, enabling faster acquisition but with increased scatter and cone-beam artifacts compared to conventional CT.
How to Build the System
Mount a flat-panel detector (typically 30×40 cm, CsI scintillator) opposite an X-ray tube on a rotating gantry or C-arm. Common implementations: dental CBCT (small FOV, 90 kVp), image-guided radiation therapy CBCT (kV source on linac gantry), and C-arm CBCT (interventional). Calibrate: geometric parameters (source-detector distances, isocenter), detector offset corrections, and scatter correction LUTs.
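For intuition only, a simplified 2D parallel-beam stand-in shows how projection data (a sinogram) is formed from line integrals at each gantry angle; actual CBCT uses divergent cone-beam geometry over a full 3D volume, which this sketch does not model.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, angles_deg):
    """Toy 2D parallel-beam projector: rotate the object, then sum
    along columns to form one line-integral profile per angle."""
    sino = np.stack([
        rotate(img, a, reshape=False, order=1).sum(axis=0)
        for a in angles_deg
    ])
    return sino          # shape: (n_angles, n_detector)

phantom = np.zeros((64, 64))
phantom[20:44, 24:40] = 1.0        # simple rectangular phantom
sino = forward_project(phantom, np.linspace(0, 180, 60, endpoint=False))
```

A cone-beam projector replaces the column sums with integrals along diverging rays from the source to each detector pixel, which is why FDK needs cosine weighting and row-wise filtering before back-projection.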
Common Reconstruction Algorithms
- FDK (Feldkamp-Davis-Kress) cone-beam filtered back-projection
- Iterative CBCT (SART, SIRT with cone-beam projector)
- Scatter correction (measurement-based or Monte Carlo simulation)
- Motion-compensated CBCT (4D-CBCT for respiratory motion)
- Deep-learning CBCT-to-CT synthesis for radiation therapy planning
Common Mistakes
- Severe scatter artifacts (cupping, shading) in large FOV acquisitions
- Cone-beam artifacts near the edges of the FOV (Feldkamp approximation breaks down)
- Truncation artifacts when anatomy extends outside the FOV
- Motion artifacts in thorax/abdomen from respiratory and cardiac motion
- Insufficient angular sampling causing streak artifacts
How to Avoid Mistakes
- Apply scatter correction (anti-scatter grid, software correction, or beam-blocker method)
- Limit cone angle or use exact reconstruction algorithms for large cone angles
- Use extended FOV techniques (shifted detector, multiple scans) for large anatomy
- Apply 4D-CBCT or gated acquisition for moving anatomy
- Acquire sufficient projections (≥600 for a full rotation) with uniform angular spacing
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred (64,64) image, but cone-beam CT acquires a sinogram of shape (n_angles, n_detector_rows * n_detector_cols) from a 2D detector rotating around the patient — the data is a set of cone-beam projections, not a blurred image
- CBCT cone-beam geometry introduces axial cone-angle artifacts (Feldkamp approximation errors) that are absent from the widefield model — any reconstruction expecting cone-beam projection data will fail with the blurred image
How to Correct the Mismatch
- Use the CBCT operator implementing cone-beam projection (Radon transform in 3D divergent geometry) for each source-detector angle, producing the correct sinogram/projection data shape
- Reconstruct using FDK (Feldkamp-Davis-Kress) algorithm or iterative cone-beam methods (SART, ADMM) with the correct cone-beam system matrix
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Feldkamp et al., 'Practical cone-beam algorithm', JOSA A 1, 612-619 (1984)
Canonical Datasets
- ICASSP 2024 CBCT Challenge
Confocal 3D Z-Stack
Description
Three-dimensional confocal imaging by acquiring a z-stack of optical sections. Each slice is convolved with the 3D confocal PSF. The anisotropic PSF (axial resolution ~3x worse than lateral) is a key challenge. 3D Richardson-Lucy or CARE-3D are used for volumetric deconvolution. The forward model is y(x,y,z) = PSF_3d *** x(x,y,z) + n where *** denotes 3D convolution.
Principle
Same confocal principle as live-cell mode but acquiring a full z-stack by stepping the objective or sample through the focal plane. Each optical section is convolved with the 3-D confocal PSF, and the full volume is reconstructed by 3-D deconvolution to recover isotropic resolution.
How to Build the System
Use a high-NA objective (60-100x, 1.4 NA oil or 1.2 NA water) with a piezo z-stage for precise, repeatable z-steps (typ. 200-300 nm). Acquire z-stacks covering the specimen thickness with Nyquist z-sampling. For fixed samples, oil immersion is preferred; for thick tissue, use silicone oil or glycerol objectives to minimize RI mismatch deep in the sample.
Common Reconstruction Algorithms
- 3-D Richardson-Lucy deconvolution
- 3-D Wiener / Tikhonov deconvolution
- Huygens Professional iterative deconvolution
- DeconvolutionLab2 (GPU-accelerated 3-D)
- Deep-learning volumetric restoration (3-D U-Net, RCAN3D)
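The 3-D Richardson-Lucy update listed first can be sketched with an anisotropic Gaussian PSF standing in for a measured or Born & Wolf PSF (the ~3x wider axial extent mimics the confocal anisotropy):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(y, psf, n_iter=20, eps=1e-12):
    """3D RL deconvolution: x <- x * (psf_flip * (y / (psf * x)))."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1, ::-1]          # mirrored PSF for the adjoint
    x = np.full_like(y, y.mean())             # flat non-negative initial guess
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode='same')
        ratio = y / (blurred + eps)
        x = x * fftconvolve(ratio, psf_flip, mode='same')
    return x

# Anisotropic Gaussian PSF: axial (z) sigma 3x the lateral sigma
zz, yy, xx = np.mgrid[-4:5, -4:5, -4:5]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.0**2) - zz**2 / (2 * 3.0**2))
vol = np.zeros((16, 32, 32)); vol[8, 16, 16] = 100.0   # point source
blurred = fftconvolve(vol, psf / psf.sum(), mode='same')
restored = richardson_lucy_3d(blurred, psf)
```

Because the convolution runs over all three axes at once, inter-slice correlations from the 3-D PSF are used, which slice-by-slice 2-D deconvolution cannot do.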
Common Mistakes
- Using z-step larger than Nyquist, causing axial aliasing
- Depth-dependent spherical aberration from RI mismatch not corrected
- Not accounting for signal attenuation deeper in the sample
- Applying 2-D deconvolution slice-by-slice instead of full 3-D
- Incorrect PSF model (2-D Gaussian instead of 3-D Born & Wolf model)
How to Avoid Mistakes
- Calculate Nyquist z-step (λ / (4·n·(1-cos α))) and sample accordingly
- Use depth-dependent PSF models or adaptive optics for thick specimens
- Apply intensity normalization per z-slice before deconvolution
- Always perform true 3-D deconvolution to preserve axial information
- Use measured 3-D PSF from sub-diffraction beads embedded at the correct depth
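Plugging assumed example values into the Nyquist z-step formula above (520 nm emission, 1.4 NA oil-immersion objective, n = 1.515):

```python
import math

# Nyquist axial step: dz = lambda / (4 * n * (1 - cos(alpha)))
lam, NA, n = 520.0, 1.4, 1.515       # nm, numerical aperture, immersion RI
alpha = math.asin(NA / n)            # half-aperture angle of the objective
dz = lam / (4 * n * (1 - math.cos(alpha)))   # ≈ 139 nm
```

For these values the formula gives roughly 140 nm, i.e. high-NA work calls for z-steps at the low end of (or below) the typical 200-300 nm range quoted above.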
Forward-Model Mismatch Cases
- The widefield fallback processes only 2D (64,64) images, but confocal 3D requires volumetric input (32,64,64) — the entire z-stack is discarded, losing all axial information
- Applying 2D deconvolution slice-by-slice instead of true 3D deconvolution produces incorrect axial resolution and misses inter-slice correlations from the 3D PSF
How to Correct the Mismatch
- Use the 3D confocal operator that processes full z-stack volumes with the anisotropic 3D PSF (worse axial than lateral resolution)
- Perform true 3D deconvolution using the measured or modeled 3D confocal PSF; never decompose a z-stack into independent 2D slices
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- McNally et al., 'Three-dimensional imaging by deconvolution microscopy', Methods 19, 373-385 (1999)
- Weigert et al., 'Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks', MICCAI 2017
Canonical Datasets
- Planaria 3D confocal dataset (Weigert et al.)
- BioSR confocal 3D subset
Confocal Laser Endomicroscopy (CLE)
Confocal Live-Cell Microscopy
Description
Laser scanning confocal microscopy for live-cell imaging. A focused laser scans the specimen point by point, and a pinhole rejects out-of-focus light. The image formation is modelled as convolution with the confocal PSF (product of excitation and detection PSFs). Fast acquisition rates for live cells often sacrifice SNR due to short pixel dwell times. Reconstruction involves deconvolution with the confocal PSF and temporal denoising across frames.
Principle
A focused laser spot is scanned across the specimen and a pinhole in front of the detector rejects out-of-focus fluorescence, providing optical sectioning. The image formation is modeled as a point-by-point convolution with the confocal PSF (product of excitation and detection PSFs). For live-cell work, speed and gentleness are prioritized.
How to Build the System
Equip a laser-scanning confocal head (e.g., Nikon A1R, Zeiss LSM 980 Airyscan) on an inverted microscope with an environmental enclosure. Use a resonant scanner for fast (30 fps) imaging. Set the pinhole to 1 Airy unit for best sectioning, or open it slightly (1.2 AU) for more signal. Use 40-60x water-immersion objectives for live cells to match the refractive index of aqueous media.

Common Reconstruction Algorithms
- Airyscan joint deconvolution (Zeiss)
- Richardson-Lucy with measured confocal PSF
- Sparse deconvolution (Hessian regularization)
- Deep-learning denoising (Noise2Fast, DnCNN)
- Pixel reassignment (ISM) for resolution doubling
Common Mistakes
- Setting pinhole too small, drastically reducing signal in live cells
- Scanning too slowly, causing phototoxicity and photobleaching
- Using oil-immersion objectives for aqueous samples, introducing spherical aberration
- Ignoring chromatic aberration when imaging multiple channels simultaneously
- Oversampling (too many pixels) leading to excessive total dose with no resolution gain
How to Avoid Mistakes
- Match pinhole to 1 AU and use resonant scanning + frame averaging for speed
- Minimize pixel dwell time and total exposure; use sensitive GaAsP detectors
- Select water-immersion objectives for live aqueous samples
- Calibrate chromatic offsets with multi-color beads and apply corrections
- Follow Nyquist sampling (pixel size ~ 0.4× resolution limit); avoid oversampling
Forward-Model Mismatch Cases
- The widefield fallback uses sigma=2.0, but confocal PSF is sharper (sigma~1.2-1.5) due to the pinhole rejecting out-of-focus light — the fallback over-blurs by 30-60%, destroying resolvable features
- Confocal provides optical sectioning (only in-focus plane contributes signal), while widefield collects fluorescence from all planes — reconstructions using widefield PSF will have incorrect out-of-focus model
How to Correct the Mismatch
- Use the confocal operator with the correct PSF (product of excitation and detection PSFs, effective sigma~1.2-1.5) matching the pinhole size and objective NA
- Model the confocal sectioning effect explicitly; for live-cell work, use the confocal PSF that accounts for pinhole size (1 Airy unit) and emission wavelength
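The sigma range cited above can be sanity-checked numerically: in the small-pinhole limit the confocal PSF is the product of excitation and detection PSFs, which narrows a sigma = 2.0 widefield Gaussian to sigma ≈ 1.4. This sketch treats both PSFs as equal Gaussians, an approximation rather than a claim about any specific instrument:

```python
import numpy as np

def gaussian_psf_2d(sigma, n=33):
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x, indexing="ij")
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return g / g.sum()

sigma_wf = 2.0  # widefield fallback width (pixels)
# Small-pinhole confocal PSF: product of excitation and detection PSFs
h_conf = gaussian_psf_2d(sigma_wf) * gaussian_psf_2d(sigma_wf)
h_conf /= h_conf.sum()

# Effective sigma from the second moment: in 2D, sum(h * r^2) = 2 * sigma^2
x = np.arange(33) - 33 // 2
r2 = x[:, None] ** 2 + x[None, :] ** 2
sigma_conf = np.sqrt((h_conf * r2).sum() / 2)  # ~2.0 / sqrt(2) ~ 1.41
```

The result lands inside the 1.2-1.5 range quoted for the confocal operator, illustrating why reusing the sigma = 2.0 widefield kernel over-blurs.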
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Minsky, 'Memoir on inventing the confocal microscope', Scanning 10, 128-138 (1988)
- McNally et al., 'Three-dimensional imaging by deconvolution microscopy', Methods 23, 210-217 (1999)
Canonical Datasets
- Cell Tracking Challenge confocal sequences
- BioSR confocal subset
Contrast-Enhanced Ultrasound (CEUS)
Contrast-Enhanced Ultrasound (CEUS)
Correlative Light-Electron Microscopy (CLEM)
Correlative Light-Electron Microscopy (CLEM)
Cryo-Electron Tomography (Cryo-ET)
Cryo-Electron Tomography (Cryo-ET)
Cryo-EM Single Particle Analysis
Cryo-EM Single Particle Analysis
CT + Fluorescence (FLIT)
CT + Fluorescence (FLIT)
Dark-Field Microscopy
Dark-Field Microscopy
DESI Mass Spectrometry Imaging
DESI Mass Spectrometry Imaging
Differential Interference Contrast (DIC)
Differential Interference Contrast (DIC)
Diffuse Optical Tomography
Diffuse optical tomography (DOT) reconstructs 3D maps of tissue optical properties (absorption mu_a and reduced scattering mu_s') by measuring near-infrared light transport through highly scattering tissue. Multiple source-detector pairs on the tissue surface sample the diffuse photon field. The forward model is the diffusion equation: light propagation is modelled as a diffusive process with the photon fluence depending on the spatial distribution of mu_a and mu_s'. Reconstruction linearizes around a homogeneous background (Born/Rytov approximation) or uses nonlinear iterative methods. Applications include breast imaging and functional brain imaging (fNIRS-DOT).
Diffuse Optical Tomography
Description
Diffuse optical tomography (DOT) reconstructs 3D maps of tissue optical properties (absorption mu_a and reduced scattering mu_s') by measuring near-infrared light transport through highly scattering tissue. Multiple source-detector pairs on the tissue surface sample the diffuse photon field. The forward model is the diffusion equation: light propagation is modelled as a diffusive process with the photon fluence depending on the spatial distribution of mu_a and mu_s'. Reconstruction linearizes around a homogeneous background (Born/Rytov approximation) or uses nonlinear iterative methods. Applications include breast imaging and functional brain imaging (fNIRS-DOT).
Principle
Diffuse Optical Tomography reconstructs 3-D maps of tissue optical properties (absorption μₐ and reduced scattering μ'ₛ) from measurements of multiply scattered near-infrared light transmitted through tissue. Multiple source-detector pairs on the tissue surface provide overlapping sensitivity profiles. The diffusion equation models light propagation in the multiple-scattering regime.
How to Build the System
Place fiber-coupled NIR sources (670-850 nm laser diodes, CW or frequency-domain modulated at 100-300 MHz, or time-domain pulsed) and detector fibers (avalanche photodiodes or PMTs) on the tissue surface in an array. A multiplexer switches between source positions. For breast DOT, 32-128 optode positions on a cup or ring geometry. Calibrate with known optical phantoms (Intralipid + ink solutions).
Common Reconstruction Algorithms
- Normalized Born approximation (linearized diffuse optical tomography)
- Nonlinear Newton-type iterative reconstruction (Gauss-Newton, Levenberg-Marquardt)
- Finite-element method (FEM) based forward solver + Tikhonov regularization
- TOAST++ (Time-resolved Optical Absorption and Scattering Tomography)
- Deep-learning DOT (learned regularization, direct inversion networks)
Common Mistakes
- Poor optode-tissue coupling due to hair, uneven surfaces, or insufficient pressure
- Inadequate source-detector pair coverage causing reconstruction blind spots
- Cross-talk between source channels if multiplexing is not properly timed
- Using the diffusion approximation too close to sources or in low-scattering regions
- Ignoring tissue heterogeneity in the background optical property estimate
How to Avoid Mistakes
- Use spring-loaded optodes with coupling checks; shave hair in the measurement area
- Design source-detector geometry with overlapping sensitivity to cover the volume of interest
- Ensure clean channel switching with adequate settling time between multiplexed measurements
- Use higher-order transport models (radiative transfer) near sources if needed
- Initialize reconstruction with patient-specific anatomical prior (from MRI or CT)
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but Diffuse Optical Tomography acquires boundary measurements (source-detector pairs) — output shape (64,) is a 1D vector of photon counts at detector positions
- DOT measurement physics involves diffuse light propagation through scattering tissue (modeled by the diffusion equation), which is fundamentally different from surface-level Gaussian blur — the fallback cannot model subsurface absorption and scattering
How to Correct the Mismatch
- Use the DOT operator that models photon transport via the diffusion equation: Jacobian maps from interior optical properties (absorption, scattering) to boundary measurements at each source-detector pair
- Reconstruct interior absorption/scattering maps using Tikhonov-regularized inversion or iterative methods (conjugate gradient) with the correct diffusion-equation-based forward model
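In the linearized (Born) setting, the Tikhonov-regularized step above reduces to a small least-squares solve. A toy sketch, where the Jacobian J stands in for a diffusion-equation sensitivity matrix that would in practice come from an FEM forward solver such as TOAST++:

```python
import numpy as np

def dot_tikhonov(J, dy, lam=1e-2):
    """Linearized DOT inversion: recover the interior optical-property
    perturbation x from boundary data perturbation dy = J @ x.
    J: (n_measurements, n_voxels) sensitivity (Jacobian) matrix.
    Solves min ||J x - dy||^2 + lam * ||x||^2 via the normal equations."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)
```

The regularization weight lam trades spatial resolution against noise amplification; in clinical DOT it is tuned per system (e.g., by the L-curve).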
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Arridge, 'Optical tomography in medical imaging', Inverse Problems 15, R41-R93 (1999)
- Boas et al., 'Imaging the body with diffuse optical tomography', IEEE Signal Processing Magazine 18, 57-75 (2001)
Canonical Datasets
- UCL DOT phantom datasets
- BU fNIRS-DOT brain imaging benchmarks
Diffusion MRI (DTI)
Diffusion MRI measures the random Brownian motion of water molecules in tissue by applying magnetic field gradient pulses that encode microscopic displacement. The signal attenuation follows S = S_0 * exp(-b * D_eff) where b is the diffusion weighting factor and D_eff is the effective diffusion coefficient along the gradient direction. Acquiring measurements in multiple gradient directions enables estimation of the diffusion tensor (DTI) and derived scalar maps (FA, MD, AD, RD). Advanced models (NODDI, CSD) resolve intra-voxel fiber crossings. Primary degradations include EPI distortion, eddy currents, and motion sensitivity.
Diffusion MRI (DTI)
Description
Diffusion MRI measures the random Brownian motion of water molecules in tissue by applying magnetic field gradient pulses that encode microscopic displacement. The signal attenuation follows S = S_0 * exp(-b * D_eff) where b is the diffusion weighting factor and D_eff is the effective diffusion coefficient along the gradient direction. Acquiring measurements in multiple gradient directions enables estimation of the diffusion tensor (DTI) and derived scalar maps (FA, MD, AD, RD). Advanced models (NODDI, CSD) resolve intra-voxel fiber crossings. Primary degradations include EPI distortion, eddy currents, and motion sensitivity.
Principle
Diffusion MRI sensitizes the MR signal to the Brownian motion of water molecules by applying strong magnetic field gradient pulses (Stejskal-Tanner scheme). In fibrous tissue (e.g., white matter), water diffuses preferentially along fibers, creating directional diffusion anisotropy. Diffusion Tensor Imaging (DTI) models this as a 3×3 tensor; higher-order models (HARDI, CSD) resolve crossing fibers.
How to Build the System
Acquire on a 3T scanner with high-performance gradients (amplitude 80 mT/m, slew rate 200 T/m/s). Use spin-echo EPI with multiple b-values (e.g., b=0, 1000, 2000 s/mm²) and 30-300 diffusion directions uniformly distributed on the sphere. Include reverse-phase-encode b=0 images for EPI distortion correction. Multi-band (SMS) acceleration reduces scan time. Typical parameters: 2 mm isotropic, TE 60-90 ms, TR 3-5 s.
Common Reconstruction Algorithms
- DTI tensor fitting (least-squares or weighted least-squares)
- CSD (Constrained Spherical Deconvolution) for fiber orientation distribution
- NODDI (Neurite Orientation Dispersion and Density Imaging)
- Probabilistic tractography (FSL probtrackx, MRtrix3 iFOD2)
- Deep-learning tract segmentation (TractSeg, DeepBundle)
Common Mistakes
- Eddy current and EPI geometric distortions not corrected, causing tract errors
- Insufficient number of diffusion directions for the chosen model complexity
- Using DTI in regions with crossing fibers, producing incorrect FA and tract directions
- Susceptibility-induced signal dropout near air-tissue interfaces (sinuses, temporal lobes)
- Head motion between diffusion volumes causing inter-volume misalignment
How to Avoid Mistakes
- Apply FSL eddy or equivalent for eddy current, motion, and susceptibility correction
- Use ≥30 directions for DTI, ≥60 for CSD, and ≥90 for multi-shell models
- Use multi-fiber models (CSD, NODDI) in regions known to have crossing fibers
- Use reduced FOV or multi-shot EPI near susceptibility-prone regions
- Include interspersed b=0 volumes for robust motion and drift correction
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred spatial image, but diffusion MRI applies magnetic field gradients to encode Brownian water motion — the Stejskal-Tanner signal attenuation S = S_0*exp(-b*D) is not modeled
- Diffusion MRI acquires multiple volumes at different b-values and gradient directions to measure the diffusion tensor at each voxel — the widefield single-image model cannot encode directional water diffusivity or fiber orientation
How to Correct the Mismatch
- Use the diffusion MRI operator that applies Stejskal-Tanner encoding: y_i = FFT(x * exp(-b_i * g_i^T * D * g_i)) for each gradient direction g_i and b-value b_i
- Reconstruct diffusion tensors (DTI) or fiber orientation distributions (CSD, NODDI) from the multi-direction, multi-b-value measurements using the correct diffusion-weighted forward model
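A minimal log-linear DTI fit consistent with the Stejskal-Tanner attenuation above (a sketch with an illustrative function name; production pipelines use weighted least squares and robust outlier rejection):

```python
import numpy as np

def fit_dti(signals, s0, bvals, bvecs):
    """Fit the diffusion tensor by log-linear least squares.
    signals: (n,) diffusion-weighted signals; s0: b=0 signal;
    bvals: (n,) in s/mm^2; bvecs: (n, 3) unit gradient directions."""
    y = -np.log(signals / s0)  # equals b * g^T D g per measurement
    g = bvecs
    # Design matrix for the 6 unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    X = bvals[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    d, *_ = np.linalg.lstsq(X, y, rcond=None)
    D = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
    evals = np.linalg.eigvalsh(D)
    md = evals.mean()                                   # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((evals - md)**2) / np.sum(evals**2))
    return D, fa, md
```

With at least 6 non-collinear directions the system is determined; the ≥30 directions recommended above condition the fit against noise.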
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Basser et al., 'MR diffusion tensor spectroscopy and imaging', Biophysical Journal 66, 259-267 (1994)
- Sotiropoulos et al., 'Advances in diffusion MRI acquisition and processing in the HCP', NeuroImage 80, 125-143 (2013)
Canonical Datasets
- Human Connectome Project (HCP) diffusion data
- UK Biobank diffusion imaging
Digital Breast Tomosynthesis (DBT)
Digital Breast Tomosynthesis (DBT)
Digital Holographic Microscopy
Digital holographic microscopy (DHM) records the interference pattern between an object wave (scattered by the sample) and a reference wave on a digital sensor. The hologram encodes both amplitude and phase of the object wavefield. In off-axis configuration, the object spectrum is separated from the zero-order and twin-image terms in Fourier space. Numerical propagation (angular spectrum method) refocuses the wavefield at any desired plane, enabling quantitative phase imaging (QPI) with nanometer path-length sensitivity. Applications include label-free cell imaging and topography measurement.
Digital Holographic Microscopy
Description
Digital holographic microscopy (DHM) records the interference pattern between an object wave (scattered by the sample) and a reference wave on a digital sensor. The hologram encodes both amplitude and phase of the object wavefield. In off-axis configuration, the object spectrum is separated from the zero-order and twin-image terms in Fourier space. Numerical propagation (angular spectrum method) refocuses the wavefield at any desired plane, enabling quantitative phase imaging (QPI) with nanometer path-length sensitivity. Applications include label-free cell imaging and topography measurement.
Principle
Digital holographic microscopy records the interference pattern (hologram) between a reference wave and the wave scattered by the sample. The complex field (amplitude and phase) is recovered by numerical propagation of the recorded hologram to the object plane. Phase imaging reveals optical path length changes caused by refractive index or thickness variations, providing quantitative phase contrast without staining.
How to Build the System
Build an off-axis Mach-Zehnder interferometer: split a coherent source (He-Ne laser, 633 nm, or laser diode) into object and reference beams. The object beam passes through the sample via a microscope objective. The reference beam tilts at a small angle (off-axis) to create carrier fringes. Both beams interfere on a CMOS camera. The carrier frequency must be high enough to separate the twin image in Fourier space. Vibration isolation is essential.
Common Reconstruction Algorithms
- Fourier filtering (off-axis hologram: spatial filtering of +1 order)
- Angular spectrum propagation method
- Phase unwrapping (Goldstein, quality-guided, or least-squares)
- Numerical autofocusing (Tamura coefficient, Brenner gradient)
- Deep-learning phase retrieval (PhaseNet, holographic reconstruction CNN)
Common Mistakes
- Vibration causing fringe instability and phase noise
- Twin image and DC term not properly separated in on-axis holography
- Phase wrapping artifacts not resolved in thick or rapidly varying samples
- Coherence noise (speckle) from high temporal coherence of the laser source
- Incorrect propagation distance causing defocused reconstruction
How to Avoid Mistakes
- Use an optical table with active vibration isolation; enclose the setup
- Use off-axis geometry with sufficient carrier frequency for clean Fourier separation
- Apply robust phase unwrapping algorithms; use multi-wavelength for large OPD
- Use a low-coherence source (LED or SLD) for speckle reduction in off-axis DHM
- Implement numerical autofocusing or calibrate propagation distance precisely
Forward-Model Mismatch Cases
- The widefield fallback produces real-valued output, but holography records complex-valued interference between object and reference waves — the phase information encoding 3D depth and optical path length is completely lost
- The interference fringe pattern (I = |E_ref + E_obj|^2) encodes both amplitude and phase of the object wave, enabling numerical refocusing — the Gaussian blur destroys the fringe structure and all quantitative phase information
How to Correct the Mismatch
- Use the holography operator that models the coherent interference between object wave (after propagation) and reference wave, producing complex-valued holographic data
- Reconstruct amplitude and phase by digital holographic processing: Fourier filtering to isolate the sideband, numerical back-propagation using the angular spectrum method or Fresnel transform
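The angular spectrum back-propagation step can be sketched as follows (sampling and sign conventions are assumptions of this sketch; a real pipeline performs the Fourier sideband filtering first):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z (angular spectrum method).
    field: (n, n) complex array; dx: pixel pitch; all lengths in the same units.
    Negative z numerically refocuses (back-propagates) the hologram-plane field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2     # (kz / 2pi)^2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

For propagating frequencies the transfer function is a pure phase, so propagating by +z and then -z returns the original field — the property that makes numerical refocusing lossless.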
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Cuche et al., 'Digital holography for quantitative phase-contrast imaging', Optics Letters 24, 291-293 (1999)
- Kim, 'Principles and techniques of digital holographic microscopy', SPIE Reviews 1, 018005 (2010)
Canonical Datasets
- Lyncee Tec DHM application datasets
- HoloGAN benchmark (simulated holograms)
DNA-PAINT Super-Resolution
DNA-PAINT Super-Resolution
Doppler Ultrasound
Doppler ultrasound measures blood flow velocity by detecting the frequency shift of ultrasound echoes reflected from moving red blood cells. The Doppler shift f_d = 2*f_0*v*cos(theta)/c relates velocity v to the observed frequency shift. Color Doppler maps 2D velocity fields by applying autocorrelation estimators to ensembles of pulse-echo data at each spatial location. A wall filter (high-pass) separates slow tissue clutter from blood flow signals. Challenges include aliasing when velocity exceeds the Nyquist limit (PRF/2) and angle-dependence of the velocity estimate.
Doppler Ultrasound
Description
Doppler ultrasound measures blood flow velocity by detecting the frequency shift of ultrasound echoes reflected from moving red blood cells. The Doppler shift f_d = 2*f_0*v*cos(theta)/c relates velocity v to the observed frequency shift. Color Doppler maps 2D velocity fields by applying autocorrelation estimators to ensembles of pulse-echo data at each spatial location. A wall filter (high-pass) separates slow tissue clutter from blood flow signals. Challenges include aliasing when velocity exceeds the Nyquist limit (PRF/2) and angle-dependence of the velocity estimate.
Principle
Doppler ultrasound measures blood flow velocity by detecting the frequency shift of echoes reflected from moving red blood cells. The Doppler equation relates the frequency shift to velocity: Δf = 2f₀·v·cos(θ)/c, where θ is the beam-flow angle. Color Doppler maps velocity spatially, spectral Doppler provides velocity-time waveforms at a sample volume, and power Doppler shows flow amplitude regardless of direction.
How to Build the System
Use a clinical ultrasound system with Doppler capability. For vascular studies, use a linear array transducer (5-12 MHz). Steer the beam to achieve a Doppler angle <60° to the vessel axis. Set the velocity scale (PRF) to match expected flow speeds (avoid aliasing). For spectral Doppler, place the sample volume within the vessel lumen and adjust the gate size. Angle correction must be applied for accurate velocity measurements.
Common Reconstruction Algorithms
- Autocorrelation-based color flow estimation (Kasai algorithm)
- FFT spectral analysis for pulsed-wave Doppler
- Clutter filtering (wall filtering) to remove tissue motion
- Power Doppler (amplitude mode) for slow flow detection
- Ultrafast Doppler (plane-wave compounding) for functional ultrasound
Common Mistakes
- Doppler angle >60° causing large velocity measurement errors
- Aliasing in color or spectral Doppler from PRF set too low for flow velocity
- Wall filter too aggressive, eliminating slow venous flow signals
- Blooming artifact in color Doppler from excessive gain
- Not correcting for angle in spectral Doppler velocity measurements
How to Avoid Mistakes
- Maintain Doppler angle <60°; ideally 30-60° for best accuracy
- Increase PRF (velocity scale) until aliasing resolves; or use CW Doppler
- Reduce wall filter setting when looking for slow flow (venous, microvascular)
- Reduce color Doppler gain until color just fills the vessel without overflow
- Always apply angle correction cursor parallel to the vessel wall for spectral Doppler
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but Doppler ultrasound acquires velocity-encoded data — output includes blood flow velocity maps estimated from phase shifts between consecutive pulses
- Doppler measurement relies on the frequency shift of backscattered ultrasound from moving blood cells (f_d = 2*v*cos(theta)*f_0/c) — the widefield spatial blur has no velocity or frequency-shift information
How to Correct the Mismatch
- Use the Doppler ultrasound operator that models pulsed-wave Doppler: multiple pulses along each line, with phase differences between returns encoding blood flow velocity
- Estimate velocity using autocorrelation (Kasai estimator) or spectral Doppler analysis on the correctly modeled multi-pulse RF data, then map to color flow images
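A minimal Kasai (lag-one autocorrelation) estimator over a slow-time ensemble; the sign convention relating positive phase to flow direction is an assumption of this sketch:

```python
import numpy as np

def kasai_velocity(iq, f0, prf, c=1540.0):
    """Estimate axial blood velocity from an ensemble of IQ samples.
    iq: complex array with the slow-time (pulse) axis first;
    f0: transmit frequency (Hz); prf: pulse repetition frequency (Hz);
    c: speed of sound (m/s). Returns velocity in m/s."""
    # Lag-1 autocorrelation along slow time
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)
    phase = np.angle(r1)  # mean Doppler phase advance per pulse, in (-pi, pi]
    # Invert f_d = 2 f0 v / c with f_d = phase * prf / (2 pi)
    return c * prf * phase / (4 * np.pi * f0)
```

Because np.angle wraps at ±π, velocities above c·PRF/(4·f0) alias — the same Nyquist limit flagged in the Common Mistakes list.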
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Kasai et al., 'Real-time two-dimensional blood flow imaging using an autocorrelation technique', IEEE Trans. Sonics Ultrasonics 32, 458-464 (1985)
Canonical Datasets
- Clinical Doppler benchmark collections
Dual-Energy X-ray Absorptiometry
DEXA measures bone mineral density (BMD) by acquiring two X-ray projections at different energies (typically 70 and 140 kVp) and decomposing the attenuation into bone and soft-tissue components using their known energy-dependent mass attenuation coefficients. The dual-energy forward model is y_E = I_0(E) * exp(-(mu_b(E)*t_b + mu_s(E)*t_s)) + n for each energy E. Output is areal BMD (g/cm^2) and T-score for osteoporosis diagnosis. Precision errors of ~1% are achievable.
Dual-Energy X-ray Absorptiometry
Description
DEXA measures bone mineral density (BMD) by acquiring two X-ray projections at different energies (typically 70 and 140 kVp) and decomposing the attenuation into bone and soft-tissue components using their known energy-dependent mass attenuation coefficients. The dual-energy forward model is y_E = I_0(E) * exp(-(mu_b(E)*t_b + mu_s(E)*t_s)) + n for each energy E. Output is areal BMD (g/cm^2) and T-score for osteoporosis diagnosis. Precision errors of ~1% are achievable.
Principle
Dual-Energy X-ray Absorptiometry uses two X-ray beam energies to decompose the body into bone mineral and soft tissue compartments. The differential attenuation of the two energies allows separation of bone from soft tissue. Bone mineral density (BMD, g/cm²) is computed by comparing attenuation to calibration phantoms.
How to Build the System
A DEXA scanner (Hologic Discovery/Horizon or GE Lunar) uses a fan-beam or pencil-beam X-ray source with two energies (typically 70 and 140 kVp, or k-edge filtration). The detector is directly opposite the source below the patient table. Daily quality assurance with a calibration phantom (anthropomorphic spine) is mandatory. Cross-calibration is needed when changing scanners. Scan modes include AP spine, dual femur, whole body, and lateral vertebral assessment.
Common Reconstruction Algorithms
- Dual-energy decomposition (two-material model: bone + soft tissue)
- Edge detection for region-of-interest (ROI) identification
- BMD calculation relative to calibration phantom
- T-score / Z-score computation against normative databases
- Body composition analysis (lean mass, fat mass from whole-body scans)
Common Mistakes
- Patient positioning errors (rotation, wrong vertebral level) affecting BMD
- Not removing metal objects (belts, jewelry) that artifactually increase BMD
- Comparing BMD values from different scanner manufacturers without cross-calibration
- Degenerative changes (osteophytes) falsely elevating spine BMD
- Analyzing the wrong vertebral levels or including fractured vertebrae
How to Avoid Mistakes
- Standardize patient positioning with positioning aids; verify on scout image
- Remove all metal from scan field; use lateral spine view to avoid artifacts
- Use same scanner for serial monitoring; cross-calibrate if changing equipment
- Evaluate AP spine image for degenerative changes; consider lateral spine or femur
- Follow ISCD guidelines for vertebral inclusion/exclusion criteria in analysis
Forward-Model Mismatch Cases
- The widefield fallback produces a single 2D (64,64) image, but DEXA acquires dual-energy X-ray measurements — output shape (2,64,64) has two channels (high and low energy) for material decomposition
- DEXA uses the energy-dependent difference in attenuation between bone and soft tissue to measure bone mineral density — the single-energy widefield blur cannot distinguish materials and produces no BMD information
How to Correct the Mismatch
- Use the DEXA operator that models dual-energy Beer-Lambert transmission: y_E = I_0(E) * exp(-(mu_bone(E)*t_bone + mu_tissue(E)*t_tissue)) for E = low and high energy
- Decompose the dual-energy measurements into bone and soft tissue components using the known energy-dependent attenuation coefficients to compute areal bone mineral density (g/cm^2)
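Once the Beer-Lambert line integrals are formed, the two-material decomposition above is a 2×2 linear solve per pixel. A sketch with illustrative (not tabulated) attenuation coefficients:

```python
import numpy as np

def dexa_decompose(y, i0, mu):
    """Two-material decomposition from dual-energy transmission measurements.
    y, i0: length-2 arrays of measured and incident intensities (low, high energy);
    mu: 2x2 matrix, rows = energy, columns = (mu_bone, mu_tissue).
    Returns (t_bone, t_tissue) areal thicknesses."""
    # Beer-Lambert line integrals: mu_b(E)*t_b + mu_s(E)*t_s at each energy
    line_integrals = -np.log(np.asarray(y) / np.asarray(i0))
    t = np.linalg.solve(np.asarray(mu, dtype=float), line_integrals)
    return t[0], t[1]
```

The solve is well-posed only because mu_bone/mu_tissue differs between the two energies (the matrix would be singular for proportional rows), which is why two spectra are acquired.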
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Blake & Fogelman, 'The role of DXA bone density scans in the diagnosis and treatment of osteoporosis', Postgrad. Med. J. 83, 509-517 (2007)
Canonical Datasets
- NHANES DXA reference data (CDC)
Eddy Current Imaging
Eddy Current Imaging
Electrical Impedance Tomography (EIT)
Electrical Impedance Tomography (EIT)
Electron Backscatter Diffraction
EBSD maps crystallographic orientation by tilting a polished specimen to ~70 degrees in an SEM and recording Kikuchi diffraction patterns on a phosphor screen. Each pattern encodes the local crystal orientation, which is determined by automated indexing (Hough transform or dictionary indexing). Scanning the beam produces orientation maps (IPF), grain boundary maps, and texture information. Challenges include pattern quality degradation from surface damage, pseudosymmetry in indexing, and angular resolution limitations (~0.5 deg).
Electron Backscatter Diffraction
Description
EBSD maps crystallographic orientation by tilting a polished specimen to ~70 degrees in an SEM and recording Kikuchi diffraction patterns on a phosphor screen. Each pattern encodes the local crystal orientation, which is determined by automated indexing (Hough transform or dictionary indexing). Scanning the beam produces orientation maps (IPF), grain boundary maps, and texture information. Challenges include pattern quality degradation from surface damage, pseudosymmetry in indexing, and angular resolution limitations (~0.5 deg).
Principle
Electron Backscatter Diffraction (EBSD) maps the crystallographic orientation of polycrystalline materials at each surface point. A focused electron beam (15-30 keV) strikes a tilted (70°) polished specimen, generating backscattered electrons that form Kikuchi diffraction patterns on a phosphor screen/CMOS camera. Automated pattern indexing determines the crystal orientation at each point with ~0.5° angular resolution.
How to Build the System
Install an EBSD detector (phosphor screen + CCD/CMOS camera, e.g., Oxford Instruments Symmetry, EDAX Velocity) in an SEM chamber. Tilt the specimen to 70° toward the detector. Polish the sample surface to remove any deformation layer (final step: colloidal silica or ion milling). Set accelerating voltage 15-30 kV, high probe current (1-20 nA). Map with step sizes of 50 nm to 5 μm depending on grain size.
Common Reconstruction Algorithms
- Hough transform band detection for Kikuchi pattern indexing
- Dictionary indexing (template matching against simulated patterns)
- Spherical indexing (GPU-accelerated orientation determination)
- Neighbor pattern averaging and reindexing (NPAR) for noisy patterns
- Deep-learning EBSD pattern indexing (faster and more robust than Hough)
Common Mistakes
- Poor surface preparation leaving a deformed layer that degrades pattern quality
- Camera settings (gain, exposure) not optimized, producing noisy or saturated patterns
- Step size too large relative to the grain size, missing small grains or twin boundaries
- Incorrect crystal structure or phase files used for indexing
- Drift during long-duration EBSD maps distorting the scanned area
How to Avoid Mistakes
- Use final polishing with colloidal silica (OPS) or broad Ar-ion milling
- Optimize camera parameters with a reference crystal before mapping
- Set step size ≤ 1/10 of the smallest grain dimension of interest
- Verify crystal structure and lattice parameters in the phase file before indexing
- Use beam shift or stage drift correction for maps longer than ~30 minutes
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred intensity image, but EBSD acquires Kikuchi diffraction patterns at each probe position — each pattern encodes the local crystal orientation (Euler angles) via characteristic Kikuchi bands
- EBSD is fundamentally a crystallographic technique where the measurement is a diffraction pattern, not a spatial image — the widefield blur cannot produce orientation maps, grain boundaries, or texture information
How to Correct the Mismatch
- Use the EBSD operator that models Kikuchi pattern generation from electron backscatter diffraction at each beam position, with pattern features determined by the local crystal orientation and structure
- Index Kikuchi patterns using Hough transform (band detection) or dictionary-based matching to determine the crystal orientation (Euler angles) at each probe position, then assemble orientation maps
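A stripped-down dictionary-indexing step — normalized cross-correlation of a measured pattern against a set of candidate patterns. In real dictionary indexing the candidates are physics-based simulations over a dense orientation grid; here the dictionary is purely synthetic:

```python
import numpy as np

def dictionary_index(pattern, dictionary, orientations):
    """Index a Kikuchi pattern by normalized cross-correlation.
    pattern: (h, w) measured pattern; dictionary: (n, h, w) candidate patterns;
    orientations: (n, 3) Euler angles, one triple per candidate.
    Returns (best-matching Euler angles, correlation score)."""
    def normalize(p):
        p = p.ravel().astype(float)
        p = p - p.mean()
        return p / np.linalg.norm(p)
    q = normalize(pattern)
    scores = np.array([normalize(d) @ q for d in dictionary])
    best = int(np.argmax(scores))
    return orientations[best], float(scores[best])
```

Mean subtraction and norm scaling make the score insensitive to detector gain and offset, so only the Kikuchi band geometry drives the match.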
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Schwartz et al., 'Electron Backscatter Diffraction in Materials Science', Springer (2009)
Canonical Datasets
- DREAM.3D synthetic EBSD benchmarks
Electron Energy Loss Spectroscopy
STEM-EELS measures the energy distribution of electrons transmitted through a thin specimen, where inelastic scattering events encode information about elemental composition, bonding, and electronic structure. The energy loss spectrum contains core-loss edges (characteristic of specific elements) and low-loss features (plasmons, band gaps). A magnetic prism spectrometer disperses the energy spectrum onto a position-sensitive detector. Spectrum imaging acquires a full spectrum at each scan position, enabling elemental mapping with atomic-scale spatial resolution.
Electron Energy Loss Spectroscopy
Description
STEM-EELS measures the energy distribution of electrons transmitted through a thin specimen, where inelastic scattering events encode information about elemental composition, bonding, and electronic structure. The energy loss spectrum contains core-loss edges (characteristic of specific elements) and low-loss features (plasmons, band gaps). A magnetic prism spectrometer disperses the energy spectrum onto a position-sensitive detector. Spectrum imaging acquires a full spectrum at each scan position, enabling elemental mapping with atomic-scale spatial resolution.
Principle
Electron Energy Loss Spectroscopy measures the energy lost by transmitted electrons due to inelastic interactions with the specimen. The energy-loss spectrum contains characteristic edges corresponding to inner-shell ionization of specific elements, enabling elemental mapping with atomic spatial resolution. Near-edge fine structure (ELNES) reveals chemical bonding, and low-loss features probe band structure and optical properties.
How to Build the System
Attach a post-column energy filter (Gatan GIF Quantum/Continuum) to a TEM/STEM. For STEM-EELS spectrum imaging: scan the probe and record a full energy-loss spectrum (0-2000 eV range) at each pixel. Use a monochromated source (ΔE < 0.3 eV) for near-edge fine structure studies. Energy dispersion is typically 0.1-0.5 eV/channel. Acquire both core-loss edges (elemental maps) and low-loss region (thickness mapping, optical properties).
Common Reconstruction Algorithms
- Background subtraction (power-law fitting before edge onset)
- Multiple linear least-squares (MLLS) fitting for overlapping edges
- Principal component analysis (PCA) for denoising spectrum images
- Kramers-Kronig analysis for optical constants from low-loss EELS
- Deep-learning EELS denoising and quantification
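The power-law background subtraction listed above can be sketched in a few lines of NumPy (the `powerlaw_background` helper and the synthetic O-K edge spectrum are illustrative, not from any specific EELS package):

```python
import numpy as np

def powerlaw_background(energy, spectrum, fit_window):
    """Fit I(E) = A * E^-r in a pre-edge window and extrapolate it under the edge."""
    lo, hi = fit_window
    m = (energy >= lo) & (energy <= hi)
    # A power law is linear in log-log space: log I = log A - r * log E
    slope, intercept = np.polyfit(np.log(energy[m]), np.log(spectrum[m]), 1)
    background = np.exp(intercept) * energy ** slope  # slope = -r
    return spectrum - background, background

# Synthetic core-loss spectrum: power-law background + step edge at the O-K onset (532 eV)
E = np.linspace(400.0, 700.0, 300)
bg = 1e6 * E ** -3.0
edge = 50.0 * (E >= 532.0)
signal, fitted = powerlaw_background(E, bg + edge, fit_window=(450.0, 525.0))
```

Fitting in a window just before the edge (here 450-525 eV) and subtracting the extrapolation isolates the edge signal; on real data the window choice and plural scattering dominate the error budget.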
Common Mistakes
- Specimen too thick causing plural scattering that distorts edge shapes
- Incorrect background model for edge extraction (wrong fitting window)
- Energy drift during long spectrum-image acquisitions
- Not accounting for plural scattering when quantifying elemental ratios
- Beam damage altering the specimen chemistry during EELS acquisition
How to Avoid Mistakes
- Keep specimen thickness < 0.5 inelastic mean free path (t/λ < 0.5)
- Fit background in a window just before the edge; use multiple-window methods if needed
- Apply energy drift correction using the zero-loss peak or a known edge
- Deconvolve plural scattering using Fourier-log method before quantification
- Use low-dose protocols and fast spectrum imaging to minimize beam damage
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D spatial image, but EELS acquires energy-loss spectra at each probe position — the spectral dimension encoding elemental composition (core-loss edges) and electronic structure (near-edge fine structure) is entirely absent
- Each EELS spectrum contains characteristic ionization edges (e.g., C-K at 284 eV, O-K at 532 eV) that identify elements with atomic spatial resolution — the widefield spatial blur cannot access spectroscopic chemical information
How to Correct the Mismatch
- Use the EELS operator that models energy-loss spectrum formation: each probe position produces a spectrum with background (power-law), core-loss edges (proportional to elemental concentration), and near-edge fine structure (bonding information)
- Quantify elemental maps using background subtraction and edge integration, or MLLS fitting for overlapping edges; apply PCA denoising to spectrum images before quantification
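The MLLS fitting mentioned above amounts to an ordinary least-squares decomposition of each spectrum into a background shape plus reference edge shapes. A minimal sketch (the idealized step-edge components below are illustrative stand-ins for tabulated reference spectra):

```python
import numpy as np

def mlls_fit(spectrum, components):
    """Multiple linear least-squares: decompose a spectrum into a linear
    combination of reference components (background + edge shapes)."""
    A = np.column_stack(components)
    coeffs, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    return coeffs

E = np.linspace(400.0, 700.0, 300)
background = 1e6 * E ** -3.0              # power-law background shape
edge_O = (E >= 532.0).astype(float)       # idealized O-K edge (532 eV)
edge_Cr = (E >= 575.0).astype(float)      # idealized Cr-L edge (~575 eV)
spectrum = 2.0 * background + 30.0 * edge_O + 10.0 * edge_Cr
coeffs = mlls_fit(spectrum, [background, edge_O, edge_Cr])
```

The fitted edge coefficients are proportional to elemental concentration; in practice the components overlap and are taken from reference spectra or computed cross-sections rather than ideal steps.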
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Egerton, 'Electron Energy-Loss Spectroscopy in the Electron Microscope', Springer (2011)
Canonical Datasets
- EELS Atlas (Ahn & Krivanek)
Electron Holography
Off-axis electron holography records the interference pattern between an object wave (passed through the specimen) and a reference wave (passed through vacuum) using an electrostatic biprism. The hologram encodes the phase shift imparted by electric and magnetic fields within the specimen. Fourier filtering isolates the sideband carrying the complex wave information, from which amplitude and phase are extracted. Phase sensitivity of ~2*pi/1000 enables mapping of nanoscale electric and magnetic fields in materials.
Electron Holography
Description
Off-axis electron holography records the interference pattern between an object wave (passed through the specimen) and a reference wave (passed through vacuum) using an electrostatic biprism. The hologram encodes the phase shift imparted by electric and magnetic fields within the specimen. Fourier filtering isolates the sideband carrying the complex wave information, from which amplitude and phase are extracted. Phase sensitivity of ~2*pi/1000 enables mapping of nanoscale electric and magnetic fields in materials.
Principle
Electron holography uses the interference between an object wave (transmitted through the specimen) and a reference wave (passing through vacuum) to record both amplitude and phase of the electron wave. An electrostatic biprism (charged wire) deflects the two waves to overlap and form interference fringes. Numerical reconstruction recovers the phase shift, which is sensitive to electrostatic potentials and magnetic fields in the specimen.
How to Build the System
Use a TEM (≥200 kV, FEG source for high coherence) equipped with an electron biprism (a thin metallized quartz fiber at adjustable voltage 50-300 V). Position the specimen so one half of the biprism overlaps the specimen edge and the other half is in vacuum. Record the hologram on a direct-electron detector. Fringe spacing should be 3-4× the desired resolution. Acquire reference holograms (empty) for normalization.
Common Reconstruction Algorithms
- Fourier filtering (sideband extraction and inverse FFT for phase/amplitude)
- Phase unwrapping for large phase shifts (>2π)
- Mean inner potential measurement from phase maps
- Magnetic induction mapping (from phase gradient of Lorentz holography)
- In-line holography (through-focus series) with transport-of-intensity equation
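The Fourier sideband reconstruction in the first item can be sketched with a small simulation (the carrier frequency, mask radius, and test object are illustrative):

```python
import numpy as np

def reconstruct_phase(hologram, carrier, radius):
    """Extract the sideband at `carrier` (fy, fx in cycles/pixel) and return
    the phase of the reconstructed complex object wave."""
    n = hologram.shape[0]
    H = np.fft.fftshift(np.fft.fft2(hologram))
    fy, fx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
    cy, cx = int(round(carrier[0] * n)), int(round(carrier[1] * n))
    mask = (fy - cy) ** 2 + (fx - cx) ** 2 <= radius ** 2
    sideband = np.roll(H * mask, (-cy, -cx), axis=(0, 1))  # re-center on DC
    wave = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(wave)

# Two-beam hologram: tilted reference wave interfering with a phase object
n = 128
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
phi = 0.5 * np.sin(2 * np.pi * y / n)                  # object phase shift
carrier = (0.0, 0.25)                                  # fringe frequency (4 px period)
holo = 1.0 + np.cos(2 * np.pi * carrier[1] * x - phi)  # recorded intensity
phase = reconstruct_phase(holo, carrier, radius=12)
```

The recentered sideband carries the full complex wave, so amplitude (np.abs) and phase (np.angle) come out together; with this sign convention the sideband at +f carries exp(-i*phi), so the recovered phase is -phi.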
Common Mistakes
- Biprism voltage too low, giving insufficient overlap and poor fringe contrast
- Fresnel fringes from specimen edge contaminating the holographic fringes
- Not acquiring and dividing by a reference hologram, leaving biprism distortions
- Specimen too thick, reducing fringe visibility from inelastic scattering
- Stray magnetic fields causing unwanted phase shifts in the reference wave
How to Avoid Mistakes
- Optimize biprism voltage for 3-4× oversampling of desired resolution with good contrast
- Extend vacuum reference beyond the specimen edge; mask Fresnel fringe regions
- Always acquire reference holograms and compute the normalized phase
- Use thin specimens (< 50-80 nm) to maintain fringe contrast above 10%
- Enclose the TEM column in mu-metal shielding; degauss the objective lens for Lorentz mode
Forward-Model Mismatch Cases
- The widefield fallback produces real-valued output, but electron holography records the interference between object and reference electron waves — the complex-valued hologram encodes electromagnetic potentials (electric and magnetic fields) inside the specimen via the Aharonov-Bohm phase shift
- The biprism interference fringes encode quantitative phase information (phase shift = C_E * integral(V(x,y,z)dz) for electrostatic, and -(e/hbar) * integral(A*dl) for magnetic) — the widefield blur destroys fringe contrast and all phase information
How to Correct the Mismatch
- Use the electron holography operator that models biprism-mediated interference between object wave (with Aharonov-Bohm phase shift) and vacuum reference wave, producing complex holographic fringes
- Reconstruct phase maps using Fourier sideband filtering and inverse FFT; for magnetic specimens, use Lorentz mode and separate electrostatic and magnetic phase contributions
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Dunin-Borkowski et al., 'Electron holography of nanostructured materials', Encyclopedia of Nanoscience and Nanotechnology (2004)
- Lichte & Lehmann, 'Electron holography — basics and applications', Rep. Prog. Phys. 71, 016102 (2008)
Canonical Datasets
- Holography benchmark datasets (Forschungszentrum Jülich)
Electron Tomography
Electron tomography reconstructs 3D structure from a tilt series of 2D projections acquired as the specimen is rotated (+/-60-70 deg, 1-2 deg increments). The missing wedge of angular coverage causes elongation artifacts along the beam direction. Alignment of the tilt series (using fiducial gold markers or cross-correlation) is critical. Reconstruction uses WBP, SIRT, or compressed sensing methods with TV priors to mitigate missing-wedge artifacts.
Electron Tomography
Description
Electron tomography reconstructs 3D structure from a tilt series of 2D projections acquired as the specimen is rotated (+/-60-70 deg, 1-2 deg increments). The missing wedge of angular coverage causes elongation artifacts along the beam direction. Alignment of the tilt series (using fiducial gold markers or cross-correlation) is critical. Reconstruction uses WBP, SIRT, or compressed sensing methods with TV priors to mitigate missing-wedge artifacts.
Principle
Electron tomography reconstructs a 3-D volume from a tilt series of 2-D TEM or STEM projections acquired at different specimen tilts (typically ±60-70°). The Radon transform (or its generalization) relates the projections to the 3-D structure. The limited tilt range causes a 'missing wedge' artifact — elongation in the beam direction — which must be addressed by regularization or dual-axis acquisition.
How to Build the System
Use a TEM/STEM with a high-tilt specimen holder (±70-80°). Acquire images at tilt increments of 1-2° across the full range. For STEM tomography, HAADF signal provides monotonic contrast (no CTF complications). Include gold nanoparticles as fiducial markers for alignment. Automated acquisition software (SerialEM, Tomography by Thermo Fisher) controls stage tilt, focus tracking, and image acquisition.
Common Reconstruction Algorithms
- Weighted back-projection (WBP)
- SIRT / SART (Simultaneous Iterative Reconstruction Techniques)
- GENFIRE (GENeralized Fourier Iterative REconstruction)
- Compressed sensing tomography for missing-wedge artifact reduction
- Deep-learning tomographic reconstruction (TomoGAN, DeepRecon)
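A toy 2-D SIRT reconstruction from a ±60° tilt series illustrates the WBP/SIRT machinery above (rotate-and-sum projector; the step size, iteration count, and disk phantom are illustrative):

```python
import numpy as np
from scipy import ndimage

def project(vol, angles):
    """Parallel-beam tilt series of a 2-D slice (rotate, then sum along the beam)."""
    return np.stack([ndimage.rotate(vol, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def backproject(sino, angles, shape):
    """Approximate adjoint: smear each projection back and rotate into place."""
    bp = np.zeros(shape)
    for a, row in zip(angles, sino):
        bp += ndimage.rotate(np.tile(row, (shape[0], 1)), a, reshape=False, order=1)
    return bp

def sirt(sino, angles, shape, n_iter=50):
    relax = 1.0 / (shape[0] * len(angles))  # ~1/||A||^2 step size
    x = np.zeros(shape)
    for _ in range(n_iter):
        x += relax * backproject(sino - project(x, angles), angles, shape)
        x = np.clip(x, 0.0, None)           # non-negativity prior
    return x

# Disk phantom and a +/-60 deg tilt series (the wedge beyond 60 deg is missing)
n = 32
yy, xx = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2, indexing="ij")
phantom = ((yy ** 2 + xx ** 2) < 64.0).astype(float)
angles = np.arange(-60, 61, 5)
recon = sirt(project(phantom, angles), angles, phantom.shape)
```

The non-negativity clip is a simple prior that partially compensates the missing wedge; compressed-sensing methods replace it with TV or sparsity regularization.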
Common Mistakes
- Poor tilt-series alignment causing blurring in the reconstruction
- Missing wedge artifacts not addressed, distorting features along the beam axis
- Specimen drift or deformation during the tilt series (especially for biological specimens)
- Dose damage accumulating through the tilt series degrading later images
- Inaccurate tilt angles due to stage mechanical backlash
How to Avoid Mistakes
- Align tilt series carefully using fiducial markers; refine with cross-correlation
- Use dual-axis tomography or compressed-sensing reconstruction to fill the missing wedge
- Apply autofocus and drift tracking at each tilt; use cryo-conditions for biology
- Distribute dose evenly across the series; consider a dose-symmetric (Hagen) scheme so the low tilts, which carry the most high-resolution information, are acquired before damage accumulates
- Calibrate stage tilt angle accuracy; use Saxton scheme (non-linear tilt increments)
Forward-Model Mismatch Cases
- The widefield fallback processes only 2D (64,64) images, but electron tomography acquires a tilt series — projections at multiple angles through the 3D specimen volume, with output shape (n_tilts, H, W)
- The missing wedge problem (limited tilt range, typically +/- 70 degrees) is specific to electron tomography and cannot be modeled by the widefield operator — reconstructions without accounting for missing data have severe elongation artifacts
How to Correct the Mismatch
- Use the electron tomography operator that generates projection images at each tilt angle via the Radon transform applied to the 3D specimen density, including the limited tilt range constraint
- Reconstruct using weighted back-projection (WBP), SIRT, or compressed-sensing methods that account for the missing wedge and alignment errors between tilt images
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Frank, 'Electron Tomography', Springer (2006)
- Midgley & Dunin-Borkowski, 'Electron tomography and holography in materials science', Nature Materials 8, 271 (2009)
Canonical Datasets
- EMPIAR cryo-ET tilt series (e.g., EMPIAR-10045)
- ETDB (Electron Tomography Database, Caltech)
Entangled Photon Microscopy
Entangled Photon Microscopy
Event Camera / Dynamic Vision Sensor (DVS)
Event Camera / Dynamic Vision Sensor (DVS)
Event Horizon Telescope (EHT) Imaging
Event Horizon Telescope (EHT) Imaging
Expansion Microscopy (ExM)
Expansion Microscopy (ExM)
Fiber Bundle Endoscopy
Fiber bundle endoscopy transmits images through a coherent fiber bundle of 10,000-50,000 individual optical fibers. Each fiber core acts as a spatial sample, producing a honeycomb pattern. Image quality is limited by inter-core spacing (pixelation), inter-core coupling (crosstalk), and core-to-core transmission variation. White-light or narrow-band illumination is delivered through the bundle or alongside it. Reconstruction involves core localization, transmission calibration, interpolation to a regular grid, and denoising.
Fiber Bundle Endoscopy
Description
Fiber bundle endoscopy transmits images through a coherent fiber bundle of 10,000-50,000 individual optical fibers. Each fiber core acts as a spatial sample, producing a honeycomb pattern. Image quality is limited by inter-core spacing (pixelation), inter-core coupling (crosstalk), and core-to-core transmission variation. White-light or narrow-band illumination is delivered through the bundle or alongside it. Reconstruction involves core localization, transmission calibration, interpolation to a regular grid, and denoising.
Principle
Fiber-bundle endoscopy transmits an image through a flexible coherent fiber bundle (10,000-100,000 individual fiber cores) to visualize internal body cavities. Each fiber core acts as a single pixel, transmitting light from the distal end to the proximal end where a camera captures the image. The hexagonal fiber packing imposes a fixed pixelation pattern (honeycomb structure) on the image.
How to Build the System
A medical endoscope has a flexible insertion tube containing the coherent fiber bundle (or a distal CMOS chip for video endoscopes), illumination fibers, working channels, and air/water channels. Light source: LED or Xenon lamp transmitted through illumination fibers. For fiber-bundle type: attach a high-resolution camera and relay lens at the proximal end. Calibrate fiber core positions and individual fiber transmission for computational image improvement.
Common Reconstruction Algorithms
- Fiber core mapping and interpolation (honeycomb artifact removal)
- Deep-learning super-resolution for fiber-bundle images
- Structure-from-motion for endoscopic 3-D reconstruction
- Defogging / dehazing for underwater or smoke-obscured endoscopy
- Real-time mosaicking for extended field-of-view endoscopy
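The core-mapping-and-interpolation step can be sketched with `scipy.interpolate.griddata` (the offset-row core lattice and smooth scene below are synthetic stand-ins for calibrated core positions and measured per-core intensities):

```python
import numpy as np
from scipy.interpolate import griddata

def remove_honeycomb(core_xy, core_vals, out_shape):
    """Interpolate calibrated per-core intensities onto a regular pixel grid."""
    gy, gx = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    img = griddata(core_xy, core_vals, (gy, gx), method="linear")
    # Pixels outside the convex hull of the cores come back NaN; fill from nearest core
    nearest = griddata(core_xy, core_vals, (gy, gx), method="nearest")
    return np.where(np.isnan(img), nearest, img)

# Offset-row core lattice (stand-in for hexagonal packing) sampling a smooth scene
cores = np.stack(np.meshgrid(np.arange(0.0, 64.0, 4.0), np.arange(0.0, 64.0, 4.0),
                             indexing="ij"), axis=-1).reshape(-1, 2)
cores[:, 1] += 2.0 * (cores[:, 0] % 8 == 4)            # offset alternate rows
vals = np.sin(cores[:, 1] / 10.0) + np.cos(cores[:, 0] / 12.0)
img = remove_honeycomb(cores, vals, (64, 64))
```

In a real pipeline the per-core values come from averaging camera pixels around each calibrated core center and dividing by the per-core transmission map, before this interpolation step removes the honeycomb.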
Common Mistakes
- Honeycomb pattern artifact from fiber core spacing not removed
- Broken fibers (dark spots) accumulating over time and degrading image quality
- Specular reflections (glare) from wet tissue surfaces saturating the image
- Insufficient illumination causing noisy images in deep body cavities
- Image distortion from fiber bundle bending not corrected
How to Avoid Mistakes
- Apply fiber core interpolation or deep-learning super-resolution in post-processing
- Replace fiber bundles when broken fiber percentage exceeds acceptable threshold
- Use polarization filtering or computational specular removal algorithms
- Use bright LED sources and adjust exposure/gain for adequate signal
- Calibrate and correct for bending-dependent distortion using test patterns
Forward-Model Mismatch Cases
- The widefield fallback produces a (64,64) image, but fiber-bundle endoscopy transmits images through discrete fiber cores creating a hexagonal pixelation pattern — output shape (n_fibers,) is a 1D vector of per-core intensities
- The fiber bundle imposes a fixed sampling grid (honeycomb structure) with inter-core crosstalk and dead fibers — the widefield continuous Gaussian blur has no relationship to the discrete fiber sampling and transmission physics
How to Correct the Mismatch
- Use the endoscopy operator that models per-fiber-core sampling: each of the ~10,000-100,000 cores transmits a point sample from the distal end to the proximal camera, with known core positions and transmission coefficients
- Reconstruct using fiber-core interpolation, honeycomb artifact removal, or deep-learning super-resolution that account for the known fiber bundle geometry and per-core response
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Lee & Bhatt, 'Fiber bundle endoscopy advances', J. Biophotonics 12, e201900004 (2019)
Canonical Datasets
- Kvasir-SEG (polyp segmentation)
- CVC-ClinicDB (colonoscopy)
- HyperKvasir (multi-class GI dataset)
Flash LiDAR
Flash LiDAR
Fluorescence Lifetime Imaging
Fluorescence lifetime imaging microscopy (FLIM) measures the exponential decay time of fluorescence emission at each pixel, providing contrast based on the molecular environment rather than intensity alone. In time-correlated single-photon counting (TCSPC), each detected photon is time-tagged relative to the excitation pulse, building a histogram of arrival times that is fitted to single- or multi-exponential decay models. The phasor approach provides a fit-free analysis in Fourier space. Primary challenges include low photon counts and instrument response function (IRF) deconvolution.
Fluorescence Lifetime Imaging
Description
Fluorescence lifetime imaging microscopy (FLIM) measures the exponential decay time of fluorescence emission at each pixel, providing contrast based on the molecular environment rather than intensity alone. In time-correlated single-photon counting (TCSPC), each detected photon is time-tagged relative to the excitation pulse, building a histogram of arrival times that is fitted to single- or multi-exponential decay models. The phasor approach provides a fit-free analysis in Fourier space. Primary challenges include low photon counts and instrument response function (IRF) deconvolution.
Principle
Fluorescence Lifetime Imaging measures the exponential decay time of fluorophore emission (typically 1-10 ns) rather than intensity. Lifetime is sensitive to the fluorophore's local chemical environment (pH, ion concentration, FRET) but independent of concentration and photobleaching. Detection uses either time-correlated single-photon counting (TCSPC) or frequency-domain phase/modulation methods.
How to Build the System
Add a pulsed laser source (ps diode laser or Ti:Sapphire, 40-80 MHz repetition rate) to a confocal or widefield microscope. For TCSPC, install single-photon counting detectors (hybrid PMTs or SPADs) with timing electronics (Becker & Hickl SPC-150/830 or PicoQuant TimeHarp). For widefield FLIM, use a gated or modulated camera (Lambert Instruments). Synchronize laser pulses with detector timing.
Common Reconstruction Algorithms
- Mono-exponential / bi-exponential tail fitting (least-squares or MLE)
- Phasor analysis (model-free lifetime decomposition)
- Global analysis (linked lifetime fitting across pixels)
- Bayesian lifetime estimation
- Deep-learning FLIM (FLIMnet, rapid lifetime prediction from few photons)
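The phasor analysis listed above reduces each decay histogram to two Fourier coefficients; for a mono-exponential decay the lifetime follows directly from tau = s/(g*omega). A minimal sketch on a noiseless synthetic decay:

```python
import numpy as np

def phasor(decay, dt, rep_rate):
    """Phasor coordinates (g, s) of a decay histogram at the laser repetition frequency."""
    t = (np.arange(decay.size) + 0.5) * dt
    w = 2.0 * np.pi * rep_rate
    g = np.sum(decay * np.cos(w * t)) / np.sum(decay)
    s = np.sum(decay * np.sin(w * t)) / np.sum(decay)
    return g, s

# Mono-exponential decay over one 80 MHz laser period (12.5 ns, 12.5 ps bins)
dt, rep = 12.5e-12, 80e6
tau_true = 2.5e-9
t = (np.arange(1000) + 0.5) * dt
decay = np.exp(-t / tau_true)
g, s = phasor(decay, dt, rep)
tau_est = s / (g * 2.0 * np.pi * rep)   # mono-exponential: s/g = omega * tau
```

Mono-exponential decays fall on the universal semicircle in (g, s) space; multi-component decays fall inside it and are decomposed graphically rather than by iterative fitting.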
Common Mistakes
- Insufficient photon counts for reliable lifetime fitting (need ≥1000 photons/pixel)
- Ignoring instrument response function (IRF) convolution in the fit
- Using mono-exponential fit for multi-component decays, obtaining incorrect average lifetimes
- Pile-up effect at high count rates distorting the decay histogram
- Background autofluorescence contributing a long-lifetime component
How to Avoid Mistakes
- Collect sufficient photons; use longer acquisition or binning if needed
- Measure IRF with a scattering sample and convolve with the model in fitting
- Evaluate fit residuals; use bi-exponential or phasor if mono-exponential is poor
- Keep count rate below 1-5 % of the laser repetition rate to avoid pile-up
- Measure autofluorescence lifetime separately and include in the fit model
Forward-Model Mismatch Cases
- The widefield fallback produces a single 2D intensity image (64,64), but FLIM measures fluorescence lifetime decay at each pixel — output shape (64,64,64) includes the temporal decay dimension
- FLIM forward model is nonlinear (exponential decay convolved with IRF: y(t) = IRF * sum(a_i * exp(-t/tau_i))), while the widefield linear blur cannot represent lifetime information at all
How to Correct the Mismatch
- Use the FLIM operator that generates time-resolved fluorescence decay histograms at each pixel, including IRF convolution and multi-exponential decay components
- Reconstruct lifetimes using phasor analysis or exponential fitting on the temporal dimension; the correct forward model preserves the relationship between decay time and local chemical environment
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Becker, 'Advanced Time-Correlated Single Photon Counting Techniques', Springer (2005)
- Digman et al., 'The phasor approach to fluorescence lifetime imaging', Biophysical Journal 94, L14-L16 (2008)
Canonical Datasets
- FLIM-FRET standard sample datasets (Becker & Hickl)
- FLIM phasor benchmark (Digman lab)
Fluoroscopy
Fluoroscopy provides real-time continuous X-ray imaging for guiding interventional procedures. The forward model is the same Beer-Lambert projection as radiography but at much lower dose per frame (typically 1 uGy/frame at 15-30 fps) resulting in severely photon-limited images. Temporal redundancy from the video stream enables frame-to-frame denoising and recursive filtering. Primary challenges include low SNR, motion blur from patient/organ movement, and veiling glare from scatter.
Fluoroscopy
Description
Fluoroscopy provides real-time continuous X-ray imaging for guiding interventional procedures. The forward model is the same Beer-Lambert projection as radiography but at much lower dose per frame (typically 1 uGy/frame at 15-30 fps) resulting in severely photon-limited images. Temporal redundancy from the video stream enables frame-to-frame denoising and recursive filtering. Primary challenges include low SNR, motion blur from patient/organ movement, and veiling glare from scatter.
Principle
Fluoroscopy provides real-time continuous X-ray imaging for guiding interventional procedures. A pulsed or continuous X-ray beam produces live projection images at 7.5-30 fps on a flat-panel detector. The trade-off is between frame rate, radiation dose, and image quality. Temporal filtering and dose-saving modes reduce patient exposure while maintaining diagnostic quality.
How to Build the System
A C-arm fluoroscopy unit has an X-ray tube and flat-panel detector on a C-shaped gantry that can rotate around the patient. Modern systems use pulsed fluoroscopy (variable pulse rate 3.75-30 fps) with automatic brightness control. Install last-image-hold and virtual collimation features. Calibrate geometric distortion for 3-D cone-beam reconstruction capability. Regular dosimetry checks (DAP meter calibration) are mandatory.
Common Reconstruction Algorithms
- Recursive temporal averaging (IIR filtering for noise reduction)
- Contrast-enhanced subtraction (road-mapping for angiography)
- Motion-compensated temporal filtering
- Cone-beam CT reconstruction from rotational fluoroscopy runs
- Deep-learning frame interpolation for reduced pulse-rate operation
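The recursive temporal averaging in the first item is a one-line IIR filter per pixel (the alpha value and synthetic photon-limited sequence are illustrative):

```python
import numpy as np

def recursive_filter(frames, alpha=0.2):
    """First-order IIR temporal filter: y_n = alpha*x_n + (1-alpha)*y_{n-1}."""
    out = np.empty(frames.shape, dtype=float)
    y = frames[0].astype(float)
    for i, x in enumerate(frames):
        y = alpha * x + (1.0 - alpha) * y
        out[i] = y
    return out

# Static scene with Poisson quantum mottle at ~10 photons/pixel/frame
rng = np.random.default_rng(1)
clean = np.full((64, 64), 10.0)
frames = rng.poisson(clean, size=(50, 64, 64)).astype(float)
filtered = recursive_filter(frames, alpha=0.2)
```

At steady state the noise variance is reduced by roughly alpha/(2 - alpha) — about 9x for alpha = 0.2 — at the cost of ~1/alpha frames of lag on moving structures, which is why clinical systems switch to motion-compensated filtering.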
Common Mistakes
- Excessive radiation dose from unnecessarily high frame rate or continuous mode
- Image lag / ghosting from slow detector response at low dose
- Geometric distortion from C-arm flex not calibrated
- Scatter degrading contrast in lateral or oblique views of thick anatomy
- Patient skin dose exceeding threshold (2 Gy) during long procedures
How to Avoid Mistakes
- Use lowest acceptable pulse rate; employ last-image-hold instead of continuous fluoro
- Use fast flat-panel detectors (GOS or CsI with fast readout) to minimize lag
- Perform regular geometric calibration with a phantom for accurate 3D reconstruction
- Collimate tightly and use appropriate anti-scatter grids
- Monitor cumulative dose (DAP) and skin dose during procedures; rotate beam angles
Forward-Model Mismatch Cases
- The widefield fallback applies additive Gaussian blur, but fluoroscopy follows X-ray Beer-Lambert attenuation with real-time temporal dynamics — the exponential transmission model and dynamic contrast are absent
- Fluoroscopy operates at much lower dose rates than radiography, requiring modeling of quantum mottle (Poisson noise at very low photon counts) and image intensifier/flat-panel detector gain — the widefield noise model is wrong
How to Correct the Mismatch
- Use the fluoroscopy operator implementing real-time X-ray transmission: y = I_0 * exp(-A*x) with Poisson quantum noise, modeling the low-dose regime and detector response
- Apply temporal filtering (recursive averaging) or deep-learning denoising tuned for the correct Poisson noise level of fluoroscopic sequences
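The low-dose transmission model in the first item can be sketched directly (the photon budget `i0` and uniform object are illustrative):

```python
import numpy as np

def fluoro_frame(mu_path, i0, rng):
    """One low-dose frame: Beer-Lambert transmission with Poisson quantum mottle.

    mu_path: line integrals of attenuation (A @ x) per detector pixel
    i0: incident photons per pixel per frame (small in fluoroscopy)
    """
    expected = i0 * np.exp(-mu_path)   # y = I_0 * exp(-A x), the mean count
    return rng.poisson(expected)       # quantum noise dominates at low dose

rng = np.random.default_rng(3)
mu_path = np.full((64, 64), 1.0)       # uniform object: e^-1 transmission
frame = fluoro_frame(mu_path, i0=50.0, rng=rng)
```

At ~18 expected counts per pixel the relative noise is ~24%, which is why fluoroscopic denoising must be tuned to the Poisson statistics rather than an additive Gaussian model.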
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- IEC 62220-1 series, 'Characteristics of digital X-ray imaging devices — Determination of the detective quantum efficiency' (fluoroscopy detector characterization)
Canonical Datasets
- Clinical fluoroscopy sequences (institution-specific)
Focused Ion Beam SEM (FIB-SEM)
Focused Ion Beam SEM (FIB-SEM)
Fourier Ptychographic Microscopy
Fourier ptychographic microscopy (FPM) achieves a high space-bandwidth product by illuminating the sample from multiple angles using an LED array, capturing a set of low-resolution images, and computationally stitching them in Fourier space to synthesize a high-NA image with both amplitude and phase. Each LED angle shifts the sample's spatial frequency spectrum in Fourier space, and overlapping spectral regions provide redundancy for phase retrieval. The synthetic NA equals the objective NA plus the illumination NA. Reconstruction uses iterative phase retrieval algorithms (sequential or gradient-based).
Fourier Ptychographic Microscopy
Description
Fourier ptychographic microscopy (FPM) achieves a high space-bandwidth product by illuminating the sample from multiple angles using an LED array, capturing a set of low-resolution images, and computationally stitching them in Fourier space to synthesize a high-NA image with both amplitude and phase. Each LED angle shifts the sample's spatial frequency spectrum in Fourier space, and overlapping spectral regions provide redundancy for phase retrieval. The synthetic NA equals the objective NA plus the illumination NA. Reconstruction uses iterative phase retrieval algorithms (sequential or gradient-based).
Principle
Fourier Ptychographic Microscopy synthetically increases the NA of a low-magnification objective by illuminating the sample from multiple angles (LED array) and computationally stitching together the resulting images in Fourier space. Each LED angle shifts the sample spectrum so different spatial-frequency bands enter the objective pupil, allowing recovery of both amplitude and phase at high resolution over a large field of view.
How to Build the System
Replace the microscope condenser with a programmable LED matrix (e.g., 32×32 RGB LED array, ~4 mm pitch, placed ~80 mm above the sample). Use a low-magnification objective (4-10×, 0.1-0.3 NA) for large FOV. Acquire one image per LED (typically 100-300 images for the full matrix). Precise knowledge of LED positions is required for Fourier-space stitching.
Common Reconstruction Algorithms
- Alternating projection (Gerchberg-Saxton style in Fourier space)
- Embedded pupil function recovery (joint sample + aberration estimation)
- Wirtinger gradient descent with total-variation regularization
- Neural network-accelerated FPM (learned initialization + refinement)
- Multiplexed FPM (multiple LEDs simultaneously for faster acquisition)
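All of these algorithms invert the same coherent forward model — spectrum shift, pupil filtering, intensity detection — which can be sketched per LED (the grid sizes and 3x3 LED set are illustrative):

```python
import numpy as np

def fpm_lowres(obj, pupil, k_shift):
    """One FPM measurement: shift the object spectrum by the LED wavevector,
    crop through the low-NA pupil, and record the intensity on the camera."""
    n, m = obj.shape[0], pupil.shape[0]
    O = np.fft.fftshift(np.fft.fft2(obj))
    O = np.roll(O, k_shift, axis=(0, 1))               # oblique illumination tilt
    c, r = n // 2, m // 2
    patch = O[c - r:c + r, c - r:c + r] * pupil        # coherent pupil filtering
    field = np.fft.ifft2(np.fft.ifftshift(patch))
    return np.abs(field) ** 2                          # camera records |field|^2

# 64x64 phase object imaged through a 16x16 pupil from a 3x3 grid of LED angles
n, m = 64, 16
rng = np.random.default_rng(0)
obj = np.exp(1j * 0.5 * rng.standard_normal((n, n)))
fy, fx = np.meshgrid(np.arange(m) - m // 2, np.arange(m) - m // 2, indexing="ij")
pupil = (fy ** 2 + fx ** 2 <= (m // 2) ** 2).astype(float)
stack = np.stack([fpm_lowres(obj, pupil, (ky, kx))
                  for ky in (-4, 0, 4) for kx in (-4, 0, 4)])
```

Each LED shift exposes a different Fourier patch through the same pupil; the reconstruction algorithms stitch these patches back together while recovering the phase lost in the intensity measurement.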
Common Mistakes
- Inaccurate LED position calibration causing ghosting and resolution loss
- Insufficient overlap between Fourier-space patches (need ≥60 % overlap)
- Ignoring pupil aberrations of the low-NA objective
- LED intensity non-uniformity not corrected across the array
- Vibration or sample drift between sequential LED acquisitions
How to Avoid Mistakes
- Calibrate LED positions using a self-calibration algorithm or known test target
- Ensure adequate angular spacing to maintain >60% Fourier overlap between adjacent LEDs
- Use embedded pupil recovery to jointly estimate and correct aberrations
- Normalize LED intensities with a blank-sample calibration acquisition
- Stabilize the setup mechanically; use fast cameras to minimize inter-frame drift
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) image, but FPM acquires 25+ images from different LED illumination angles — output shape (25,16,16) captures distinct spatial-frequency bands for each angle
- FPM is fundamentally nonlinear (intensity = |F^-1{P * F{O * exp(i*k_led*r)}}|^2) — the widefield linear blur cannot model the coherent pupil filtering and phase recovery that enables synthetic aperture
How to Correct the Mismatch
- Use the FPM operator that generates one low-resolution intensity image per LED angle, each capturing a different region of the sample's Fourier spectrum shifted by the illumination wavevector
- Reconstruct using alternating projection (Gerchberg-Saxton in Fourier space) or embedded pupil recovery, which require the correct coherent forward model with known LED positions
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Zheng et al., 'Wide-field, high-resolution Fourier ptychographic microscopy', Nature Photonics 7, 739-745 (2013)
- Tian & Waller, 'Quantitative differential phase contrast imaging in an LED array microscope', Optics Express 23, 11394-11403 (2015)
Canonical Datasets
- Zheng lab FPM datasets (UCONN)
- Waller lab FPM benchmark data (Berkeley)
FTIR Spectroscopic Imaging
FTIR Spectroscopic Imaging
Full-Waveform Inversion (FWI)
Full-Waveform Inversion (FWI)
Functional MRI (BOLD)
Functional MRI detects neural activity indirectly via the blood-oxygen-level dependent (BOLD) contrast mechanism. Active brain regions increase local blood flow and oxygenation, altering the ratio of diamagnetic oxyhemoglobin to paramagnetic deoxyhemoglobin, causing T2* signal changes of 1-5%. Data is acquired with fast gradient-echo EPI sequences at high temporal resolution (TR 0.5-2s). The forward model includes the hemodynamic response function (HRF) convolved with neural activity. Primary challenges include physiological noise, head motion, and the low CNR of the BOLD signal.
Functional MRI (BOLD)
Description
Functional MRI detects neural activity indirectly via the blood-oxygen-level dependent (BOLD) contrast mechanism. Active brain regions increase local blood flow and oxygenation, altering the ratio of diamagnetic oxyhemoglobin to paramagnetic deoxyhemoglobin, causing T2* signal changes of 1-5%. Data is acquired with fast gradient-echo EPI sequences at high temporal resolution (TR 0.5-2s). The forward model includes the hemodynamic response function (HRF) convolved with neural activity. Primary challenges include physiological noise, head motion, and the low CNR of the BOLD signal.
Principle
Functional MRI detects brain activity indirectly through the Blood Oxygen Level Dependent (BOLD) contrast mechanism. Neural activity increases local blood flow and oxygenation, changing the ratio of diamagnetic oxyhemoglobin to paramagnetic deoxyhemoglobin. This alters the local T2* relaxation time, producing a small (~1-5 %) signal change detectable by gradient-echo EPI sequences acquired rapidly with whole-brain coverage.
How to Build the System
Use a 3T MRI scanner with a 32-64 channel head coil. Acquire multi-band (simultaneous multi-slice) gradient-echo EPI sequences (TR 0.5-1.5 s, TE ~30 ms, 2 mm isotropic voxels, multiband factor 4-8). Include a high-resolution T1w structural scan for registration. Physiological monitoring (pulse oximetry, respiratory bellows) enables noise regression. Use foam padding to minimize head motion.
Common Reconstruction Algorithms
- General Linear Model (GLM) for task-based fMRI (FSL FEAT, SPM)
- ICA (Independent Component Analysis) for resting-state networks
- Seed-based functional connectivity analysis
- Motion correction and nuisance regression (6-parameter rigid body + CompCor)
- Deep-learning denoising and parcellation (BrainNetCNN, fMRIPrep pipeline)
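The GLM in the first item regresses each voxel's time series on an HRF-convolved stimulus. A minimal single-voxel sketch with a double-gamma HRF (shape parameters follow the common SPM-style defaults; the block design and effect size are synthetic):

```python
import numpy as np
from math import gamma

def hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    """Double-gamma hemodynamic response (SPM-style canonical shape)."""
    h = t ** (a1 - 1) * np.exp(-t) / gamma(a1) \
        - ratio * t ** (a2 - 1) * np.exp(-t) / gamma(a2)
    return h / h.sum()

def glm_beta(y, stimulus, tr):
    """Activation amplitude: regress the voxel time series on the
    HRF-convolved stimulus, with an intercept column."""
    t = np.arange(0.0, 32.0, tr)                      # 32 s HRF kernel
    regressor = np.convolve(stimulus, hrf(t))[: len(y)]
    X = np.column_stack([regressor, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

# Block design (20 s on / 20 s off at TR = 1 s) with a 2% BOLD-like effect
tr = 1.0
stim = np.tile(np.r_[np.ones(20), np.zeros(20)], 5)
signal = np.convolve(stim, hrf(np.arange(0.0, 32.0, tr)))[: stim.size]
rng = np.random.default_rng(2)
y = 100.0 + 2.0 * signal + 0.2 * rng.standard_normal(stim.size)
b = glm_beta(y, stim, tr)
```

Packages like FSL FEAT and SPM run this regression at every voxel, add nuisance regressors (motion, physiology, drift), and then test the beta maps statistically.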
Common Mistakes
- Excessive head motion causing false activations or connectivity artifacts
- Not correcting for physiological noise (cardiac, respiratory) in the signal
- Insufficient statistical correction for multiple comparisons (inflated false positives)
- Using too long a TR, missing the hemodynamic response in fast event-related designs
- Geometric distortion in EPI not corrected before registration to structural scan
How to Avoid Mistakes
- Use prospective motion correction and strict motion exclusion criteria (<0.5 mm FD)
- Acquire and regress physiological signals; use ICA-based denoising (ICA-AROMA)
- Apply proper multiple-comparison correction (FWE, FDR, cluster-based thresholding)
- Use multiband EPI for sub-second TR to adequately sample the HRF
- Acquire field maps (B₀) and apply distortion correction (topup, fieldmap-based)
Forward-Model Mismatch Cases
- The widefield fallback applies spatial Gaussian blur, but fMRI measures the BOLD (Blood Oxygen Level Dependent) signal via T2*-weighted MRI — the hemodynamic response function (HRF) convolution with neural activity is completely absent
- fMRI acquisition occurs in k-space (Fourier domain) with EPI readout, and the signal of interest is a tiny (~1-5%) temporal modulation — the widefield spatial blur cannot model the temporal hemodynamic dynamics or k-space encoding
How to Correct the Mismatch
- Use the fMRI operator that models BOLD signal generation: y(t) = FFT_acquisition(x_baseline * (1 + delta_BOLD(t))), where delta_BOLD(t) = HRF ** neural_activity (temporal convolution of the hemodynamic response function with neural activity) encodes brain activation
- Analyze using GLM (general linear model) with the hemodynamic response function, or ICA/connectivity analysis, applied to correctly modeled time-series MRI data
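The BOLD forward model and GLM fit described above can be sketched in a few lines of NumPy on synthetic data. This is a minimal illustration, not an SPM/FSL implementation: the double-gamma HRF parameters, noise level, and the true effect size (beta = 2.5) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma

def hrf(tr=1.0, duration=30.0):
    """Canonical double-gamma HRF sampled at TR (illustrative parameters)."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak ~6 s, undershoot ~16 s
    return h / h.max()

tr, n_vols = 1.0, 200
rng = np.random.default_rng(0)

# Boxcar task paradigm: 20 s on / 20 s off
neural = (np.arange(n_vols) % 40 < 20).astype(float)

# Forward model: BOLD = HRF convolved with neural activity, plus noise
bold_regressor = np.convolve(neural, hrf(tr))[:n_vols]
y = 2.5 * bold_regressor + rng.normal(0, 0.5, n_vols)  # true task beta = 2.5

# GLM: design matrix = [HRF-convolved regressor, intercept]; ordinary least squares
X = np.column_stack([bold_regressor, np.ones(n_vols)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task beta = {beta[0]:.2f}")  # close to 2.5
```

Fitting the raw boxcar instead of the convolved regressor would misestimate the effect, which is exactly the HRF mismatch the list above warns about.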
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Ogawa et al., 'Brain magnetic resonance imaging with contrast dependent on blood oxygenation', PNAS 87, 9868-9872 (1990)
- Glasser et al., 'The minimal preprocessing pipelines for the Human Connectome Project', NeuroImage 80, 105-124 (2013)
Canonical Datasets
- Human Connectome Project (HCP) 3T (1200 subjects)
- UK Biobank brain imaging
Functional Near-Infrared Spectroscopy (fNIRS)
Functional Near-Infrared Spectroscopy (fNIRS)
Fundus Camera
A fundus camera captures a 2D color photograph of the retinal surface by illuminating the fundus through the pupil with a ring-shaped flash and imaging the reflected light through the central pupillary zone. The optical system images the curved retina onto a flat detector with 30-50 degree field of view. Image quality is degraded by media opacities (cataract), small pupil, and uneven illumination. Fundus images are widely used for automated screening of diabetic retinopathy, glaucoma, and AMD via deep learning.
Fundus Camera
Description
A fundus camera captures a 2D color photograph of the retinal surface by illuminating the fundus through the pupil with a ring-shaped flash and imaging the reflected light through the central pupillary zone. The optical system images the curved retina onto a flat detector with 30-50 degree field of view. Image quality is degraded by media opacities (cataract), small pupil, and uneven illumination. Fundus images are widely used for automated screening of diabetic retinopathy, glaucoma, and AMD via deep learning.
Principle
A fundus camera images the posterior segment of the eye (retina, optic disc, macula, vasculature) by illuminating the retina through the pupil and capturing the reflected/backscattered light. The optical path is designed to separate illumination and observation through different portions of the pupil to avoid corneal reflections. Standard fundus imaging provides 30-50° field-of-view color photographs of the retina.
How to Build the System
Use a dedicated fundus camera (e.g., Topcon TRC-NW400, Canon CR-2 AF) or a scanning laser ophthalmoscope (Optos for widefield). Dilate the patient's pupil (tropicamide 1%) for standard fundus photography. Align the camera to center on the macula or optic disc. Set appropriate flash intensity and focus. Capture color and red-free (green channel) images. For fluorescein angiography, inject sodium fluorescein IV and capture timed image series with excitation/barrier filters.
Common Reconstruction Algorithms
- Image quality assessment and auto-focus/auto-exposure
- Vessel segmentation (U-Net, DeepVessel)
- Optic disc and cup segmentation for glaucoma screening
- Diabetic retinopathy grading (deep-learning classifiers)
- Multi-frame averaging and super-resolution for fundus images
Common Mistakes
- Insufficient pupil dilation causing vignetting at the field edges
- Corneal reflections (flare) obscuring the central retinal image
- Image out of focus due to refractive error not compensated
- Eyelash or eyelid obstruction in the image
- Uneven illumination across the retinal image
How to Avoid Mistakes
- Ensure adequate mydriasis (>5 mm pupil diameter) before imaging
- Align the camera carefully to separate illumination and observation through different pupil zones
- Use auto-focus and compensate for patient refractive error in the camera optics
- Ask patients to open eyes wide; use a fixation target for gaze direction
- Verify uniform illumination before capture; adjust camera alignment if uneven
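The uneven-illumination point above is commonly handled in software by dividing the red-free (green-channel) image by a heavily blurred background estimate. A minimal NumPy/SciPy sketch on a synthetic ramp-illuminated image; the blur scale `sigma=32` is an illustrative assumption, not a standard value:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_illumination(green, sigma=32, eps=1e-6):
    """Correct slowly varying illumination in a green-channel fundus image
    by dividing out a large-scale Gaussian background estimate."""
    background = gaussian_filter(green.astype(float), sigma)
    flat = green / (background + eps)
    return flat / flat.mean()  # renormalize to unit mean

# Synthetic test: uniform "retina" under a linear illumination ramp
yy, xx = np.mgrid[0:128, 0:128]
illum = 0.5 + xx / 127.0          # brighter toward the right edge
img = 1.0 * illum                 # flat retina modulated by illumination
flat = flatten_illumination(img)
print(float(flat.std()))          # much smaller than img.std()
```

Residual deviations remain near the image borders (boundary effects of the blur); in practice the correction is applied inside the circular fundus mask only.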
Forward-Model Mismatch Cases
- The widefield fallback applies a generic Gaussian PSF, but fundus imaging has a unique optical path through the eye's optics (cornea and lens) with specific aberrations and the pupil-splitting illumination/observation geometry
- The retinal image is formed after double-pass through the ocular media, with wavelength-dependent absorption (hemoglobin, melanin, macular pigment) — the widefield achromatic Gaussian blur cannot model spectral absorption or ocular aberrations
How to Correct the Mismatch
- Use the fundus operator that models the eye's optical path: illumination through one pupil zone, retinal reflection/fluorescence, and collection through a separate pupil zone, with ocular aberration and media absorption
- Include wavelength-dependent retinal reflectance for color fundus imaging, or fluorescein excitation/emission model for fluorescein angiography
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Gulshan et al., 'Development and validation of a deep learning algorithm for detection of diabetic retinopathy', JAMA 316, 2402 (2016)
- Staal et al., 'Ridge-based vessel segmentation (DRIVE)', IEEE TMI 23, 501 (2004)
Canonical Datasets
- EyePACS (diabetic retinopathy screening)
- DRIVE (Digital Retinal Images for Vessel Extraction)
- MESSIDOR-2
- APTOS 2019 Blindness Detection
Generic Compressive Matrix Sensing
Generic compressive sensing framework where the measurement process is modelled as y = A*x + n with A being an explicit M x N sensing matrix (M < N). This covers any linear inverse problem including random Gaussian, Bernoulli, or structured sensing matrices. The compressed sensing theory of Candes, Romberg, and Tao guarantees exact recovery when x is sparse and A satisfies the restricted isometry property (RIP). Reconstruction uses standard proximal algorithms (FISTA, ADMM) with sparsity-promoting regularizers (L1, TV, wavelet).
Generic Compressive Matrix Sensing
Description
Generic compressive sensing framework where the measurement process is modelled as y = A*x + n with A being an explicit M x N sensing matrix (M < N). This covers any linear inverse problem including random Gaussian, Bernoulli, or structured sensing matrices. The compressed sensing theory of Candes, Romberg, and Tao guarantees exact recovery when x is sparse and A satisfies the restricted isometry property (RIP). Reconstruction uses standard proximal algorithms (FISTA, ADMM) with sparsity-promoting regularizers (L1, TV, wavelet).
Principle
Generic matrix sensing models the forward process as y = Ax + n, where A is an arbitrary measurement matrix (not necessarily structured like a convolution or Radon transform). This is the most general compressive sensing framework, applicable to random projections, coded apertures, and any linear dimensionality reduction scheme. The key requirement is that A satisfies the Restricted Isometry Property (RIP) for successful sparse recovery.
How to Build the System
Implementation depends on the physical sensing modality. For optical random projections, use a DMD or scattering medium to implement pseudo-random measurement vectors. Calibrate the measurement matrix A by measuring the system response to a complete basis set (e.g., Hadamard patterns). Store A as a dense or structured matrix. Ensure the measurement SNR is adequate for the desired reconstruction quality.
Common Reconstruction Algorithms
- ISTA / FISTA (Iterative Shrinkage-Thresholding Algorithm)
- Basis pursuit (L1 minimization via linear programming)
- AMP (Approximate Message Passing)
- ADMM with various regularizers (TV, wavelet sparsity, low-rank)
- Learned ISTA (LISTA) and other deep unfolding networks
Common Mistakes
- Measurement matrix does not satisfy RIP (too coherent or poorly conditioned)
- Mismatch between calibrated A and actual system behavior (model error)
- Not accounting for measurement noise level when setting regularization strength
- Using an insufficiently sparse signal model for the reconstruction
- Ignoring quantization effects of the detector in the measurement model
How to Avoid Mistakes
- Verify the condition number and coherence of A; use random or optimized designs
- Re-calibrate A periodically to account for system drift
- Set regularization parameter proportional to noise level (e.g., via cross-validation)
- Validate sparsity assumption on representative signals before deploying CS
- Include quantization noise in the forward model or use dithering techniques
Forward-Model Mismatch Cases
- The widefield fallback applies a Gaussian blur (shape-preserving convolution), but the correct compressed sensing operator applies a random measurement matrix y = Phi*x that projects the image into a lower-dimensional space
- Gaussian blur preserves spatial locality and image structure, whereas the random measurement matrix scrambles all spatial information — the fallback measurements contain no compressed-sensing-compatible encoding
How to Correct the Mismatch
- Use the correct compressed sensing operator with the measurement matrix Phi (Gaussian random, partial Fourier, or structured random), producing y = Phi * vec(x)
- Reconstruct using L1/TV-regularized optimization (ISTA, ADMM) or learned proximal operators designed for the specific measurement matrix structure
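The ISTA reconstruction named above can be sketched end-to-end in NumPy: a random Gaussian sensing matrix, a K-sparse signal, and proximal-gradient iterations with soft thresholding. Problem sizes and the regularization weight are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=1000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (proximal gradient)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
M, N, K = 60, 128, 5                           # M < N: compressive regime
A = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true                                 # noiseless measurements y = Ax

x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error = {err:.3f}")
```

With K = 5 nonzeros and M = 60 Gaussian measurements the RIP-style conditions are comfortably met, so the sparse signal is recovered despite M < N; FISTA adds a momentum term to the same iteration for faster convergence.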
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Candes et al., 'Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information', IEEE TIT 52, 489-509 (2006)
- Donoho, 'Compressed sensing', IEEE TIT 52, 1289-1306 (2006)
Canonical Datasets
- Set11 / BSD68 (simulation benchmarks)
Ghost Imaging
Ghost Imaging
Gravitational Wave Detection
Gravitational Wave Detection
Ground-Penetrating Radar (GPR)
Ground-Penetrating Radar (GPR)
High Dynamic Range (HDR) Imaging
High Dynamic Range (HDR) Imaging
Hyperspectral Remote Sensing
Hyperspectral Remote Sensing
Image Scanning Microscopy (ISM)
Image Scanning Microscopy (ISM)
Industrial CT
Industrial CT
Integral Photography
Integral photography (IP), originally proposed by Lippmann in 1908, captures a light field using a fly-eye lens array (matrix of small lenses) where each lenslet records a small elemental image from a slightly different perspective. The array of elemental images encodes 3D scene information, enabling computational refocusing, depth estimation, and autostereoscopic 3D display. Compared to microlens-based plenoptic cameras, IP typically uses larger lenslets with correspondingly more pixels per lens. Reconstruction includes depth-from-correspondence between elemental images and 3D focal stack computation.
Integral Photography
Description
Integral photography (IP), originally proposed by Lippmann in 1908, captures a light field using a fly-eye lens array (matrix of small lenses) where each lenslet records a small elemental image from a slightly different perspective. The array of elemental images encodes 3D scene information, enabling computational refocusing, depth estimation, and autostereoscopic 3D display. Compared to microlens-based plenoptic cameras, IP typically uses larger lenslets with correspondingly more pixels per lens. Reconstruction includes depth-from-correspondence between elemental images and 3D focal stack computation.
Principle
Integral photography (also known as integral imaging) uses a 2-D array of elemental lenses to capture multi-perspective views of a 3-D scene simultaneously. Each elemental lens records a small perspective image, and the full set encodes the 4-D light field. Computational reconstruction produces 3-D images that can be viewed from different angles or refocused without glasses.
How to Build the System
Place a 2-D microlens or lenslet array (pitch 0.5-1 mm, ~50-200 elements per side) at one focal length from a high-resolution sensor. Each lenslet forms a separate elemental image. For display: show the integral image on a high-resolution display with a matched output lenslet array. Calibrate lenslet grid alignment, individual lens focal lengths, and vignetting correction. Use telecentric imaging for uniform magnification.
Common Reconstruction Algorithms
- Computational refocusing via pixel rearrangement and summation
- Depth estimation from elemental image disparity analysis
- 3-D scene reconstruction from integral images
- Super-resolution integral imaging (combining multiple shifted captures)
- Deep-learning integral image reconstruction and view synthesis
Common Mistakes
- Lenslet array not properly aligned with the sensor pixel grid
- Insufficient number of elemental lenses for the desired depth range
- Crosstalk between adjacent elemental images due to lens aberrations
- Not correcting for vignetting variations across the lenslet array
- Pseudoscopic (depth-reversed) images if reconstruction is not properly handled
How to Avoid Mistakes
- Align lenslet array to sensor with precision jigs and verify with calibration patterns
- Design lenslet pitch and focal length for the required depth-of-field
- Use high-quality molded lenslets and baffles to minimize crosstalk
- Apply per-lenslet calibration including vignetting and distortion correction
- Use computational depth inversion to correct pseudoscopic effects
Forward-Model Mismatch Cases
- The widefield fallback produces a single-perspective blurred image, but integral imaging captures multiple sub-aperture views through a lenslet array — each elemental image sees the scene from a slightly different angle
- Without the lenslet-array angular encoding, depth information (parallax between views) is lost — computational refocusing and 3D reconstruction from the fallback output are impossible
How to Correct the Mismatch
- Use the integral imaging operator that models the lenslet array: each microlens captures a different angular perspective, encoding the 4D light field on the 2D sensor
- Reconstruct depth maps via disparity estimation between elemental images, and perform computational refocusing using pixel rearrangement and summation across sub-aperture views
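Disparity estimation between neighboring elemental images reduces, in its simplest form, to block matching. A minimal NumPy sketch on a synthetic pair with a known 3-pixel shift; integer-only search and whole-image SAD are simplifying assumptions (real pipelines match local windows and refine sub-pixel):

```python
import numpy as np

def disparity_1d(ei_left, ei_right, max_d=8):
    """Estimate integer horizontal disparity between two neighboring
    elemental images by minimizing the sum of absolute differences (SAD)."""
    h, w = ei_left.shape
    best_d, best_sad = 0, np.inf
    for d in range(max_d + 1):
        sad = np.abs(ei_left[:, :w - d] - ei_right[:, d:]).sum() / (w - d)
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d

# Synthetic pair: the neighboring lenslet sees the texture shifted by 3 px
rng = np.random.default_rng(0)
tex = rng.random((32, 48))
left = tex[:, 3:43]       # 32 x 40 elemental image
right = tex[:, 0:40]      # same texture, 3-pixel parallax shift
d = disparity_1d(left, right)
print(d)  # 3
```

The recovered disparity maps to object depth through the lenslet pitch and focal length; summing elemental pixels along the corresponding disparity slope implements the refocusing step.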
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Lippmann, C. R. Acad. Sci. Paris 146, 446 (1908)
- Park et al., 'Recent progress in 3D imaging systems', J. Opt. Soc. Am. A 26, 2538 (2009)
Canonical Datasets
- ETRI integral imaging test set
- Middlebury multi-view stereo (adapted)
Interferometric SAR (InSAR)
Interferometric SAR (InSAR)
Intravascular Ultrasound (IVUS)
Intravascular Ultrasound (IVUS)
Laser-Induced Breakdown Spectroscopy (LIBS) Imaging
Laser-Induced Breakdown Spectroscopy (LIBS) Imaging
Lattice Light-Sheet Microscopy
Lattice Light-Sheet Microscopy
Lensless (Diffuser Camera) Imaging
Lensless imaging replaces the objective lens with a thin optical element (phase diffuser or coded mask) placed directly near the sensor. Scene light produces a multiplexed caustic pattern encoding the entire scene. The forward model is y = H * x + n where H is determined by the mask's phase profile and mask-to-sensor distance. Each scene point contributes across many sensor pixels, yielding a multiplexing advantage. Reconstruction solves a large-scale inverse problem via ADMM or FISTA with total-variation or learned priors.
Lensless (Diffuser Camera) Imaging
Description
Lensless imaging replaces the objective lens with a thin optical element (phase diffuser or coded mask) placed directly near the sensor. Scene light produces a multiplexed caustic pattern encoding the entire scene. The forward model is y = H * x + n where H is determined by the mask's phase profile and mask-to-sensor distance. Each scene point contributes across many sensor pixels, yielding a multiplexing advantage. Reconstruction solves a large-scale inverse problem via ADMM or FISTA with total-variation or learned priors.
Principle
Lensless (diffuser-cam) imaging replaces the imaging lens with a thin diffuser or coded mask placed directly before the sensor. The sensor records a multiplexed pattern (caustic or speckle) that encodes the 3-D scene. Computational reconstruction inverts the known point-spread function of the diffuser to recover the image, enabling an extremely compact, lightweight camera suitable for miniaturized or in-vivo applications.
How to Build the System
Place a thin diffuser (ground glass, engineered phase mask, or Scotch tape) at a fixed, small distance (~1-5 mm) from a bare sensor (CMOS, e.g., Sony IMX sensor). Precisely characterize the diffuser PSF by scanning a point source across the field of view. Mount rigidly to prevent any relative motion between diffuser and sensor. For 3-D reconstruction, the depth-dependent PSF must be calibrated at multiple axial planes.
Common Reconstruction Algorithms
- ADMM (alternating direction method of multipliers) with TV regularization
- Wiener deconvolution (fast, single-step but lower quality)
- Gradient descent with learned priors (DiffuserCam, neural network prior)
- Tikhonov-regularized least squares
- Unrolled optimization networks (physics-informed deep learning)
Common Mistakes
- Inaccurate PSF calibration causing reconstruction artifacts
- Insufficient sensor dynamic range for the caustic intensity peaks
- Motion between diffuser and sensor during capture invalidating the PSF model
- Regularization too strong, over-smoothing fine details in the reconstruction
- Ignoring the depth-dependence of the PSF when imaging 3-D scenes
How to Avoid Mistakes
- Calibrate PSF carefully with a point source at the exact sample distance
- Use HDR acquisition or high-bit-depth sensors to capture full caustic range
- Rigidly bond the diffuser to the sensor; verify alignment stability
- Tune regularization weight (e.g., via L-curve or cross-validation)
- Calibrate PSF at multiple depths for 3-D scenes; use depth-varying reconstruction
Forward-Model Mismatch Cases
- The widefield fallback uses a Gaussian PSF, but lensless cameras use a coded aperture (phase mask, diffuser, or amplitude mask) that creates a highly structured, non-Gaussian PSF — the caustic pattern is fundamentally different from a Gaussian
- The lensless PSF encodes the scene through a known, shift-variant pattern — the widefield shift-invariant Gaussian blur does not capture the scene-dependent structure of the lensless measurement and produces incorrect reconstruction input
How to Correct the Mismatch
- Use the lensless operator with the calibrated PSF of the specific coded aperture (measured from a point source or computed from the mask design): y = H * x, where H is the non-Gaussian, possibly shift-variant PSF
- Reconstruct using Wiener deconvolution, ADMM with TV prior, or learned methods (FlatNet, PhlatCam) that use the correct coded-aperture PSF for the specific mask in use
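The single-step Wiener option above can be sketched with FFTs, given a calibrated PSF. A minimal NumPy illustration on a synthetic spiky (caustic-like) PSF; the shift-invariant, circular-convolution forward model and the noise-to-signal ratio `nsr` are simplifying assumptions:

```python
import numpy as np

def wiener_deconvolve(meas, psf, nsr=1e-3):
    """Wiener deconvolution of y = h * x with a known shift-invariant PSF,
    assuming circular boundary conditions."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=meas.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(meas) * G))

# Synthetic check: random spiky PSF, two point sources, circular convolution
rng = np.random.default_rng(0)
n = 64
psf = rng.random((n, n)) ** 8                 # structured, non-Gaussian pattern
psf /= psf.sum()
x = np.zeros((n, n)); x[20, 30] = 1.0; x[40, 10] = 0.5
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
x_hat = wiener_deconvolve(y, psf, nsr=1e-6)
print(np.unravel_index(np.argmax(x_hat), x_hat.shape))  # -> (20, 30)
```

This recovers both point sources from the fully multiplexed measurement; ADMM with a TV prior replaces the quadratic Wiener regularizer when noise is stronger or the PSF is poorly conditioned.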
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Antipa et al., 'DiffuserCam: lensless single-exposure 3D imaging', Optica 5, 1-9 (2018)
- Asif et al., 'FlatCam: Thin, Lensless Cameras Using Coded Aperture', IEEE TCI 3, 384-397 (2017)
Canonical Datasets
- DiffuserCam lensless mirflickr dataset (Monakhova et al.)
- PhlatCam benchmark (Boominathan et al., IEEE TPAMI 2022)
LiDAR Scanner
LiDAR (Light Detection and Ranging) measures distances by emitting laser pulses and timing the round-trip to the reflecting surface. Automotive LiDAR systems use rotating multi-beam scanners (e.g., Velodyne HDL-64E) or solid-state flash LiDAR to acquire 3D point clouds at 10-20 Hz. The forward model is simple time-of-flight: d = c*t/2. The resulting sparse point cloud requires densification, ground segmentation, and object detection. Primary challenges include sparse sampling, intensity variation with surface reflectivity, and rain/fog attenuation.
LiDAR Scanner
Description
LiDAR (Light Detection and Ranging) measures distances by emitting laser pulses and timing the round-trip to the reflecting surface. Automotive LiDAR systems use rotating multi-beam scanners (e.g., Velodyne HDL-64E) or solid-state flash LiDAR to acquire 3D point clouds at 10-20 Hz. The forward model is simple time-of-flight: d = c*t/2. The resulting sparse point cloud requires densification, ground segmentation, and object detection. Primary challenges include sparse sampling, intensity variation with surface reflectivity, and rain/fog attenuation.
Principle
Light Detection and Ranging (LiDAR) measures distances by emitting laser pulses (905 nm or 1550 nm) and timing their return after reflection from the scene (time-of-flight: d = c·t/2). A scanning mechanism (rotating mirror, MEMS, or optical phased array) sweeps the beam to build a 3-D point cloud of the environment. Resolution depends on the beam divergence, scanning density, and pulse timing precision.
How to Build the System
Select a LiDAR sensor appropriate for the application: mechanical spinning (Velodyne VLP-16/128 for autonomous vehicles), solid-state (Livox, Ouster), or airborne (Leica ALS80 for terrain mapping). Mount rigidly and combine with an IMU and GNSS for georeferencing. Calibrate intrinsic parameters (beam angles, timing offsets, intensity response) and extrinsics (relative to vehicle coordinate frame). Process returns: first/last/full waveform for different applications.
Common Reconstruction Algorithms
- Point cloud registration (ICP, NDT for multi-scan alignment)
- Ground filtering and classification (progressive morphological filter)
- SLAM (Simultaneous Localization and Mapping) with LiDAR
- Object detection and segmentation (PointNet, PointPillars)
- Surface reconstruction from point clouds (Poisson, ball-pivoting)
Common Mistakes
- Multi-echo / multi-path reflections causing ghost points
- Motion distortion in the point cloud from vehicle movement during one scan rotation
- Incorrect calibration causing misalignment between LiDAR and camera data
- Rain, fog, or dust causing false returns and reduced range
- Near-range blind zone where the receiver is not sensitive to returns
How to Avoid Mistakes
- Filter ghost points using intensity thresholds and multi-return analysis
- Apply ego-motion compensation using IMU data to deskew each scan
- Perform target-based or targetless calibration between LiDAR and other sensors
- Use 1550 nm wavelength (eye-safe and less affected by rain) for outdoor applications
- Account for minimum range specification; fuse with short-range sensors if needed
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but LiDAR produces range measurements (r_i = c*t_i/2) assembled into a 3D point cloud — the output is a set of (x,y,z) points, not a blurred image
- LiDAR measures distance by timing laser pulse round-trips, with angular scanning determining direction — the widefield spatial blur has no connection to time-of-flight distance measurement or angular scanning geometry
How to Correct the Mismatch
- Use the LiDAR operator that models pulsed laser emission, scene reflection (surface albedo and geometry), and time-of-flight detection: range = c*delta_t/2 for each beam direction
- Process the point cloud using registration (ICP), ground classification, or object detection algorithms that operate on the correct 3D range measurement format
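The range equation above, combined with the beam's scan angles, maps directly to Cartesian points. A minimal NumPy sketch of this time-of-flight-to-point-cloud conversion; the spherical angle convention (azimuth about z, elevation from the horizontal plane) is an assumption that must match the scanner's calibration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_to_points(t_round, azimuth, elevation):
    """Convert round-trip times (s) and beam angles (rad) to an (N,3)
    point cloud using r = c*t/2 and a spherical-to-Cartesian mapping."""
    r = C * np.asarray(t_round) / 2.0
    az, el = np.asarray(azimuth), np.asarray(elevation)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# A wall 15 m straight ahead: round trip = 2*15/c, beam at azimuth 0, level
pts = tof_to_points([2 * 15.0 / C], [0.0], [0.0])
print(pts)  # [[15.  0.  0.]]
```

Ego-motion compensation amounts to applying a per-beam pose (from the IMU trajectory) to these points before registration.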
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Geiger et al., 'Are we ready for autonomous driving? The KITTI vision benchmark suite', CVPR 2012
Canonical Datasets
- KITTI 3D object detection
- nuScenes (1000 driving scenes)
- Waymo Open Dataset
Light Field Imaging
Light field imaging captures the full 4D radiance function L(x,y,u,v) describing both spatial position (x,y) and angular direction (u,v) of light rays. A microlens array placed before the sensor captures multiple sub-aperture views simultaneously, enabling post-capture refocusing, depth estimation, and perspective shifts. Each microlens images the objective's exit pupil, trading spatial resolution for angular resolution. The 4D light field can be processed with shift-and-sum for refocusing, disparity estimation for depth, or epipolar-plane image (EPI) analysis. Primary challenges include the inherent spatial-angular resolution tradeoff and microlens aberrations.
Light Field Imaging
Description
Light field imaging captures the full 4D radiance function L(x,y,u,v) describing both spatial position (x,y) and angular direction (u,v) of light rays. A microlens array placed before the sensor captures multiple sub-aperture views simultaneously, enabling post-capture refocusing, depth estimation, and perspective shifts. Each microlens images the objective's exit pupil, trading spatial resolution for angular resolution. The 4D light field can be processed with shift-and-sum for refocusing, disparity estimation for depth, or epipolar-plane image (EPI) analysis. Primary challenges include the inherent spatial-angular resolution tradeoff and microlens aberrations.
Principle
Light-field imaging captures both the spatial position and direction of light rays in a scene, recording a 4-D light field L(u,v,s,t) where (u,v) parameterize the aperture and (s,t) parameterize the spatial position. This enables computational refocusing, depth estimation, and novel viewpoint synthesis from a single capture. A microlens array placed before the sensor trades spatial resolution for angular resolution.
How to Build the System
Place a microlens array (MLA) at the image plane of the main lens, one microlens focal length in front of the image sensor. Each microlens captures the angular distribution of light from a corresponding spatial position (Lytro-style plenoptic camera). Alternative: use a camera array (e.g., 4×4 or 8×8 synchronized cameras) for higher angular and spatial resolution. Calibrate MLA alignment, microlens pitch, and main lens parameters.
Common Reconstruction Algorithms
- Shift-and-sum refocusing (synthetic aperture)
- Depth estimation from disparity between sub-aperture images
- Fourier slice theorem for light-field refocusing
- Light-field super-resolution (recovering spatial resolution lost to MLA)
- Deep-learning view synthesis (light field reconstruction from sparse views)
Common Mistakes
- Microlens array misaligned with sensor pixels, causing vignetting and crosstalk
- Insufficient angular samples for accurate depth estimation in textureless regions
- Not calibrating MLA-to-sensor alignment, producing decoding artifacts
- Confusing spatial and angular resolution trade-off limits of the plenoptic design
- Ignoring diffraction effects at the microlens apertures
How to Avoid Mistakes
- Precisely align MLA to sensor with sub-pixel accuracy; use calibration targets
- Increase camera array density or use coded-aperture techniques for more angular samples
- Calibrate using a white image and point-source images for precise microlens grid mapping
- Design the system with the desired spatial-angular trade-off explicitly computed
- Use microlens diameters larger than the diffraction limit (> 10× wavelength)
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) image, but a light field camera captures both spatial and angular information via a microlens array — the output encodes multiple sub-aperture views for computational refocusing
- Without the angular dimension (directions of light rays), depth estimation from parallax and computational refocusing are impossible — the widefield model captures only a single perspective
How to Correct the Mismatch
- Use the light field operator that models the microlens array: each microlens captures light from different angular directions, producing an (x, y, u, v) 4D light field on the 2D sensor
- Reconstruct depth maps from sub-aperture disparity, perform computational refocusing via shift-and-sum, or apply light-field super-resolution to trade angular for spatial resolution
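The shift-and-sum refocusing named above is a short loop over sub-aperture views: shift each view in proportion to its (u,v) offset, then average. A minimal NumPy sketch on a synthetic light field; integer shifts via np.roll are a simplification (real refocusing interpolates fractional shifts):

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-sum refocusing of a 4D light field lf[u, v, s, t]:
    shift each sub-aperture view by `slope` pixels per unit (u,v) offset
    from the central view, then average over the aperture."""
    U, V, S, T = lf.shape
    uc, vc = U // 2, V // 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            out += np.roll(lf[u, v],
                           (slope * (u - uc), slope * (v - vc)), axis=(0, 1))
    return out / (U * V)

# Synthetic light field: a point source with 1 px/view disparity
U = V = 5; S = T = 32
lf = np.zeros((U, V, S, T))
for u in range(U):
    for v in range(V):
        lf[u, v, 16 - (u - 2), 16 - (v - 2)] = 1.0
focused = refocus(lf, slope=1)
print(focused[16, 16])   # all 25 views align -> 1.0
```

At the matching slope all views align and the point is sharp; at any other slope its energy spreads over a disc, which is exactly the synthetic-aperture defocus behavior.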
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Levoy & Hanrahan, 'Light field rendering', SIGGRAPH 1996
- Ng et al., 'Light field photography with a hand-held plenoptic camera', Stanford Tech Report CTSR 2005-02
Canonical Datasets
- HCI 4D Light Field Benchmark
- Stanford Lego Gantry Archive
- INRIA Lytro Light Field Dataset
Light-Sheet Fluorescence Microscopy
Light-sheet microscopy (LSFM / SPIM) illuminates the sample with a thin sheet of light perpendicular to the detection axis, providing intrinsic optical sectioning. Primary artifacts are stripe patterns caused by absorption and scattering in the illumination path, plus anisotropic PSF blur. The forward model is y = S(z) * (PSF_3d *** x) + n where S(z) models the stripe attenuation. Reconstruction involves destriping followed by optional deconvolution.
Light-Sheet Fluorescence Microscopy
Description
Light-sheet microscopy (LSFM / SPIM) illuminates the sample with a thin sheet of light perpendicular to the detection axis, providing intrinsic optical sectioning. Primary artifacts are stripe patterns caused by absorption and scattering in the illumination path, plus anisotropic PSF blur. The forward model is y = S(z) * (PSF_3d *** x) + n where S(z) models the stripe attenuation. Reconstruction involves destriping followed by optional deconvolution.
Principle
A thin sheet of laser light illuminates only the focal plane of the detection objective, providing intrinsic optical sectioning with minimal out-of-plane photobleaching. The orthogonal geometry between illumination and detection decouples sectioning from resolution. Detection is widefield, enabling fast volumetric imaging of large specimens.
How to Build the System
Arrange two orthogonal objective arms: one for the excitation sheet (cylindrical lens or digitally scanned Gaussian/Bessel beam) and one for detection (high-NA water-dipping). Mount the sample in agarose or hold in a chamber compatible with the dual-objective geometry. Use a fast sCMOS camera for detection. Stage scanning or sheet scanning acquires z-stacks. Consider diSPIM (dual-view) for isotropic resolution.
Common Reconstruction Algorithms
- Multi-view fusion (weighted averaging of complementary views)
- Multi-view deconvolution (Bayesian, joint Richardson-Lucy)
- Content-based image fusion
- Deep-learning denoising for high-speed acquisitions (CARE)
- Stripe artifact removal (wavelet-FFT filtering)
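The stripe-removal entry above can be illustrated with a crude Fourier notch filter: stripes that are constant along the illumination axis concentrate on one line of the 2D spectrum. A minimal NumPy sketch; note that real wavelet-FFT destriping first decomposes into wavelet bands so that genuine structure on that spectral line is not destroyed, which this simplification ignores:

```python
import numpy as np

def destripe_fft(img, width=1):
    """Suppress stripes that are constant along axis 1 (the illumination
    axis) by zeroing the kx = 0 column of the 2D FFT, keeping the DC term."""
    F = np.fft.fft2(img)
    F[1:, :width] = 0.0        # kill kx ~ 0, ky != 0 coefficients (stripes)
    return np.real(np.fft.ifft2(F))

# Synthetic slice: uniform sample plus an additive stripe pattern s(y)
n = 64
yy = np.arange(n)
sample = np.ones((n, n))
stripes = 0.5 * np.sin(2 * np.pi * yy * 6 / n)[:, None]  # varies only in y
clean = destripe_fft(sample + stripes)
print(float(np.abs(clean - sample).max()))  # stripe removed
```

Dual-side illumination or sheet pivoting remains preferable, since it prevents the shadows rather than filtering them afterward.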
Common Mistakes
- Light sheet too thick, degrading axial resolution and sectioning
- Absorption and scattering in thick tissue causing shadow artifacts (stripes)
- Misalignment between sheet focal plane and detection focal plane
- Improper sample mounting causing drift or deformation during long acquisitions
- Ignoring refractive-index variations causing sheet deflection inside tissue
How to Avoid Mistakes
- Use Bessel or lattice light sheet for thin, uniform illumination profiles
- Pivot the light sheet or use dual-side illumination to reduce shadow artifacts
- Carefully co-align illumination and detection planes using fluorescent beads
- Use stable, low-melting-point agarose embedding and vibration-isolated stages
- Clear or match refractive index of tissue where possible; use adaptive optics
Forward-Model Mismatch Cases
- The widefield fallback processes only 2D (64,64) images, but light-sheet microscopy acquires 3D volumes (64,64,32) with intrinsic optical sectioning — the volumetric z-dimension is entirely lost
- Widefield illumination excites the entire sample volume causing out-of-focus blur, whereas the light sheet illuminates only the focal plane — the fallback forward model includes fluorescence contributions from planes that the real system never excites
How to Correct the Mismatch
- Use the lightsheet operator that processes 3D volumes with the sheet illumination profile: each z-slice is excited only by the thin (1-5 um) light sheet
- Model the sheet thickness and propagation (Gaussian or Bessel beam) explicitly; for multi-view systems, include the detection PSF from the orthogonal objective
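The sheet-excitation forward model described above can be sketched in NumPy. This is a toy version under stated assumptions: a Gaussian sheet profile along z, an in-plane Gaussian detection PSF, and illustrative widths (`sheet_sigma_z`, `det_sigma_xy`) — it is not the catalog's exact lightsheet operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lightsheet_forward(vol, sheet_sigma_z=1.0, det_sigma_xy=1.0):
    """Toy light-sheet forward model (illustrative parameters).

    vol: 3D fluorophore density, shape (nz, ny, nx). For each detection
    plane k, neighboring planes are excited by a thin Gaussian sheet
    centered on plane k (finite sheet thickness), then the excited
    fluorescence is blurred in-plane by the detection-objective PSF.
    """
    nz = vol.shape[0]
    z = np.arange(nz)
    stack = np.empty_like(vol, dtype=float)
    for k in range(nz):
        # Gaussian sheet intensity profile along z, centered at slice k
        w = np.exp(-0.5 * ((z - k) / sheet_sigma_z) ** 2)
        w /= w.sum()
        # excitation: weighted sum over planes within the sheet thickness
        excited = np.tensordot(w, vol, axes=(0, 0))
        # detection: in-plane blur by the detection PSF
        stack[k] = gaussian_filter(excited, det_sigma_xy)
    return stack
```

A point source placed at the sheet focal plane stays sharpest in its own z-slice, which is the optical-sectioning behavior the mismatch notes describe.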
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Huisken et al., 'Optical sectioning deep inside live embryos by SPIM', Science 305, 1007-1009 (2004)
- Power & Huisken, 'A guide to light-sheet fluorescence microscopy for multiscale imaging', Nature Methods 14, 360-373 (2017)
Canonical Datasets
- OpenSPIM sample datasets
- Zebrafish developmental lightsheet atlas
Low-Dose Widefield Microscopy
Widefield fluorescence microscopy operated at very low illumination power or short exposure time to reduce phototoxicity and photobleaching in live specimens. Images are dominated by shot noise (Poisson) and read noise (Gaussian) with typical photon counts of 20-200 per pixel. The forward model is y = Poisson(alpha * PSF ** x)/alpha + N(0, sigma^2) where alpha is the photon conversion factor. Reconstruction requires joint denoising and deconvolution using PnP-HQS, Noise2Void, or CARE.
Low-Dose Widefield Microscopy
Description
Widefield fluorescence microscopy operated at very low illumination power or short exposure time to reduce phototoxicity and photobleaching in live specimens. Images are dominated by shot noise (Poisson) and read noise (Gaussian) with typical photon counts of 20-200 per pixel. The forward model is y = Poisson(alpha * PSF ** x)/alpha + N(0, sigma^2) where alpha is the photon conversion factor. Reconstruction requires joint denoising and deconvolution using PnP-HQS, Noise2Void, or CARE.
Principle
Identical optical path to standard widefield but operated at very low photon budgets (short exposure or attenuated excitation) to minimize phototoxicity in live cells. The acquired images are severely photon-starved, making Poisson noise the dominant degradation rather than out-of-focus blur.
How to Build the System
Use the same widefield microscope but reduce LED power to 1-5 % and/or shorten exposure to 5-20 ms. A high-QE back-illuminated sCMOS sensor (>80 % QE) is essential for capturing the limited photon signal. Install an environmental chamber for live-cell stability (37 °C, 5 % CO₂). Validate that the camera read noise floor is well below the expected signal.
Common Reconstruction Algorithms
- CARE (Content-Aware image REstoration)
- Noise2Void / Noise2Self (self-supervised denoising)
- BM3D / VST + BM3D for Poisson-Gaussian denoising
- PURE-LET (Poisson Unbiased Risk Estimator)
- Noise2Noise paired denoising networks
Common Mistakes
- Operating in a read-noise-dominated regime by using too-low gain or an aging CCD
- Training denoising networks on data with different noise statistics than test data
- Clipping near-zero intensities by incorrect camera offset subtraction
- Ignoring sCMOS pixel-dependent noise (fixed-pattern noise)
- Exceeding live-cell phototoxicity budget despite intending low-dose imaging
How to Avoid Mistakes
- Characterize camera noise model (gain, offset, variance map) before acquisition
- Train and evaluate denoising models at the same SNR and microscope settings
- Keep camera offset (dark current) calibration current and subtract properly
- Apply per-pixel gain and offset maps for sCMOS cameras
- Monitor cell health markers (morphology, division rate) to confirm non-toxic dose
Forward-Model Mismatch Cases
- The widefield fallback applies the correct blur kernel but uses a Gaussian noise model, whereas low-dose imaging is dominated by Poisson shot noise with very few photons per pixel
- Denoising algorithms trained on Gaussian noise statistics will underperform on Poisson-dominated low-dose data, producing biased estimates and residual artifacts
How to Correct the Mismatch
- Use the low-dose widefield operator that applies a Poisson-Gaussian noise model: y = Poisson(alpha * PSF ** x) / alpha + N(0, sigma^2)
- Train or select denoising algorithms that explicitly model Poisson statistics (Anscombe transform + BM3D, or self-supervised networks such as Noise2Void trained on data with matching Poisson-dominated statistics)
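The Poisson-Gaussian measurement model y = Poisson(alpha * PSF ** x)/alpha + N(0, sigma^2) can be sketched as follows; the photon conversion factor, PSF width, and read-noise level used here are illustrative assumptions, not calibrated camera values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowdose_forward(x, alpha=50.0, psf_sigma=1.5, read_sigma=0.01, rng=None):
    """Low-dose widefield measurement:
    y = Poisson(alpha * (PSF ** x)) / alpha + N(0, read_sigma^2)

    alpha: photon conversion factor (expected photons per unit intensity).
    ** denotes convolution, here modeled as a Gaussian blur.
    """
    rng = np.random.default_rng(rng)
    blurred = gaussian_filter(x, psf_sigma)                        # PSF ** x
    shot = rng.poisson(alpha * np.clip(blurred, 0, None)) / alpha  # shot noise
    return shot + rng.normal(0.0, read_sigma, x.shape)             # read noise
```

At alpha = 50 and unit intensity 0.5, each pixel carries ~25 expected photons — squarely in the photon-starved regime where Gaussian-noise denoisers are mismatched.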
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Krull et al., 'Noise2Void - Learning Denoising from Single Noisy Images', CVPR 2019
- Weigert et al., 'Content-aware image restoration (CARE)', Nature Methods 15, 1090-1097 (2018)
Canonical Datasets
- BioSR low-SNR subset
- Planaria / Tribolium datasets (Weigert et al.)
Lucky Imaging
Lucky Imaging
Machine Vision / AOI
Machine Vision / AOI
Magnetic Force Microscopy (MFM)
Magnetic Force Microscopy (MFM)
Magnetic Particle Imaging (MPI)
Magnetic Particle Imaging (MPI)
Magnetic Resonance Imaging
MRI forms images by exciting hydrogen nuclei with RF pulses in a strong magnetic field (1.5-7T) and measuring the emitted RF signal with receive coils. Spatial encoding uses gradient fields to map signal frequency and phase to spatial position, acquiring data in k-space (spatial frequency domain). The forward model for parallel imaging is y_c = F_u * S_c * x + n_c where F_u is the undersampled Fourier transform, S_c are coil sensitivity maps, and n_c is complex Gaussian noise. Accelerated MRI undersamples k-space (4-8x) and uses SENSE, GRAPPA, or deep-learning (E2E-VarNet) for reconstruction.
Magnetic Resonance Imaging
Description
MRI forms images by exciting hydrogen nuclei with RF pulses in a strong magnetic field (1.5-7T) and measuring the emitted RF signal with receive coils. Spatial encoding uses gradient fields to map signal frequency and phase to spatial position, acquiring data in k-space (spatial frequency domain). The forward model for parallel imaging is y_c = F_u * S_c * x + n_c where F_u is the undersampled Fourier transform, S_c are coil sensitivity maps, and n_c is complex Gaussian noise. Accelerated MRI undersamples k-space (4-8x) and uses SENSE, GRAPPA, or deep-learning (E2E-VarNet) for reconstruction.
Principle
Magnetic Resonance Imaging measures the precession of hydrogen nuclear spins in a strong magnetic field (1.5-7 T). Radiofrequency pulses tip spins away from equilibrium, and gradient fields spatially encode the MR signal into k-space (spatial frequency domain). The image is obtained by inverse Fourier transform of k-space data. Contrast depends on tissue T1, T2, and proton density via the pulse sequence timing parameters.
How to Build the System
A clinical MRI scanner has a superconducting magnet (1.5 T or 3 T), gradient coils (40-80 mT/m, 200 T/m/s slew rate), RF transmit body coil, and local receive coil arrays (8-128 channels). The patient lies inside the bore on a table. Key calibrations: center frequency, RF transmit calibration (B₁ mapping), shimming (B₀ homogeneity), and gradient eddy current compensation. Use pulse sequences optimized for the clinical question (T1w, T2w, FLAIR, DWI, etc.).
Common Reconstruction Algorithms
- Inverse FFT (standard Cartesian k-space reconstruction)
- GRAPPA (GeneRalized Autocalibrating Partially Parallel Acquisitions)
- SENSE (SENSitivity Encoding) parallel imaging
- Compressed sensing MRI (L1-wavelet + TV regularization)
- Deep-learning MRI reconstruction (fastMRI, variational networks, E2E-VarNet)
Common Mistakes
- Aliasing artifacts from insufficient FOV or acceleration too aggressive
- Motion artifacts (ghosting in phase-encode direction) from patient or physiological motion
- B₀ inhomogeneity causing geometric distortion and signal dropout (especially at 3T+)
- Fat-water chemical shift artifacts at fat-tissue interfaces
- Incorrect coil sensitivity maps causing SENSE/GRAPPA reconstruction artifacts
How to Avoid Mistakes
- Set FOV to cover the anatomy with margin; use saturation bands to suppress aliasing
- Apply motion correction (navigator, PROPELLER, prospective correction) for moving anatomy
- Perform careful shimming; use distortion correction maps for EPI sequences
- Use fat suppression or water-fat separation (Dixon) sequences
- Acquire adequate auto-calibration data for parallel imaging; use robust coil maps
Forward-Model Mismatch Cases
- The widefield fallback produces real-valued spatially blurred output, but MRI acquires complex-valued k-space data via the Fourier transform with undersampling mask — all phase information is lost with the fallback
- The fallback applies spatial-domain convolution, but MRI measurement occurs in Fourier domain (k-space): y = M * F * x — using the fallback means compressed-sensing MRI reconstruction (L1-wavelet, E2E-VarNet) cannot function
How to Correct the Mismatch
- Use the MRI operator that applies the 2D Fourier transform followed by an undersampling mask: y = M * FFT2(x), producing complex-valued k-space measurements
- Reconstruct using parallel imaging (GRAPPA, SENSE) or compressed sensing (L1-wavelet + TV regularization) that operate on the Fourier-domain measurements with known sampling pattern
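The Fourier-domain measurement y = M * FFT2(x) and its zero-filled adjoint can be sketched as below. This is a single-coil simplification: the coil sensitivity maps S_c from the parallel-imaging model are omitted, and the centered-FFT convention is an implementation choice.

```python
import numpy as np

def mri_forward(x, mask):
    """Undersampled Cartesian k-space: y = M * FFT2(x) (single-coil sketch).

    x: image (real or complex); mask: boolean k-space sampling pattern.
    fftshift/ifftshift keep the DC component at the array center.
    """
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x)))
    return mask * k

def zero_filled_recon(y):
    """Adjoint operator: inverse FFT of the masked k-space
    (the zero-filled baseline that CS/deep methods improve on)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(y)))
```

With a fully sampled mask the adjoint recovers the image exactly; undersampling the mask introduces the aliasing that SENSE, GRAPPA, or E2E-VarNet must resolve.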
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Pruessmann et al., 'SENSE: Sensitivity encoding for fast MRI', Magnetic Resonance in Medicine 42, 952-962 (1999)
- Zbontar et al., 'fastMRI: An open dataset and benchmarks for accelerated MRI', arXiv:1811.08839 (2018)
- Sriram et al., 'End-to-End Variational Networks for Accelerated MRI Reconstruction (E2E-VarNet)', MICCAI 2020
Canonical Datasets
- fastMRI (knee: 1594 volumes, brain: 6970 volumes)
- Calgary-Campinas (brain, multi-coil)
- SKM-TEA (Stanford knee MRI)
MALDI Mass Spectrometry Imaging
MALDI Mass Spectrometry Imaging
Mammography
Full-field digital mammography (FFDM) produces high-resolution X-ray projection images of compressed breast tissue for cancer screening. The low-energy X-ray beam (25-32 kVp with W/Rh or Mo/Mo target-filter) maximizes soft tissue contrast. Amorphous selenium flat-panel detectors provide direct conversion with ~50 um pixel pitch. The forward model follows Beer-Lambert with energy-dependent attenuation. Primary challenges include overlapping tissue structures, microcalcification detection, and dense breast tissue masking lesions.
Mammography
Description
Full-field digital mammography (FFDM) produces high-resolution X-ray projection images of compressed breast tissue for cancer screening. The low-energy X-ray beam (25-32 kVp with W/Rh or Mo/Mo target-filter) maximizes soft tissue contrast. Amorphous selenium flat-panel detectors provide direct conversion with ~50 um pixel pitch. The forward model follows Beer-Lambert with energy-dependent attenuation. Primary challenges include overlapping tissue structures, microcalcification detection, and dense breast tissue masking lesions.
Principle
Mammography uses low-energy X-rays (25-35 kVp) with specialized anode/filter combinations (Mo/Mo, Mo/Rh, W/Rh) to optimize contrast between breast tissue types (adipose, glandular, calcifications). Breast compression reduces thickness and scatter, improving contrast and reducing dose. Digital mammography uses flat-panel detectors for direct or indirect X-ray detection.
How to Build the System
A dedicated mammography unit with a compression paddle, specialized X-ray tube (Mo, Rh, or W anode), and high-resolution flat-panel detector (50-100 μm pixel size, amorphous selenium for direct conversion). Automatic optimization of target/filter and kVp based on compressed breast thickness. Regular quality assurance per ACR/MQSA requirements: phantom images, SNR measurements, artifact checks, and AEC calibration.
Common Reconstruction Algorithms
- Contrast-limited adaptive histogram equalization (CLAHE) for display
- Computer-aided detection (CAD) for microcalcification and mass detection
- Digital breast tomosynthesis (DBT) reconstruction (FBP or iterative)
- Deep-learning breast density classification (BI-RADS categories)
- Synthetic 2D mammography from DBT volumes
Common Mistakes
- Insufficient breast compression, increasing dose and reducing contrast
- Positioning errors cutting off breast tissue (especially axillary tail)
- Grid artifacts or grid cutoff from misaligned Bucky grid
- Exposure errors from AEC sensor placed over dense tissue vs. adipose
- Motion blur from long exposure times in thick or dense breasts
How to Avoid Mistakes
- Apply firm, consistent compression; verify thickness readout is reasonable
- Follow standardized positioning protocols (CC, MLO) with technologist training
- Verify grid alignment and use reciprocating grid to eliminate grid lines
- Position AEC sensor appropriately for breast density; adjust manually if needed
- Use shortest possible exposure with adequate mAs; consider large-angle tomosynthesis
Forward-Model Mismatch Cases
- The widefield fallback applies Gaussian blur, but mammography uses low-energy X-ray transmission (25-35 kVp) with tissue-specific attenuation coefficients optimized for fat/glandular tissue contrast — the physics model is fundamentally different
- Mammographic image formation involves compression geometry, anti-scatter grid rejection, and detector-specific MTF — none of these are captured by a simple spatial Gaussian blur
How to Correct the Mismatch
- Use the mammography operator implementing Beer-Lambert transmission at mammographic energies with tissue-specific attenuation: y = I_0 * exp(-mu_tissue * t) for fat, glandular, and calcification components
- Include scatter rejection model, detector quantum efficiency (DQE), and geometric magnification for accurate forward modeling and quantitative breast density estimation
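The multi-component Beer-Lambert transmission y = I_0 * exp(-mu_tissue * t) can be sketched as follows. The attenuation coefficients in any real system are energy-dependent; the values a caller passes here (and those in the example) are illustrative assumptions, not tabulated mammographic coefficients.

```python
import numpy as np

def mammo_forward(thickness_maps, mus, I0=1.0):
    """Beer-Lambert transmission through stacked tissue components:
    y = I0 * exp(-sum_i mu_i * t_i)

    thickness_maps: dict of per-pixel path lengths (cm) per tissue type
    (e.g. fat, glandular, calcification); mus: matching linear
    attenuation coefficients (1/cm).
    """
    total = np.zeros_like(next(iter(thickness_maps.values())), dtype=float)
    for name, t in thickness_maps.items():
        total += mus[name] * t
    return I0 * np.exp(-total)
```

Taking -log(y/I0) of the measurement recovers the summed attenuation line integral, which is the quantity breast-density estimation methods decompose.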
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Nguyen et al., 'VinDr-Mammo: A large-scale benchmark dataset for computer-aided detection and diagnosis in full-field digital mammography', Scientific Data 10 (2023)
- Lee et al., 'A curated mammography dataset (CBIS-DDSM)', Scientific Data 4, 170177 (2017)
Canonical Datasets
- VinDr-Mammo (5000 4-view exams)
- CBIS-DDSM (curated DDSM subset)
- INbreast (410 images, Moreira et al.)
MINFLUX Nanoscopy
MINFLUX Nanoscopy
MR Angiography (MRA)
MR Angiography (MRA)
MR Elastography (MRE)
MR Elastography (MRE)
MR Fingerprinting (MRF)
MR Fingerprinting (MRF)
MR Spectroscopy
Magnetic resonance spectroscopy (MRS) measures the concentration of metabolites in a localized tissue volume by exploiting the chemical shift — the slight difference in Larmor frequency caused by the electronic environment of different molecular groups. The free induction decay (FID) or spin echo signal is Fourier-transformed to a spectrum where each metabolite produces characteristic peaks (e.g. NAA at 2.01 ppm, Cr at 3.03 ppm). Quantification involves fitting the spectrum to a linear combination of basis spectra (LCModel, OSPREY). Challenges include low SNR, spectral overlap, water/lipid suppression, and B0 inhomogeneity causing linewidth broadening.
MR Spectroscopy
Description
Magnetic resonance spectroscopy (MRS) measures the concentration of metabolites in a localized tissue volume by exploiting the chemical shift — the slight difference in Larmor frequency caused by the electronic environment of different molecular groups. The free induction decay (FID) or spin echo signal is Fourier-transformed to a spectrum where each metabolite produces characteristic peaks (e.g. NAA at 2.01 ppm, Cr at 3.03 ppm). Quantification involves fitting the spectrum to a linear combination of basis spectra (LCModel, OSPREY). Challenges include low SNR, spectral overlap, water/lipid suppression, and B0 inhomogeneity causing linewidth broadening.
Principle
MR Spectroscopy measures the chemical shift spectrum of nuclear spins (usually ¹H) from a localized volume in the body, providing concentrations of metabolites such as NAA, creatine, choline, lactate, myo-inositol, and glutamate/glutamine. Chemical shift differences (in ppm) arise from the varying electronic shielding of nuclei in different molecular environments.
How to Build the System
Use PRESS or STEAM single-voxel localization on a 1.5T or 3T scanner. Voxel sizes are typically 2×2×2 cm³ for brain. Suppress the dominant water signal (CHESS or VAPOR water suppression). Acquire 64-256 averages (NEX) for adequate SNR. Shimming is critical: water linewidth should be <12 Hz (3T) for the voxel. Multi-voxel CSI (Chemical Shift Imaging) maps metabolite distributions but requires longer acquisition and careful lipid suppression.
Common Reconstruction Algorithms
- LCModel (frequency-domain linear combination fitting)
- TARQUIN (open-source time-domain fitting)
- jMRUI (time-domain quantification with AMARES/QUEST)
- HSVD (Hankel SVD) for water removal and baseline correction
- Deep-learning spectral quantification (DeepSpectra, convolutional fitting)
Common Mistakes
- Poor shimming producing broad linewidths that overlap metabolite peaks
- Voxel placed partly outside the brain, contaminating spectrum with lipid signal
- Insufficient water suppression saturating the spectrum baseline
- Too few averages, producing noisy spectra with unreliable metabolite estimates
- Ignoring macromolecular baseline contributions in fitting
How to Avoid Mistakes
- Iteratively shim the voxel to achieve <12 Hz water linewidth (3T) before acquisition
- Place the voxel with margin from skull and subcutaneous fat; use outer-volume suppression
- Optimize water suppression parameters; acquire separate water reference for quantification
- Acquire sufficient averages: 128-256 for metabolites at low concentration (e.g., GABA)
- Include macromolecular basis set or measured baseline in the fitting model
Forward-Model Mismatch Cases
- The widefield fallback produces a spatial image, but MR Spectroscopy acquires frequency-domain spectra encoding chemical composition — metabolite peaks (NAA, choline, creatine, lactate) at specific ppm values are entirely absent
- MRS data is a 1D free induction decay (FID) or spectrum per voxel, not a 2D spatial image — the widefield blur destroys the spectral dimension that encodes metabolite concentrations
How to Correct the Mismatch
- Use the MRS operator that models the free induction decay: y(t) = sum_k(a_k * exp(i*2pi*f_k*t) * exp(-t/T2_k)) for each metabolite k, then FFT to produce the frequency spectrum
- Quantify metabolite concentrations by fitting the spectrum (LCModel, TARQUIN) or using deep-learning spectral quantification with the correctly modeled spectral forward model
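The damped-exponential FID model y(t) = sum_k(a_k * exp(i*2pi*f_k*t) * exp(-t/T2_k)) can be sketched as below. The amplitudes, frequencies (in Hz rather than ppm, for simplicity), T2 values, and dwell time are illustrative assumptions standing in for real metabolite parameters.

```python
import numpy as np

def mrs_fid(metabolites, n=1024, dwell=5e-4):
    """Sum-of-damped-exponentials FID:
    y(t) = sum_k a_k * exp(i*2*pi*f_k*t) * exp(-t/T2_k)

    metabolites: list of (amplitude, freq_hz, t2_s) tuples.
    Returns the complex FID, the frequency axis, and the spectrum
    obtained by FFT (the step LCModel-style fitting operates on).
    """
    t = np.arange(n) * dwell
    fid = np.zeros(n, dtype=complex)
    for a, f, t2 in metabolites:
        fid += a * np.exp(2j * np.pi * f * t) * np.exp(-t / t2)
    spectrum = np.fft.fftshift(np.fft.fft(fid))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, dwell))
    return fid, freqs, spectrum

# two toy peaks standing in for, e.g., NAA and Cr (parameters are assumptions)
fid, freqs, spec = mrs_fid([(1.0, 50.0, 0.08), (0.8, 180.0, 0.06)])
```

Shorter T2 broadens a peak and lowers its height at fixed amplitude, which is why poor shimming (broader effective linewidth) degrades quantification.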
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Provencher, 'Estimation of metabolite concentrations from localized in vivo proton NMR spectra (LCModel)', MRM 30, 672-679 (1993)
- Wilson et al., 'Methodological consensus on clinical proton MRS of the brain (MRSinMRS)', NMR in Biomedicine 34, e4484 (2021)
Canonical Datasets
- ISMRM MRS fitting challenge datasets
- Big GABA multi-site MRS data
Multispectral Satellite Imaging
Multispectral Satellite Imaging
Muon Tomography
Muon tomography uses naturally occurring cosmic-ray muons (mean energy ~4 GeV, flux ~1/cm2/min at sea level) to image the interior of large, dense objects by measuring the scattering angle of each muon as it traverses the object. High-Z materials (uranium, plutonium, lead) cause large-angle scattering that is readily distinguished from low-Z materials. Position-sensitive detectors (drift tubes, RPCs) above and below the object track each muon's trajectory. The scattering density is proportional to Z^2/A. Reconstruction uses the point-of-closest-approach (POCA) algorithm or maximum-likelihood/expectation-maximization (ML-EM). Long exposure times (minutes to hours) are needed due to the low natural muon flux. Applications include nuclear material detection and volcano interior imaging (muography).
Muon Tomography
Description
Muon tomography uses naturally occurring cosmic-ray muons (mean energy ~4 GeV, flux ~1/cm2/min at sea level) to image the interior of large, dense objects by measuring the scattering angle of each muon as it traverses the object. High-Z materials (uranium, plutonium, lead) cause large-angle scattering that is readily distinguished from low-Z materials. Position-sensitive detectors (drift tubes, RPCs) above and below the object track each muon's trajectory. The scattering density is proportional to Z^2/A. Reconstruction uses the point-of-closest-approach (POCA) algorithm or maximum-likelihood/expectation-maximization (ML-EM). Long exposure times (minutes to hours) are needed due to the low natural muon flux. Applications include nuclear material detection and volcano interior imaging (muography).
Principle
Muon tomography uses naturally occurring cosmic-ray muons to image the internal density structure of large objects (buildings, volcanoes, cargo containers). Muons undergo multiple Coulomb scattering, with the scattering angle proportional to the areal density and atomic number of the traversed material. By measuring the incoming and outgoing muon trajectories, the density distribution inside the object can be tomographically reconstructed.
How to Build the System
Place tracking detectors (drift tubes, scintillator strips, resistive plate chambers, or GEM detectors) above and below (or around) the object to be imaged. Each detector station measures the position and angle of each cosmic-ray muon before and after it traverses the object. Typical cosmic-ray muon flux is ~10,000 muons/m²/min at sea level. Exposure times range from minutes (for dense nuclear materials) to months (for geological structures like volcanoes).
Common Reconstruction Algorithms
- Point of Closest Approach (POCA) voxel reconstruction
- Maximum Likelihood / Expectation Maximization (ML/EM) scattering tomography
- Angle Statistics Reconstruction (ASR) for material discrimination
- Binned scattering density reconstruction
- Deep-learning muon tomography for faster convergence with fewer muons
Common Mistakes
- Insufficient muon statistics for the desired spatial resolution (need long exposure)
- Detector alignment errors causing incorrect scattering angle measurements
- Not accounting for muon momentum spectrum (affects scattering angle distribution)
- Background tracks (electrons, low-momentum muons) contaminating the data
- POCA algorithm limitations in complex, non-point-like geometries
How to Avoid Mistakes
- Calculate required exposure time based on object size, density, and desired resolution
- Align detectors carefully using straight-through cosmic ray tracks as calibration
- Use momentum measurement (from curvature in a magnetic field) or momentum-dependent MCS model
- Apply track quality cuts (chi-squared, minimum number of detector hits) to reject background
- Use iterative reconstruction (ML/EM) rather than POCA for quantitative density imaging
Forward-Model Mismatch Cases
- The widefield fallback applies Gaussian blur, but muon tomography measures the scattering angle of cosmic-ray muons passing through the object — the scattering angle (Highland formula) encodes radiation length and density, not image blur
- Muon tomography uses natural cosmic-ray flux (~10,000 muons/m^2/min) with tracking detectors above and below the object — the widefield optical model has no connection to high-energy particle tracking or multiple Coulomb scattering physics
How to Correct the Mismatch
- Use the muon tomography operator that models multiple Coulomb scattering: incoming and outgoing muon tracks are measured, and the scattering angle distribution at each voxel encodes the local radiation length (related to material Z and density)
- Reconstruct using POCA (Point of Closest Approach) for quick imaging, or ML/EM iterative methods for quantitative density/Z mapping, using the correct scattering probability forward model
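The POCA step named above can be sketched as the closest approach between the measured incoming and outgoing tracks. This is a minimal geometric version: it assumes non-parallel tracks (scattered muons) and returns the scattering angle alongside the POCA point; binning these per voxel gives the quick-look image the catalog describes.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of Closest Approach between two muon track lines.

    Each track is a point p and a direction d. Returns the midpoint of
    the shortest segment between the lines (the inferred scattering
    location) and the scattering angle between the directions.
    """
    d_in = d_in / np.linalg.norm(d_in)
    d_out = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b          # ~0 for parallel (unscattered) tracks
    s = (b * e - c * d) / denom    # parameter along the incoming line
    t = (a * e - b * d) / denom    # parameter along the outgoing line
    q1 = p_in + s * d_in
    q2 = p_out + t * d_out
    theta = np.arccos(np.clip(d_in @ d_out, -1.0, 1.0))
    return 0.5 * (q1 + q2), theta
```

For a muon entering vertically at (0, 0, 10) and scattering by 0.1 rad at the origin, the routine recovers both the scattering point and the angle.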
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Borozdin et al., 'Radiographic imaging with cosmic-ray muons', Nature 422, 277 (2003)
- Tanaka et al., 'Imaging the conduit size of the dome with cosmic-ray muons: The structure beneath Showa-Shinzan Lava Dome', Geophysical Research Letters 34, L22311 (2007)
Canonical Datasets
- Los Alamos muon tomography simulation benchmarks
- IAEA muon imaging reference data
Near-field Scanning Optical Microscopy (NSOM)
Near-field Scanning Optical Microscopy (NSOM)
Neural Radiance Fields (NeRF)
Neural radiance fields (NeRF) represent a 3D scene as a continuous volumetric function F(x,y,z,theta,phi) -> (RGB, sigma) parameterized by a multi-layer perceptron that maps 5D coordinates (position + viewing direction) to color and volume density. Novel views are synthesized by marching camera rays through the volume and integrating color weighted by transmittance using quadrature. Training optimizes the MLP weights to minimize photometric loss between rendered and observed images. Primary challenges include slow training/rendering, view-dependent effects, and the need for accurate camera poses (from COLMAP).
Neural Radiance Fields (NeRF)
Description
Neural radiance fields (NeRF) represent a 3D scene as a continuous volumetric function F(x,y,z,theta,phi) -> (RGB, sigma) parameterized by a multi-layer perceptron that maps 5D coordinates (position + viewing direction) to color and volume density. Novel views are synthesized by marching camera rays through the volume and integrating color weighted by transmittance using quadrature. Training optimizes the MLP weights to minimize photometric loss between rendered and observed images. Primary challenges include slow training/rendering, view-dependent effects, and the need for accurate camera poses (from COLMAP).
Principle
Neural Radiance Fields (NeRF) represent a 3-D scene as a continuous volumetric function F(x,y,z,θ,φ) → (RGB, σ) parameterized by a multi-layer perceptron (MLP). The network maps 3-D position and viewing direction to color and volume density. Novel views are synthesized by differentiable volume rendering along camera rays, and the network is trained by minimizing photometric loss against a set of posed 2-D images.
How to Build the System
Capture 50-200 images of a scene from diverse viewpoints using a calibrated camera (known intrinsics) or estimate camera poses with COLMAP structure-from-motion. Images should cover the scene uniformly. Train a NeRF MLP (typically 8 layers, 256 units, with positional encoding of input coordinates) on a GPU (≥12 GB VRAM). Training takes 12-48 hours on a single V100. Use mip-NeRF, Instant-NGP, or TensoRF for faster convergence.
Common Reconstruction Algorithms
- Vanilla NeRF (MLP + positional encoding)
- Instant-NGP (multi-resolution hash encoding, minutes training)
- mip-NeRF (anti-aliased cone tracing)
- Nerfacto (nerfstudio default combining multiple improvements)
- TensoRF (tensor factorization for compact radiance fields)
Common Mistakes
- Insufficient camera pose accuracy (SfM failure) causing blurry results
- Too few input views or views clustered in a narrow angular range
- Training only at one scale without mip-NeRF, causing aliasing at novel distances
- Floater artifacts in empty space from insufficient regularization
- Relying on vanilla NeRF despite real-time requirements (hours to train, seconds per rendered frame)
How to Avoid Mistakes
- Verify COLMAP pose estimation quality; add more images if registration fails
- Capture views uniformly around the scene; include close-up and distant views
- Use mip-NeRF or multi-scale training for scale consistency
- Add distortion loss or density regularization to eliminate floater artifacts
- Use Instant-NGP or 3D Gaussian Splatting for real-time rendering requirements
Forward-Model Mismatch Cases
- The widefield fallback processes a single 2D (64,64) image, but NeRF renders multiple views of a 3D scene from a volumetric radiance field — output shape (n_views, H, W) represents images from different camera poses
- NeRF is fundamentally nonlinear (volume rendering integral: C(r) = integral of T(t)*sigma(t)*c(t) dt along each ray) — the widefield linear blur cannot model view-dependent appearance, occlusion, or 3D geometry
How to Correct the Mismatch
- Use the NeRF operator that performs differentiable volume rendering: for each pixel, cast a ray through the volumetric density/color field and integrate transmittance-weighted radiance
- Optimize the 3D radiance field (MLP or voxel grid) to minimize photometric loss across all training views using the correct volume rendering equation as the forward model
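The volume rendering integral C(r) = integral of T(t)*sigma(t)*c(t) dt is evaluated in practice as the standard discrete quadrature over ray samples; the sketch below shows that compositing for a single ray (sample densities, colors, and spacings are supplied by the caller).

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Discrete volume rendering along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).

    sigmas: (n,) densities; colors: (n, 3) RGB; deltas: (n,) spacings.
    Returns the composited color and the per-sample weights.
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                          # opacity
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])   # T_i
    weights = trans * alpha
    return weights @ colors, weights
```

A nearly opaque first sample occludes everything behind it — the occlusion behavior that no linear blur model can reproduce.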
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Mildenhall et al., 'NeRF: Representing scenes as neural radiance fields for view synthesis', ECCV 2020
- Müller et al., 'Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Instant-NGP)', SIGGRAPH 2022
Canonical Datasets
- NeRF Blender Synthetic (8 scenes)
- LLFF (8 forward-facing scenes)
- Mip-NeRF 360 (9 unbounded scenes)
Neutron Diffraction
Neutron Diffraction
Neutron Radiography / Tomography
Neutron imaging exploits the unique interaction of thermal neutrons with matter — neutrons are attenuated strongly by light elements (hydrogen, lithium, boron) while penetrating heavy elements (lead, iron) that are opaque to X-rays. The forward model follows Beer-Lambert: I = I_0 * exp(-integral(Sigma(s) ds)) where Sigma is the macroscopic cross-section. Tomographic reconstruction from multiple projection angles uses FBP or iterative methods. Neutron sources include research reactors and spallation sources. The lower flux compared to X-rays requires longer exposures (seconds) and results in lower spatial resolution (50-100 um).
Neutron Radiography / Tomography
Description
Neutron imaging exploits the unique interaction of thermal neutrons with matter — neutrons are attenuated strongly by light elements (hydrogen, lithium, boron) while penetrating heavy elements (lead, iron) that are opaque to X-rays. The forward model follows Beer-Lambert: I = I_0 * exp(-integral(Sigma(s) ds)) where Sigma is the macroscopic cross-section. Tomographic reconstruction from multiple projection angles uses FBP or iterative methods. Neutron sources include research reactors and spallation sources. The lower flux compared to X-rays requires longer exposures (seconds) and results in lower spatial resolution (50-100 um).
Principle
Neutron radiography and tomography image the transmission of a thermal or cold neutron beam through a sample. Neutrons interact with nuclei (not electrons), providing complementary contrast to X-rays: hydrogen-rich materials (water, polymers, organics) attenuate neutrons strongly, while metals like aluminum and lead are relatively transparent. Tomographic reconstruction from multiple projection angles yields 3-D maps of neutron attenuation.
How to Build the System
Access a research reactor or spallation neutron source with an imaging beamline (e.g., ICON at PSI, IMAT at ISIS, NIST BT-2). A collimated neutron beam (thermal or cold, 1-10 Å) passes through the sample, and a scintillator-camera system (⁶LiF/ZnS screen + sCMOS camera) records the transmitted intensity. Rotate the sample through 180° or 360° for tomography. Spatial resolution is typically 20-100 μm, limited by beam divergence and scintillator thickness.
Common Reconstruction Algorithms
- Filtered back-projection (FBP) adapted for neutron tomography
- Iterative reconstruction (SIRT, CGLS) for limited-angle or noisy data
- Beam hardening correction for polychromatic neutron spectra
- Scattering correction (point-scattered function approach)
- Neutron phase-contrast tomography (grating interferometry)
Common Mistakes
- Scattering from hydrogen-rich samples producing artifacts (halo around sample)
- Beam hardening (spectral hardening) not corrected for polychromatic beams
- Activation of sample materials, creating radiation safety issues post-experiment
- Gamma contamination in the beam degrading image quality
- Insufficient exposure time per projection, yielding noisy tomograms
How to Avoid Mistakes
- Apply scattering correction algorithms; use thin or diluted hydrogen-rich samples
- Correct beam hardening with polynomial methods or by using a velocity selector (monochromatic)
- Check sample activation potential before irradiation; use short-lived isotope-free materials
- Use gamma-blind detectors (⁶Li glass) or filters to reject gamma contamination
- Optimize exposure per projection for adequate SNR; total scan time often 2-8 hours
Forward-Model Mismatch Cases
- The widefield fallback applies optical Gaussian blur, but neutron tomography measures neutron transmission (I = I_0 * exp(-sigma_t * n * t)) — neutrons interact with nuclei, not electron clouds, giving completely different contrast (hydrogen-rich materials are nearly opaque to neutrons yet relatively transparent to X-rays)
- Neutron attenuation depends on nuclear cross-sections that vary dramatically between isotopes (H, Li, B are strong absorbers) — the widefield model has no nuclear physics and cannot distinguish materials by their neutron interaction properties
How to Correct the Mismatch
- Use the neutron tomography operator implementing Beer-Lambert neutron transmission: y(theta,s) = I_0 * exp(-integral(Sigma_t(x,y) dl)) where Sigma_t is the macroscopic total cross-section
- Reconstruct using FBP or iterative methods (same algorithms as X-ray CT) but with neutron-specific attenuation coefficients — neutron imaging reveals hydrogen/water content, lithium batteries, and metallurgical features invisible to X-rays
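The Beer-Lambert transmission operator described above can be sketched as a toy parallel-beam projection. This is a minimal illustration: the cross-section values and the 64-pixel sample are invented for the example, not calibrated to any beamline.

```python
import numpy as np

def neutron_transmission(sigma_map, I0=1.0, axis=0, pixel_size=1.0):
    """Beer-Lambert transmission of a parallel neutron beam.

    sigma_map  : 2-D map of the macroscopic total cross-section
                 Sigma_t (per unit length).
    axis       : beam propagation axis (0 = down the rows).
    Returns transmitted intensity I = I0 * exp(-integral Sigma_t dl).
    """
    line_integral = sigma_map.sum(axis=axis) * pixel_size
    return I0 * np.exp(-line_integral)

# Toy sample: weakly absorbing metal block containing a
# hydrogen-rich (water-filled) channel, a strong neutron absorber.
sigma = np.full((64, 64), 0.01)   # aluminium-like background
sigma[:, 28:36] = 0.35            # water channel (columns 28-35)
proj = neutron_transmission(sigma)
# The channel casts a deep shadow while the metal stays nearly transparent.
```

Collecting such projections over 180° of sample rotations yields the sinogram that FBP or SIRT inverts.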
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Kardjilov et al., 'Advances in neutron imaging', Materials Today 21, 652-672 (2018)
- IAEA, 'Neutron Imaging: A Non-Destructive Tool for Materials Testing', IAEA-TECDOC-1604 (2008)
Canonical Datasets
- PSI ICON neutron imaging benchmark data
- NIST neutron radiography reference images
Ocean Acoustic Tomography
Ocean Acoustic Tomography
Ocean Color Remote Sensing
Ocean Color Remote Sensing
OCT Angiography
OCT angiography extends standard OCT by acquiring repeated B-scans at the same location and computing the decorrelation of the complex OCT signal between successive scans. Moving red blood cells cause temporal fluctuations that differ from static tissue, enabling label-free visualization of retinal vasculature. The contrast mechanism uses amplitude decorrelation (SSADA), phase variance, or complex-signal algorithms. Key limitations include motion artifacts, projection artifacts from superficial vessels, and limited field of view.
OCT Angiography
Description
OCT angiography extends standard OCT by acquiring repeated B-scans at the same location and computing the decorrelation of the complex OCT signal between successive scans. Moving red blood cells cause temporal fluctuations that differ from static tissue, enabling label-free visualization of retinal vasculature. The contrast mechanism uses amplitude decorrelation (SSADA), phase variance, or complex-signal algorithms. Key limitations include motion artifacts, projection artifacts from superficial vessels, and limited field of view.
Principle
OCT Angiography detects blood flow non-invasively by comparing repeated OCT B-scans at the same location. Moving red blood cells cause temporal fluctuations in the OCT signal (amplitude and/or phase), while static tissue remains constant. Decorrelation, variance, or differential analysis between repeated scans produces a motion-contrast image revealing the vasculature without the need for injectable contrast agents.
How to Build the System
Use a high-speed OCT system (≥70 kHz A-scan rate, swept-source preferred) capable of repeated B-scans at the same location. Acquire 2-4 repeated B-scans at each position with inter-scan time of 3-10 ms. An eye-tracking system is essential for ophthalmic OCTA to correct microsaccades. Process with split-spectrum amplitude-decorrelation (SSADA), optical microangiography (OMAG), or phase-variance algorithms.
Common Reconstruction Algorithms
- SSADA (Split-Spectrum Amplitude-Decorrelation Angiography)
- OMAG (Optical Micro-Angiography, complex signal differential)
- Phase-variance OCTA
- Deep-learning OCTA denoising and vessel segmentation
- Projection artifact removal algorithms
Common Mistakes
- Bulk tissue motion producing decorrelation artifacts (false flow signals)
- Projection artifacts where superficial vessel shadows appear in deeper layers
- Shadow artifacts beneath large vessels causing false flow voids
- Insufficient inter-scan interval for detecting slow capillary flow
- Motion artifacts from blinks or microsaccades corrupting OCTA volumes
How to Avoid Mistakes
- Apply bulk motion correction (axial and lateral registration) before decorrelation analysis
- Use projection artifact removal algorithms (slab subtraction or OMAG-based)
- Increase number of repeated B-scans to improve SNR and reduce shadow impact
- Optimize inter-scan time: shorter for fast flow, longer for slow capillary flow
- Use active eye tracking and discard frames with large motion; average multiple volumes
Forward-Model Mismatch Cases
- The widefield fallback applies static spatial blur, but OCTA detects blood flow by comparing repeated OCT B-scans — the temporal decorrelation between scans caused by moving red blood cells is not modeled
- OCTA is fundamentally a motion-contrast technique (flow signal = decorrelation or variance between repeated measurements) — the widefield static model has no temporal dimension and cannot detect or distinguish flowing from static tissue
How to Correct the Mismatch
- Use the OCTA operator that models repeated OCT measurements at the same location: static tissue produces correlated signals while flowing blood produces decorrelated signals between repeated scans
- Extract flow maps using SSADA (split-spectrum amplitude decorrelation) or OMAG (optical microangiography) that require multiple temporally separated OCT measurements as input
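The inter-scan decorrelation contrast can be illustrated with a toy pair of repeated B-scans. The amplitudes, noise levels, and vessel location are invented for the sketch; real SSADA additionally splits the spectrum into sub-bands and averages several decorrelation estimates.

```python
import numpy as np

def decorrelation(a1, a2, eps=1e-9):
    """Per-pixel amplitude decorrelation between two repeated scans.

    D = 1 - 2*a1*a2 / (a1^2 + a2^2): 0 for identical (static)
    signals, approaching 1 for uncorrelated (flowing) signals.
    """
    return 1.0 - 2.0 * a1 * a2 / (a1**2 + a2**2 + eps)

rng = np.random.default_rng(0)
static = np.full((32, 32), 5.0)                 # static tissue amplitude
scan1 = static + rng.normal(0, 0.05, static.shape)
scan2 = static + rng.normal(0, 0.05, static.shape)
# Simulate a vessel (rows 10-13): amplitudes decorrelate between repeats
# because red blood cells move in the 3-10 ms inter-scan interval.
scan1[10:14, :] = rng.rayleigh(3.0, (4, 32))
scan2[10:14, :] = rng.rayleigh(3.0, (4, 32))
flow_map = decorrelation(scan1, scan2)
# Vessel rows show far higher decorrelation than the static background.
```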
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Jia et al., 'Split-spectrum amplitude-decorrelation angiography (SSADA)', Opt. Express 20, 4710 (2012)
- Spaide et al., 'OCT Angiography', Prog. Retin. Eye Res. 64, 1 (2018)
Canonical Datasets
- OCTA-500 (Li et al., Scientific Data 2024)
- ROSE retinal OCTA vessel segmentation
Optical Coherence Tomography
OCT is a low-coherence interferometric imaging technique that measures depth-resolved backscattering profiles (A-scans) by interfering sample-arm reflections with a reference mirror. In spectral-domain OCT, the interference spectrum is recorded by a spectrometer and the axial profile is obtained via Fourier transform. Axial resolution is determined by the source bandwidth (typically 3-7 um in tissue) and imaging depth by spectrometer resolution. Dominant artifacts include speckle noise, motion artifacts, and sensitivity roll-off with depth.
Optical Coherence Tomography
Description
OCT is a low-coherence interferometric imaging technique that measures depth-resolved backscattering profiles (A-scans) by interfering sample-arm reflections with a reference mirror. In spectral-domain OCT, the interference spectrum is recorded by a spectrometer and the axial profile is obtained via Fourier transform. Axial resolution is determined by the source bandwidth (typically 3-7 um in tissue) and imaging depth by spectrometer resolution. Dominant artifacts include speckle noise, motion artifacts, and sensitivity roll-off with depth.
Principle
Optical Coherence Tomography uses low-coherence interferometry to produce cross-sectional images of tissue microstructure. A broadband light source (superluminescent diode, ~840 nm or ~1310 nm) is split between sample and reference arms. Interference occurs only when the path lengths match within the coherence length (~5-10 μm), providing axial resolution. Spectral-domain OCT records the spectral interferogram and uses FFT for fast depth-resolved imaging.
How to Build the System
Build or acquire a spectral-domain OCT system: broadband SLD source (center 840 nm, 50 nm bandwidth for retinal; 1310 nm for dermal/cardiac), fiber-based Michelson interferometer, galvo scanner for lateral scanning, and a spectrometer with line camera (2048-4096 pixels) for spectral detection. Calibrate wavelength-to-wavenumber mapping, dispersion compensation, and reference arm delay. For swept-source OCT, use a frequency-swept laser (100-400 kHz sweep rate) and balanced detector.
Common Reconstruction Algorithms
- FFT-based spectral-domain OCT reconstruction (spectral interferogram → A-scan)
- Dispersion compensation (numerical or hardware)
- Speckle reduction (spatial/angular compounding, or deep-learning)
- Segmentation of retinal layers (graph-based, U-Net, or transformer models)
- OCT Angiography (OCTA) via decorrelation or phase-variance of repeated B-scans
Common Mistakes
- Dispersion mismatch between sample and reference arms degrading axial resolution
- Mirror image artifact from complex conjugate ambiguity in SD-OCT
- Sensitivity roll-off at deeper imaging depths not compensated
- Motion artifacts in 3-D OCT volumes (eye motion for ophthalmic OCT)
- Incorrect refractive index assumption for depth scale calibration
How to Avoid Mistakes
- Match fiber lengths and add numerical dispersion compensation in reconstruction
- Place the zero-delay near the sample surface; use full-range OCT if needed
- Use swept-source OCT for reduced roll-off; optimize spectrometer for uniform sensitivity
- Apply eye-tracking or motion-correction algorithms; average repeated B-scans
- Calibrate depth scale with a known-thickness reference standard
Forward-Model Mismatch Cases
- The widefield fallback applies spatial blur, but OCT acquires spectral interferograms that encode depth via low-coherence interferometry — the interference fringe pattern bears no resemblance to a blurred image
- OCT depth resolution comes from the broadband source coherence length (~5-10 um), not from spatial PSF — the widefield operator cannot model the axial sectioning, dispersion, or spectral-to-depth FFT relationship
How to Correct the Mismatch
- Use the OCT operator that models spectral-domain interferometry: y(k) = |E_ref + E_sample(k)|^2, where depth information is encoded in the spectral fringe frequency
- Reconstruct A-scans via FFT of the spectral interferogram after dispersion compensation and k-linearization; B-scans are formed by lateral scanning
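The spectral-fringe-to-depth relationship can be checked numerically. The sketch is idealized — k-linear sampling, matched dispersion, two mirror-like reflectors at made-up depths, no autocorrelation cross-terms — and all units are arbitrary.

```python
import numpy as np

# Simulate a spectral-domain interferogram for two reflectors and
# recover the A-scan by FFT: deeper reflectors produce faster fringes.
n_k = 2048
k = np.linspace(1.0, 1.2, n_k)      # wavenumber samples (a.u.)
depths = [80.0, 200.0]              # reflector depths (a.u.)
reflectivity = [1.0, 0.5]

# I(k) ~ DC + sum_j 2*sqrt(R_j)*cos(2*k*z_j)  (cross-terms omitted)
interferogram = np.ones(n_k)
for z, r in zip(depths, reflectivity):
    interferogram += 2.0 * np.sqrt(r) * np.cos(2.0 * k * z)

# A-scan: FFT magnitude after removing the DC background; with this
# k span, a reflector at depth z lands near bin z * 0.2 / pi.
a_scan = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
peaks = np.argsort(a_scan)[-2:]     # two strongest depth bins
```

In a real system the interferogram must first be resampled to uniform k and dispersion-compensated, or the peaks broaden and axial resolution degrades.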
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Huang et al., 'Optical coherence tomography', Science 254, 1178 (1991)
- de Boer et al., 'Twenty-five years of OCT', Biomed. Opt. Express 8, 3248 (2017)
Canonical Datasets
- Duke SD-OCT DME dataset (Chiu et al.)
- RETOUCH Challenge (retinal OCT)
- OCTA-500 (Li et al., Scientific Data 2024)
Optical Diffraction Tomography (ODT)
Optical Diffraction Tomography (ODT)
PALM/STORM Single-Molecule Localization
Photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve nanoscale resolution by stochastically activating sparse subsets of fluorescent molecules per frame, localizing each with sub-diffraction precision (proportional to sigma/sqrt(N) where N is detected photons), and accumulating localizations over thousands of frames. Typical localization precision is 10-30 nm. Primary challenges include overlapping emitters at high density, sample drift, and blinking statistics. Reconstruction uses Gaussian fitting (ThunderSTORM) or deep learning (DECODE).
PALM/STORM Single-Molecule Localization
Description
Photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve nanoscale resolution by stochastically activating sparse subsets of fluorescent molecules per frame, localizing each with sub-diffraction precision (proportional to sigma/sqrt(N) where N is detected photons), and accumulating localizations over thousands of frames. Typical localization precision is 10-30 nm. Primary challenges include overlapping emitters at high density, sample drift, and blinking statistics. Reconstruction uses Gaussian fitting (ThunderSTORM) or deep learning (DECODE).
Principle
Single-Molecule Localization Microscopy (PALM/STORM) achieves ~20 nm resolution by stochastically switching individual fluorophores between bright and dark states. In each frame, only a sparse subset of molecules emit, allowing their positions to be localized with sub-pixel precision by fitting 2-D Gaussians. Thousands of frames are accumulated and all localizations are plotted to form a super-resolution image.
How to Build the System
Use a TIRF microscope (100x 1.49 NA oil objective) with high-power laser excitation (200-500 mW at the sample, 647 nm for Alexa647 STORM or 561 nm for mEos PALM). TIRF geometry reduces background. An oxygen-scavenging buffer with thiol (MEA/BME) is critical for Alexa647 blinking. Use an EMCCD (Andor iXon 897) or fast sCMOS camera at a 30-100 Hz frame rate. Acquire 10,000-50,000 frames.
Common Reconstruction Algorithms
- ThunderSTORM (ImageJ plugin, MLE/LSQ Gaussian fitting)
- ZOLA-3D (3D localization with calibrated PSF models)
- DAOSTORM (multi-emitter fitting for high density)
- Drift correction (fiducial-based or cross-correlation)
- HAWK (high-density preprocessing) and ANNA-PALM (deep-learning accelerated SMLM)
Common Mistakes
- Density of active emitters too high, causing overlapping PSFs and localization errors
- Insufficient photon count per localization, yielding poor precision (>30 nm)
- Sample drift during long acquisitions not corrected
- Poor blinking statistics (incomplete on-off switching) from wrong buffer conditions
- Mistaking fixed-pattern noise or autofluorescence for single molecules
How to Avoid Mistakes
- Tune activation laser to achieve sparse single-molecule density per frame
- Optimize buffer (pH, thiol concentration, oxygen scavenger) for bright blinks (>1000 photons)
- Include fiducial markers (gold beads or TetraSpeck) and apply drift correction
- Prepare fresh imaging buffer immediately before acquisition; degas thoroughly
- Apply quality filters (photon threshold, localization precision, PSF shape) in analysis
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred intensity image, but PALM/STORM generates sparse single-molecule localizations — the correct forward model produces a list of (x,y,photons) events, not a convolved image
- Using a continuous PSF blur instead of the discrete point-emitter model (y = sum_i(n_i * PSF(r - r_i) + background)) means single-molecule fitting algorithms will receive incorrect input and localization precision estimates will be meaningless
How to Correct the Mismatch
- Use the PALM/STORM operator that simulates stochastic single-molecule activation: sparse emitters with Poisson photon counts, individually convolved with the PSF, on a per-frame basis
- Reconstruct using single-molecule localization (Gaussian fitting, MLE) on the correct sparse-emitter frames; the forward model must match the blinking kinetics and photon statistics of the fluorophore
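The sigma/sqrt(N) precision scaling quoted above can be verified with a toy Monte-Carlo. This is idealized — no background, no pixelation, a centroid estimator instead of MLE Gaussian fitting — and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_psf = 1.3      # PSF std dev in pixels (~130 nm at 100 nm/pixel)
n_photons = 1000     # detected photons per blink
true_xy = np.array([20.3, 17.8])

def localize_one_blink():
    """One stochastic blink: each detected photon lands at a
    PSF-distributed position; the centroid estimates the emitter."""
    photons = true_xy + rng.normal(0.0, sigma_psf, (n_photons, 2))
    return photons.mean(axis=0)

# Repeat many blinks; the spread of the estimates is the
# localization precision, which should match sigma / sqrt(N).
estimates = np.array([localize_one_blink() for _ in range(500)])
precision = estimates.std(axis=0).mean()    # empirical precision
expected = sigma_psf / np.sqrt(n_photons)   # theoretical scaling
```

Real data adds background and camera noise terms (e.g. the Thompson-Larson-Webb formula), which is why practical precision is 10-30 nm rather than this ideal limit.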
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Betzig et al., 'Imaging intracellular fluorescent proteins at nanometer resolution', Science 313, 1642-1645 (2006)
- Rust et al., 'Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)', Nature Methods 3, 793-796 (2006)
- Speiser et al., 'Deep learning enables fast and dense single-molecule localization (DECODE)', Nature Methods 18, 1082-1090 (2021)
Canonical Datasets
- SMLM Challenge 2016 (Sage et al., Nature Methods 2019)
- ThunderSTORM tutorial datasets
Panorama Multi-Focus Fusion
Multi-focus panoramic fusion combines images captured at different focal planes and/or different spatial positions to produce an all-in-focus image with extended depth of field and wide field of view. Focus stacking selects the sharpest regions from each focal plane using local contrast measures, then blends them via Laplacian pyramid fusion or wavelet-based methods. Panoramic stitching aligns overlapping images using feature matching (SIFT/SURF) and blends seams. Primary challenges include parallax at scene edges and focus measure ambiguity in low-texture regions.
Panorama Multi-Focus Fusion
Description
Multi-focus panoramic fusion combines images captured at different focal planes and/or different spatial positions to produce an all-in-focus image with extended depth of field and wide field of view. Focus stacking selects the sharpest regions from each focal plane using local contrast measures, then blends them via Laplacian pyramid fusion or wavelet-based methods. Panoramic stitching aligns overlapping images using feature matching (SIFT/SURF) and blends seams. Primary challenges include parallax at scene edges and focus measure ambiguity in low-texture regions.
Principle
Panoramic multi-focus fusion captures multiple images of the same wide scene at different focal distances and combines them to produce a single all-in-focus panorama with extended depth of field. Image stitching aligns overlapping frames using feature matching and homography estimation, while focus fusion selects the sharpest pixels from each focal plane.
How to Build the System
Mount a camera on a motorized panoramic head (nodal point rotation). For each pan/tilt position, capture a focus stack (3-10 images at different focus distances). Use a medium-aperture setting (f/5.6-f/8) for each frame. Stitch overlapping views (30 % horizontal overlap) and fuse focus stacks per view tile. Calibrate the panoramic head to rotate around the lens entrance pupil to minimize parallax.
Common Reconstruction Algorithms
- Laplacian pyramid focus fusion (weighted blending by local contrast)
- SIFT/SURF feature matching + RANSAC homography estimation
- Multi-band blending (Burt-Adelson) for seamless stitching
- Exposure fusion (Mertens et al.) for HDR panoramas
- Deep-learning focus stacking (DFDF, DeepFocus)
Common Mistakes
- Parallax errors from rotation not centered on the lens entrance pupil
- Ghosting from moving objects between sequential captures
- Color inconsistency between overlapping tiles due to auto-exposure variation
- Incomplete focus coverage leaving blurry regions in the final panorama
- Stitching artifacts at seam lines visible in the final output
How to Avoid Mistakes
- Use a calibrated panoramic head; verify no-parallax point for the specific lens
- Mask out or blend moving objects; capture quickly or use simultaneous multi-camera rigs
- Lock exposure, white balance, and focus (manual mode) across all tiles
- Plan focus distances to cover the entire depth range of the scene
- Use multi-band blending and choose seam lines in textureless regions
Forward-Model Mismatch Cases
- The widefield fallback applies Gaussian blur to a single image, but panoramic imaging involves geometric projection (cylindrical, spherical, or equirectangular) of the scene onto a wide field of view — the projection geometry is absent
- Panorama multi-focus fusion requires modeling focus variation across the wide FOV and stitching multiple exposures — the widefield single-frame model cannot capture the spatially varying focus or overlap regions
How to Correct the Mismatch
- Use the panorama operator that models the geometric projection (cylindrical or spherical warping) and focus-dependent blur across the wide field of view
- Reconstruct using image stitching with homography estimation, exposure fusion, and spatially varying deblurring that account for the correct projection geometry
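Sharpness-based focus selection can be sketched with a per-pixel Laplacian rule — a minimal stand-in for the Laplacian-pyramid blend, on a two-slice toy stack invented for the example.

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian as a simple local sharpness measure."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return out

def fuse_focus_stack(stack):
    """All-in-focus fusion: per pixel, keep the slice whose
    absolute Laplacian response (sharpness) is largest."""
    sharpness = np.stack([np.abs(laplacian(s)) for s in stack])
    choice = sharpness.argmax(axis=0)
    return np.take_along_axis(np.stack(stack), choice[None], axis=0)[0]

# Toy stack: slice 0 is in focus on the left half, slice 1 on the
# right half; "defocused" regions are rendered as featureless zeros.
x = np.linspace(0, 8 * np.pi, 64)
texture = np.sin(x)[None, :] * np.ones((64, 1))
s0 = texture.copy(); s0[:, 32:] = 0.0
s1 = texture.copy(); s1[:, :32] = 0.0
fused = fuse_focus_stack([s0, s1])
# Away from the seam, the fused result recovers the sharp texture.
```

Production pipelines apply the same selection on Laplacian-pyramid levels and blend the decision map to avoid hard seams.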
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Burt & Adelson, 'The Laplacian Pyramid as a Compact Image Code', IEEE Trans. Commun. 31, 532-540 (1983)
Canonical Datasets
- Lytro multi-focus test set
Particle Calorimetry
Particle Calorimetry
Passive Microwave Radiometry
Passive Microwave Radiometry
PET/CT
PET/CT
PET/MR
PET/MR
Phase Contrast Microscopy
Phase Contrast Microscopy
Photoacoustic Imaging
Photoacoustic imaging (PAI) is a hybrid modality that combines optical absorption contrast with ultrasonic detection. Short laser pulses (nanoseconds) are absorbed by tissue chromophores (hemoglobin, melanin), causing thermoelastic expansion that generates broadband ultrasound waves detected by transducer arrays. The forward model involves the photoacoustic wave equation: the initial pressure p_0(r) is proportional to the absorbed optical energy. Reconstruction inverts the acoustic propagation using delay-and-sum (DAS) or model-based algorithms.
Photoacoustic Imaging
Description
Photoacoustic imaging (PAI) is a hybrid modality that combines optical absorption contrast with ultrasonic detection. Short laser pulses (nanoseconds) are absorbed by tissue chromophores (hemoglobin, melanin), causing thermoelastic expansion that generates broadband ultrasound waves detected by transducer arrays. The forward model involves the photoacoustic wave equation: the initial pressure p_0(r) is proportional to the absorbed optical energy. Reconstruction inverts the acoustic propagation using delay-and-sum (DAS) or model-based algorithms.
Principle
Photoacoustic imaging converts absorbed pulsed laser light into ultrasound via thermoelastic expansion. Short laser pulses (<10 ns) are absorbed by tissue chromophores (hemoglobin, melanin), causing rapid thermal expansion that generates broadband acoustic waves. These waves are detected by ultrasound transducers and reconstructed to form images reflecting optical absorption contrast at ultrasonic spatial resolution.
How to Build the System
Combine a tunable pulsed laser (Nd:YAG pumped OPO, 680-1100 nm, 5-20 ns pulses, 10-20 Hz) with an ultrasound transducer array (linear or curved, 5-40 MHz). Deliver light via fiber bundle to the tissue surface adjacent to the transducer. Use a multi-channel DAQ (12-14 bit, 40-100 MS/s) to record acoustic signals. For tomographic PAT, surround the sample with a ring or spherical array of transducers.
Common Reconstruction Algorithms
- Universal back-projection for photoacoustic tomography
- Time-reversal reconstruction
- Model-based iterative reconstruction with acoustic heterogeneity
- Spectral unmixing for multi-wavelength functional PA imaging
- Deep-learning PA image reconstruction (U-Net, pixel-wise inversion)
Common Mistakes
- Insufficient laser fluence reaching target depth due to tissue scattering
- Acoustic heterogeneity (speed-of-sound variations) causing image distortion
- Limited-view artifacts from incomplete transducer coverage around the sample
- Coupling medium mismatch between transducer and tissue
- Laser safety violations from excessive skin surface fluence (>20 mJ/cm²)
How to Avoid Mistakes
- Use NIR wavelengths (700-900 nm optical window) for deeper penetration
- Use speed-of-sound correction maps or joint reconstruction for heterogeneous media
- Maximize angular coverage of transducer array; use virtual-detector techniques
- Use appropriate acoustic coupling gel or water bath between transducer and tissue
- Monitor laser fluence at the tissue surface; comply with ANSI Z136.1 MPE limits
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred (64,64) image, but photoacoustic imaging acquires time-resolved pressure signals at transducer elements — output shape (n_time, n_detectors) represents acoustic wave arrivals, not an image
- Photoacoustic signal generation involves optical absorption → thermoelastic expansion → acoustic wave propagation — the widefield blur has no connection to the optical-acoustic conversion physics
How to Correct the Mismatch
- Use the photoacoustic operator that models the forward problem: laser absorption creates initial pressure p_0(r) = Gamma * mu_a * Phi(r), then acoustic waves propagate to transducer elements
- Reconstruct using time-reversal, back-projection, or model-based iterative methods that invert the acoustic wave equation from measured pressure time series to initial pressure distribution
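Delay-and-sum back-projection can be sketched for a point absorber and a ring array. The geometry, sampling rate, and Gaussian pulse (standing in for the true N-shaped photoacoustic waveform) are all invented for the example.

```python
import numpy as np

c = 1500.0                       # speed of sound (m/s)
fs = 20e6                        # sampling rate (Hz)
n_t = 300
src = np.array([0.002, -0.001])  # point absorber position (m)

# Ring of 64 transducer elements, radius 10 mm.
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dets = 0.01 * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Forward model: each element records a pulse at its time of flight.
t = np.arange(n_t) / fs
signals = np.zeros((64, n_t))
for i, d in enumerate(dets):
    tof = np.linalg.norm(d - src) / c
    signals[i] = np.exp(-0.5 * ((t - tof) / 60e-9) ** 2)

# DAS reconstruction: for each pixel, sum the samples that each
# element would have recorded if the source were at that pixel.
grid = np.linspace(-0.005, 0.005, 41)
image = np.zeros((41, 41))
for iy, y in enumerate(grid):
    for ix, x in enumerate(grid):
        delays = np.linalg.norm(dets - np.array([x, y]), axis=1) / c
        idx = np.clip(np.round(delays * fs).astype(int), 0, n_t - 1)
        image[iy, ix] = signals[np.arange(64), idx].sum()

peak = np.unravel_index(image.argmax(), image.shape)
# The DAS image peaks at the grid cell containing the absorber.
```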
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Wang & Yao, 'Photoacoustic microscopy and computed tomography', Nature Methods 13, 627-638 (2016)
- Manwar et al., 'OADAT: Optoacoustic dataset', J. Biophotonics 2024
Canonical Datasets
- OADAT (optoacoustic benchmark)
- IPASC consensus datasets
Photometric Stereo
Photometric Stereo
Polarimetric SAR (PolSAR)
Polarimetric SAR (PolSAR)
Polarization Microscopy
Polarization microscopy measures anisotropic optical properties by analyzing the polarization state of light through the sample. In Mueller matrix imaging, the sample is illuminated with known polarization states and the output is analyzed, yielding a 4x4 Mueller matrix at each pixel encoding birefringence, optical activity, and depolarization. The LC-PolScope uses liquid crystal retarders for rapid modulation. Reconstruction involves solving for Mueller elements and Lu-Chipman decomposition into physically meaningful parameters.
Polarization Microscopy
Description
Polarization microscopy measures anisotropic optical properties by analyzing the polarization state of light through the sample. In Mueller matrix imaging, the sample is illuminated with known polarization states and the output is analyzed, yielding a 4x4 Mueller matrix at each pixel encoding birefringence, optical activity, and depolarization. The LC-PolScope uses liquid crystal retarders for rapid modulation. Reconstruction involves solving for Mueller elements and Lu-Chipman decomposition into physically meaningful parameters.
Principle
Polarization microscopy exploits the birefringence (orientation-dependent refractive index) of ordered biological structures such as collagen fibers, spindle microtubules, and crystalline inclusions. By analyzing the polarization state of transmitted or reflected light, structural anisotropy can be measured without fluorescent labeling. Quantitative techniques (LC-PolScope) measure both retardance magnitude and slow-axis orientation.
How to Build the System
Mount a liquid-crystal universal compensator (LC-PolScope by OpenPolScope, or Abrio system) on a standard brightfield microscope. Use strain-free optics and rotate the analyzer while keeping the polarizer fixed (or use a rotating stage). For quantitative imaging, acquire 4-5 images at different compensator settings. A monochromatic light source (546 nm green filter) minimizes chromatic effects.
Common Reconstruction Algorithms
- Mueller matrix decomposition (full polarimetric imaging)
- Jones calculus for coherent polarization analysis
- Background retardance subtraction
- Stokes parameter reconstruction from intensity measurements
- Deep-learning retardance estimation from fewer raw frames
Common Mistakes
- Strain birefringence in optical components contaminating the measurement
- Incorrect compensator calibration producing quantitative retardance errors
- Not accounting for sample tilt introducing apparent birefringence artifacts
- Using polychromatic light causing wavelength-dependent retardance errors
- Ignoring depolarization effects in thick or scattering samples
How to Avoid Mistakes
- Use strain-free objectives and verify zero retardance on a blank field
- Calibrate the liquid-crystal compensator at each session using a known retarder
- Ensure sample is flat and perpendicular to the optical axis
- Use narrow-band illumination or measure dispersion for wavelength correction
- For thick samples, consider Mueller matrix imaging to capture depolarization
Forward-Model Mismatch Cases
- The widefield fallback treats light as a scalar intensity, but polarization microscopy measures the full Mueller matrix or Stokes parameters — the vector nature of light (birefringence, dichroism, depolarization) is completely lost
- The fallback produces a single-channel image, but the correct operator generates 4+ channels (Stokes S0-S3 or multiple polarizer/analyzer orientations), each encoding different polarization properties of the sample
How to Correct the Mismatch
- Use the polarization operator that generates images at multiple polarizer/analyzer angles (0, 45, 90, 135 degrees), encoding the sample's Jones or Mueller matrix at each pixel
- Reconstruct birefringence retardance and orientation from the polarization-resolved measurements using Mueller calculus or Jones matrix decomposition
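The intensity-to-Stokes step at the four analyzer angles mentioned above can be illustrated directly. Only the linear Stokes components are shown (S3 needs a circular analyzer), and the fully polarized 30° input is a made-up test case following Malus's law.

```python
import numpy as np

def stokes_from_four_angles(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities measured through a
    linear analyzer at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45 degrees
    return s0, s1, s2

def aolp_dolp(s0, s1, s2):
    """Angle and degree of linear polarization."""
    aolp = 0.5 * np.arctan2(s2, s1)
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return aolp, dolp

# Fully polarized light at 30 degrees: I(a) = cos^2(30deg - a).
theta0 = np.deg2rad(30.0)
meas = [np.cos(theta0 - np.deg2rad(a)) ** 2 for a in (0, 45, 90, 135)]
s0, s1, s2 = stokes_from_four_angles(*meas)
aolp, dolp = aolp_dolp(s0, s1, s2)
# Recovers a 30-degree polarization angle with unit degree of polarization.
```

The same per-pixel arithmetic applied to polarization-resolved images yields the orientation maps from which retardance is derived via Mueller or Jones analysis.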
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Mehta et al., 'Quantitative polarized light microscopy using the LC-PolScope', Live Cell Imaging: A Laboratory Manual, CSHL Press (2010)
- Lu & Chipman, 'Interpretation of Mueller matrices based on polar decomposition', J. Opt. Soc. Am. A 13, 1106-1113 (1996)
Canonical Datasets
- OpenPolScope calibration data
- Collagen SHG/polarization histopathology datasets
Portal Imaging (EPID)
Portal Imaging (EPID)
Positron Emission Tomography
PET images the 3D distribution of a positron-emitting radiotracer (e.g. 18F-FDG) by detecting coincident 511 keV annihilation photon pairs along lines of response (LORs). The forward model is a system matrix encoding the detection probability for each voxel-LOR pair, incorporating attenuation, scatter, randoms, and detector response. Reconstruction uses iterative ML-EM/OSEM algorithms with attenuation correction from co-registered CT. Low count rates yield Poisson noise; time-of-flight (TOF) information improves SNR.
Positron Emission Tomography
Description
PET images the 3D distribution of a positron-emitting radiotracer (e.g. 18F-FDG) by detecting coincident 511 keV annihilation photon pairs along lines of response (LORs). The forward model is a system matrix encoding the detection probability for each voxel-LOR pair, incorporating attenuation, scatter, randoms, and detector response. Reconstruction uses iterative ML-EM/OSEM algorithms with attenuation correction from co-registered CT. Low count rates yield Poisson noise; time-of-flight (TOF) information improves SNR.
Principle
Positron Emission Tomography detects pairs of 511 keV gamma rays emitted in opposite directions when a positron from a radiotracer annihilates with an electron. Coincidence detection of the two photons defines a line of response (LOR). Many LORs from different angles are reconstructed into a 3-D activity distribution map, providing functional and metabolic information.
How to Build the System
A PET scanner consists of a ring of scintillation detector blocks (LYSO or LSO crystals coupled to SiPMs) surrounding the patient. Each detector block has a matrix of small crystals (3-4 mm pitch). Coincidence electronics pair detected events within a timing window (4-6 ns for TOF-PET). Modern digital PET systems achieve 200-300 ps timing resolution for time-of-flight. Daily quality checks include detector normalization, timing calibration, and sensitivity phantom scans.
Common Reconstruction Algorithms
- OSEM (Ordered Subset Expectation Maximization)
- 3D OSEM with resolution modeling (PSF reconstruction)
- TOF-OSEM (time-of-flight enhanced OSEM)
- Attenuation correction from CT (PET/CT) or Dixon MR (PET/MR)
- Deep-learning PET denoising (low-count to full-count prediction)
Common Mistakes
- Incorrect attenuation correction map (misregistration between PET and CT)
- Patient motion between PET and CT causing attenuation-emission mismatch
- Metal artifacts in CT propagating into PET attenuation correction
- Scatter correction errors in patients with large body habitus
- SUV calculation errors from incorrect weight, dose, or timing entries
How to Avoid Mistakes
- Verify PET-CT registration quality; use respiratory gating for thorax/abdomen
- Minimize time between CT and PET acquisitions; co-register if needed
- Use MAR-corrected CT or MR-based attenuation correction to avoid metal artifacts
- Use Monte Carlo scatter correction models validated for the patient population
- Double-check injected dose, patient weight, injection time, and decay correction
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred (64,64) image, but PET acquires sinogram data of shape (n_angles, n_radial) from coincidence detection of annihilation photon pairs — output shape (32,64) vs (64,64)
- PET measurement physics (positron emission → annihilation → 511 keV photon pair → coincidence detection) is fundamentally different from optical blur — the fallback cannot model attenuation correction, scatter, randoms, or detector normalization
How to Correct the Mismatch
- Use the PET operator that models the system matrix: y = A*x + scatter + randoms, where A encodes line-of-response geometry and attenuation
- Reconstruct using OSEM (Ordered Subsets Expectation Maximization) with the correct system matrix, attenuation map, and scatter/randoms estimates
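The OSEM update above can be sketched in a few lines. This is a toy, hedged illustration: the dense random system matrix stands in for real LOR geometry with attenuation, and the scatter/randoms term, sizes, and iteration counts are placeholders, not a validated PET pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_lor = 16, 48
A = rng.uniform(0.0, 1.0, size=(n_lor, n_pix))   # toy stand-in for LOR geometry + attenuation
x_true = rng.uniform(1.0, 5.0, size=n_pix)       # ground-truth activity
scatter = np.full(n_lor, 0.2)                    # additive scatter + randoms estimate
y = A @ x_true + scatter                         # noiseless measured counts

subsets = np.array_split(np.arange(n_lor), 4)    # 4 ordered subsets of LORs
x = np.ones(n_pix)                               # uniform non-negative initialization
for _ in range(20):
    for s in subsets:
        As = A[s]
        ratio = y[s] / (As @ x + scatter[s])     # measured / modeled counts per LOR
        x *= (As.T @ ratio) / As.sum(axis=0)     # multiplicative EM update (stays non-negative)
```

Each subset pass back-projects the data/model ratio and rescales the activity estimate, which is why OSEM converges roughly n_subsets times faster than plain MLEM per full pass.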
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Shepp & Vardi, 'Maximum likelihood reconstruction for emission tomography', IEEE TMI 1, 113-122 (1982)
- Gatidis et al., 'AutoPET Challenge 2022', MICCAI 2022
Canonical Datasets
- AutoPET Challenge (whole-body FDG-PET/CT)
- TCIA PET/CT collections
Proton Radiography
Proton radiography/CT uses high-energy proton beams (100-250 MeV) to image the relative stopping power (RSP) of tissue, which is the quantity directly needed for proton therapy treatment planning. Unlike X-rays which measure attenuation, proton imaging measures the energy loss and scattering of individual protons as they traverse the object. Each proton's entry/exit position and angle are tracked, and the residual energy is measured. The RSP is reconstructed from many proton histories using iterative algorithms. Challenges include multiple Coulomb scattering (which blurs the spatial resolution to ~1 mm) and the need for single-proton tracking at high rates.
Proton Radiography
Description
Proton radiography/CT uses high-energy proton beams (100-250 MeV) to image the relative stopping power (RSP) of tissue, which is the quantity directly needed for proton therapy treatment planning. Unlike X-rays which measure attenuation, proton imaging measures the energy loss and scattering of individual protons as they traverse the object. Each proton's entry/exit position and angle are tracked, and the residual energy is measured. The RSP is reconstructed from many proton histories using iterative algorithms. Challenges include multiple Coulomb scattering (which blurs the spatial resolution to ~1 mm) and the need for single-proton tracking at high rates.
Principle
Proton radiography images the transmission and scattering of high-energy protons (50-800 MeV) through dense objects. Unlike X-rays, protons undergo significant multiple Coulomb scattering (MCS) in matter, which provides density and compositional contrast. Both transmission (energy loss) and scattering angle measurements contribute to image formation. Proton radiography can penetrate very dense materials (steel, depleted uranium) that are opaque to X-rays.
How to Build the System
Requires a high-energy proton accelerator facility (synchrotron or cyclotron delivering 200-800 MeV protons). The object is placed in the beam path between tracking detectors (silicon strip or GEM detectors) that measure each proton's position and angle before and after the object. A magnetic spectrometer (quadrupole lens system, e.g., at LANL pRad facility) focuses transmitted protons onto a scintillator + camera detector.
Common Reconstruction Algorithms
- Most Likely Path (MLP) estimation for proton CT reconstruction
- Filtered back-projection with scattering-angle weighting
- Algebraic reconstruction (ART) with MCS forward model
- Material discrimination from dual-parameter (transmission + scattering) analysis
- Deep-learning proton CT reconstruction for reduced view angles
Common Mistakes
- Ignoring multiple Coulomb scattering in the reconstruction model, causing blur
- Nuclear interaction losses (protons stopped or scattered out of detector acceptance)
- Insufficient proton statistics leading to noisy images
- Energy straggling not modeled, causing depth-of-field blur in radiography
- Detector alignment errors between upstream and downstream tracking systems
How to Avoid Mistakes
- Use MLP or cubic spline path estimation in iterative reconstruction algorithms
- Account for nuclear interaction losses in the forward model; filter outlier tracks
- Accumulate sufficient proton histories (>10⁶ for radiography, >10⁸ for proton CT)
- Include energy straggling in the forward model or use higher energy protons to reduce it
- Carefully align tracking detectors with survey or use track-based alignment algorithms
Forward-Model Mismatch Cases
- The widefield fallback applies Gaussian blur, but proton radiography measures energy loss and multiple Coulomb scattering (MCS) of high-energy protons traversing the object — the scattering angle distribution encodes areal density, not spatial blur
- Protons lose energy continuously (Bethe-Bloch formula: -dE/dx ~ Z/A * z^2/beta^2) and scatter via Coulomb interaction — the measurement combines transmission intensity, residual energy, and scattering angle, none of which are modeled by optical blur
How to Correct the Mismatch
- Use the proton radiography operator that models energy-dependent proton transport: energy loss via Bethe-Bloch stopping power and angular broadening via Highland MCS formula (theta_rms ~ 13.6 MeV/(p*v) * sqrt(t/X_0))
- Reconstruct water-equivalent path length (WEPL) maps from residual energy measurements, or use scattering radiography for material discrimination — essential for proton therapy treatment planning
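As a numeric sanity check of the Highland formula above, a short sketch for 200 MeV protons traversing 10 cm of water; the rest mass, water radiation length, and thickness are standard values used purely for illustration:

```python
import numpy as np

m_p = 938.272                    # proton rest mass [MeV/c^2]
T = 200.0                        # kinetic energy [MeV]
E = T + m_p                      # total energy [MeV]
p = np.sqrt(E**2 - m_p**2)       # momentum [MeV/c]
beta = p / E                     # v/c
X0 = 36.08                       # radiation length of water [cm]
x_cm = 10.0                      # traversed thickness [cm]
t = x_cm / X0                    # thickness in radiation lengths
# Highland formula with logarithmic correction (projectile charge z = 1)
theta0 = 13.6 / (beta * p) * np.sqrt(t) * (1 + 0.038 * np.log(t))
theta0_mrad = theta0 * 1e3       # ~19 mrad RMS scattering angle
```

At ~19 mrad over a 10 cm path, MCS displaces a proton by millimeters — which is why the ~1 mm resolution limit and MLP path estimation mentioned above matter.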
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Schulte et al., 'Conceptual design of a proton computed tomography system for applications in proton radiation therapy', IEEE Trans. Nucl. Sci. 51, 866-872 (2004)
Canonical Datasets
- Simulated proton CT phantoms (Penfold et al.)
Proton Therapy Imaging
Proton Therapy Imaging
Ptychographic Imaging
Ptychography is a scanning coherent diffractive imaging technique where a coherent beam (X-ray or electron) illuminates overlapping regions of the sample and far-field diffraction patterns are recorded at each scan position. The overlap between adjacent probe positions provides redundancy that enables simultaneous recovery of the complex-valued object transmission function and the illumination probe via iterative algorithms (ePIE, difference map). The forward model at each position is I_j = |F{P(r-r_j) * O(r)}|^2 where P is the probe and O is the object. Achievable resolution is limited by the detector NA, not the optics, reaching sub-10 nm for X-rays.
Ptychographic Imaging
Description
Ptychography is a scanning coherent diffractive imaging technique where a coherent beam (X-ray or electron) illuminates overlapping regions of the sample and far-field diffraction patterns are recorded at each scan position. The overlap between adjacent probe positions provides redundancy that enables simultaneous recovery of the complex-valued object transmission function and the illumination probe via iterative algorithms (ePIE, difference map). The forward model at each position is I_j = |F{P(r-r_j) * O(r)}|^2 where P is the probe and O is the object. Achievable resolution is limited by the detector NA, not the optics, reaching sub-10 nm for X-rays.
Principle
Ptychography is a scanning coherent diffractive imaging technique where a coherent beam (visible, X-ray, or electron) illuminates overlapping regions of the sample. At each scan position, a far-field diffraction pattern is recorded. The redundancy from overlapping illumination positions constrains the phase-retrieval problem, enabling simultaneous recovery of both the complex sample transmittance and the illumination probe function.
How to Build the System
For X-ray ptychography at a synchrotron: focus the beam to a defined spot (0.1-1 μm) using a Fresnel zone plate or KB mirrors. Mount the sample on a precision piezo scanning stage. Place a photon-counting area detector (Eiger, Pilatus) in the far field (1-5 m downstream). Scan positions should overlap by 60-70 %. For visible-light or electron ptychography, adapt the geometry but maintain the overlap requirement.
Common Reconstruction Algorithms
- ePIE (extended Ptychographic Iterative Engine)
- Difference Map algorithm
- Maximum Likelihood refinement (MLR)
- PtychoShelves (modular framework for ptychographic reconstruction)
- Deep-learning ptychography (PtychoNN, learned phase retrieval)
Common Mistakes
- Insufficient overlap between adjacent scan positions (need ≥60 %)
- Position errors in the scanning stage causing reconstruction artifacts
- Partial coherence effects not modeled, degrading recovered phase
- Vibration or drift during the scan corrupting the diffraction data
- Detector saturation at the central beam stop region
How to Avoid Mistakes
- Maintain ≥65 % overlap; include position correction in the reconstruction algorithm
- Use position refinement (annealing) as part of the ptychographic reconstruction
- Include mixed-state (multi-mode) probe to model partial coherence
- Use interferometric position feedback and short dwell times per point
- Use a semi-transparent beam stop or high-dynamic-range detector modes
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) image, but ptychography acquires diffraction patterns at multiple overlapping scan positions — output shape (n_positions, det_x, det_y) is a set of far-field intensity measurements
- Ptychography is fundamentally nonlinear (y_j = |F{P * O_j}|^2, intensity of Fourier transform of probe times object) — the widefield linear blur cannot model coherent wave propagation, diffraction, or phase retrieval
How to Correct the Mismatch
- Use the ptychography operator that generates one far-field diffraction pattern per probe position, with overlapping illumination enabling redundant phase information for robust reconstruction
- Reconstruct using PIE (Ptychographic Iterative Engine), ePIE, or gradient-descent methods that alternate between real-space (overlap constraint) and Fourier-space (modulus constraint) using the coherent forward model
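The real-space/Fourier-space alternation can be sketched at toy scale. The snippet below is an object-only ePIE-style loop with the probe held fixed and known; sizes, the disc probe, the phase range, and the 4-pixel scan step are illustrative choices, not a production reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 32, 16                                           # object and probe grid sizes
obj = np.exp(1j * rng.uniform(-0.5, 0.5, (N, N)))       # phase-only ground-truth object
probe = np.zeros((M, M), complex)
yy, xx = np.mgrid[:M, :M] - M // 2
probe[yy**2 + xx**2 < (M // 3) ** 2] = 1.0              # disc-shaped illumination
positions = [(r, c) for r in range(0, N - M, 4) for c in range(0, N - M, 4)]
data = [np.abs(np.fft.fft2(probe * obj[r:r+M, c:c+M])) ** 2 for r, c in positions]

est = np.ones((N, N), complex)                          # flat starting guess
for _ in range(30):
    for (r, c), I in zip(positions, data):
        patch = est[r:r+M, c:c+M]                       # view into the object estimate
        psi = probe * patch                             # exit wave (real-space overlap constraint)
        Psi = np.fft.fft2(psi)
        Psi = np.sqrt(I) * np.exp(1j * np.angle(Psi))   # Fourier modulus constraint
        # ePIE object-update step (in-place on the shared view)
        patch += np.conj(probe) / (np.abs(probe).max() ** 2 + 1e-8) * (np.fft.ifft2(Psi) - psi)
```

Because the 4-pixel step gives ~75 % linear overlap, each pixel is constrained by several diffraction patterns, which is the redundancy the Principle section describes.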
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Rodenburg & Faulkner, 'A phase retrieval algorithm for shifting illumination (PIE)', Appl. Phys. Lett. 85, 4795-4797 (2004)
- Maiden & Rodenburg, 'An improved ptychographical phase retrieval algorithm for diffractive imaging (ePIE)', Ultramicroscopy 109, 1256-1262 (2009)
- Thibault et al., 'High-resolution scanning X-ray diffraction microscopy', Science 321, 379-382 (2008)
Canonical Datasets
- PtychoNN benchmark datasets (Cherukara et al.)
- Diamond I13 ptychography test data
Pump-Probe Microscopy
Pump-Probe Microscopy
Quantum Illumination
Quantum Illumination
Radio Aperture Synthesis
Radio Aperture Synthesis
Radio Interferometry (VLBI)
Radio Interferometry (VLBI)
Raman Imaging / Microscopy
Raman Imaging / Microscopy
Scanning Acoustic Microscopy (SAM)
Scanning Acoustic Microscopy (SAM)
Scanning Electron Microscopy
SEM forms images by rastering a focused electron beam (1-30 keV) across the specimen surface and collecting secondary electrons (SE, topographic contrast) or backscattered electrons (BSE, compositional Z-contrast). Resolution is determined by the probe diameter (1-10 nm), accelerating voltage, and interaction volume. Key artifacts include charging in non-conductive specimens, drift, and contamination.
Scanning Electron Microscopy
Description
SEM forms images by rastering a focused electron beam (1-30 keV) across the specimen surface and collecting secondary electrons (SE, topographic contrast) or backscattered electrons (BSE, compositional Z-contrast). Resolution is determined by the probe diameter (1-10 nm), accelerating voltage, and interaction volume. Key artifacts include charging in non-conductive specimens, drift, and contamination.
Principle
Scanning Electron Microscopy rasters a focused electron beam (0.1-30 keV) across the sample surface. Secondary electrons (SE) emitted from the top few nanometers provide topographic contrast, while backscattered electrons (BSE) from deeper interactions reveal compositional contrast (higher Z → more BSE). The image is formed point-by-point, with resolution down to 1-5 nm determined by the probe size.
How to Build the System
Operate a field-emission SEM (FEG-SEM, e.g., Zeiss GeminiSEM, JEOL JSM-7800F) under high vacuum (< 10⁻⁴ Pa). Mount samples on conductive stubs with carbon tape or silver paint. Non-conductive samples must be sputter-coated (5-10 nm Au/Pd or C) to prevent charging. Set accelerating voltage (1-5 kV for surface detail, 10-20 kV for BSE compositional contrast). Select appropriate detectors (Everhart-Thornley for SE, solid-state for BSE). Align the column and perform astigmatism correction.
Common Reconstruction Algorithms
- Noise reduction by frame averaging or Kalman filtering
- Charging artifact compensation (dynamic focus, low-kV imaging)
- 3-D surface reconstruction from stereo-pair SEM images
- Deep-learning SEM denoising (for low-dose or fast-scan images)
- Automated particle analysis and morphometry
Common Mistakes
- Sample charging causing bright streaks and image distortion
- Astigmatism not corrected, producing elongated features
- Excessive beam current damaging or contaminating delicate samples
- Carbon contamination from residual hydrocarbons in the chamber
- Wrong working distance causing suboptimal resolution or depth of field
How to Avoid Mistakes
- Coat non-conductive samples or use low-vacuum/variable-pressure mode
- Correct astigmatism carefully using the wobbler on a recognizable feature
- Use the minimum beam current needed; work at low kV for beam-sensitive samples
- Plasma-clean the chamber and samples; use a cold trap to reduce contamination
- Optimize working distance for the specific detector and resolution requirement
Forward-Model Mismatch Cases
- The widefield fallback applies optical Gaussian blur, but SEM image formation involves electron-sample interaction (secondary electron yield depends on surface topography and composition) — the contrast mechanism is fundamentally different from optical fluorescence
- SEM contrast (SE and BSE signals) depends on accelerating voltage, material Z-number, surface tilt, and detector geometry — the widefield PSF convolution model cannot capture these electron-matter interaction physics
How to Correct the Mismatch
- Use the SEM operator that models the electron probe profile (sub-nm spot) and secondary/backscattered electron yield as a function of local surface topography and composition
- Include the interaction volume (Monte Carlo electron trajectory simulation), detector angular acceptance, and signal mixing between SE (topography) and BSE (composition) channels
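Short of a full Monte Carlo trajectory simulation, the dominant topographic term can be illustrated with the secant-law approximation, delta(theta) ~ delta_0 / cos(theta): SE yield rises with local surface tilt, so edges and slopes appear bright. The normal-incidence yield below is an arbitrary illustrative value:

```python
import numpy as np

delta_0 = 0.1                                   # normal-incidence SE yield (illustrative)
theta = np.deg2rad([0.0, 30.0, 60.0, 80.0])     # local surface tilt angles
delta = delta_0 / np.cos(theta)                 # secant law: tilted facets emit more SEs
```

This first-order model captures edge brightening only; real SE contrast also depends on composition, detector geometry, and the interaction volume noted above.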
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Goldstein et al., 'Scanning Electron Microscopy and X-ray Microanalysis', Springer (2018)
Canonical Datasets
- SEM Dataset for Nanomaterial Segmentation (Aversa et al.)
- NIST SEM calibration images
Scanning Transmission Electron Microscopy
STEM focuses the electron beam to a sub-angstrom probe and scans it across a thin specimen. The HAADF detector collects electrons scattered to large angles (>50 mrad), producing incoherent Z-contrast images where intensity scales as ~Z^1.7, enabling direct compositional interpretation at atomic resolution. Aberration correction (C3/C5 correctors) achieves sub-50 pm probe sizes. Primary degradations include scan distortion, probe instability, and radiation damage.
Scanning Transmission Electron Microscopy
Description
STEM focuses the electron beam to a sub-angstrom probe and scans it across a thin specimen. The HAADF detector collects electrons scattered to large angles (>50 mrad), producing incoherent Z-contrast images where intensity scales as ~Z^1.7, enabling direct compositional interpretation at atomic resolution. Aberration correction (C3/C5 correctors) achieves sub-50 pm probe sizes. Primary degradations include scan distortion, probe instability, and radiation damage.
Principle
Scanning TEM focuses the electron beam to a fine probe (0.05-1 nm) and scans it across the specimen. Multiple detectors collect signals simultaneously: bright-field (BF), annular dark-field (ADF), and high-angle annular dark-field (HAADF). HAADF-STEM provides Z-contrast imaging where intensity scales approximately as Z^1.7, enabling direct interpretation of atomic columns by atomic number.
How to Build the System
Use an aberration-corrected STEM (probe-corrected, e.g., Thermo Fisher Titan Themis or JEOL ARM300F). Align the probe-corrector to minimize C₃ and C₅ aberrations, achieving sub-Ångström probe size. Adjust camera length for HAADF inner angle (typically 50-80 mrad for Z-contrast). Prepare atomically thin specimens by FIB or mechanical exfoliation. Use drift-corrected frame integration for high-quality atomic-resolution images.
Common Reconstruction Algorithms
- Atom column detection and quantification (peak finding, Gaussian fitting)
- Strain mapping via geometric phase analysis (GPA) or peak-pair analysis
- Multi-frame averaging with rigid/non-rigid registration for noise reduction
- HAADF simulation (frozen-phonon multislice) for quantitative comparison
- Deep-learning STEM image denoising and super-resolution
Common Mistakes
- Probe aberrations not fully corrected, producing probe tails and delocalization
- Scan distortion (flyback, drift) causing apparent lattice strain artifacts
- Sample mistilt from zone axis, reducing contrast of atomic columns
- Amorphous surface layers (from FIB damage) obscuring atomic contrast
- Electron channeling effects complicating quantitative HAADF interpretation
How to Avoid Mistakes
- Tune corrector regularly using Zemlin tableau or Ronchigram analysis
- Apply scan distortion correction using known lattice spacings as reference
- Tilt to exact zone axis using CBED pattern or Ronchigram fine alignment
- Use low-kV FIB final polishing or Ar-ion milling to minimize surface damage
- Simulate HAADF images with the exact specimen thickness for quantitative analysis
Forward-Model Mismatch Cases
- The widefield fallback applies a Gaussian PSF blur, but STEM forms images by rastering a focused electron probe (~0.1 nm) and collecting scattered electrons with annular detectors — the contrast depends on detector geometry (BF, ADF, HAADF) not optical PSF shape
- HAADF-STEM contrast is proportional to Z^~1.7 (atomic number contrast), enabling direct chemical imaging — the widefield PSF convolution produces optical-type blur with no Z-contrast information
How to Correct the Mismatch
- Use the STEM operator that models the electron probe profile (aberration-corrected sub-angstrom) and detector-dependent signal collection: ADF integrates scattered electrons over the annular detector range
- For quantitative STEM, include the probe-forming aberration function, thermal diffuse scattering, and detector inner/outer angle to correctly model Z-contrast and strain mapping
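The Z^1.7 scaling quoted above already permits rough column-intensity predictions. A one-line sketch (elements chosen arbitrarily; the effective exponent is approximate and varies with detector angles and specimen thickness):

```python
Z = {"C": 6, "Si": 14, "Sr": 38, "Au": 79}            # atomic numbers
I_rel = {el: z**1.7 for el, z in Z.items()}           # relative HAADF column intensity
ratio_Au_C = I_rel["Au"] / I_rel["C"]                 # Au columns roughly 80x brighter than C
```

The steep ratio is why heavy-atom columns dominate HAADF images and why light elements (C, O) often need complementary ABF or ptychographic imaging.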
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Pennycook & Nellist, 'Z-Contrast STEM Imaging', Springer (2011)
- Krivanek et al., 'Atom-by-atom structural and chemical analysis by annular dark-field electron microscopy', Nature 464, 571 (2010)
Canonical Datasets
- NCEM Molecular Foundry STEM benchmarks
- EMPIAR STEM datasets
Scanning Tunneling Microscopy (STM)
Scanning Tunneling Microscopy (STM)
Second Harmonic Generation (SHG) Microscopy
Second Harmonic Generation (SHG) Microscopy
Secondary Ion Mass Spectrometry (SIMS) Imaging
Secondary Ion Mass Spectrometry (SIMS) Imaging
Seismic Tomography
Seismic Tomography
Shear-Wave Elastography
Shear-wave elastography (SWE) quantifies tissue stiffness by generating shear waves using an acoustic radiation force impulse (ARFI) push and tracking their propagation with ultrafast ultrasound imaging (10,000+ fps). The shear wave speed c_s is related to the shear modulus by mu = rho * c_s^2, enabling quantitative mapping of Young's modulus E = 3*mu (assuming incompressibility). The technique is clinically validated for liver fibrosis staging (F0-F4) and breast lesion characterization. Challenges include shear wave attenuation in deep tissue and reflections from boundaries.
Shear-Wave Elastography
Description
Shear-wave elastography (SWE) quantifies tissue stiffness by generating shear waves using an acoustic radiation force impulse (ARFI) push and tracking their propagation with ultrafast ultrasound imaging (10,000+ fps). The shear wave speed c_s is related to the shear modulus by mu = rho * c_s^2, enabling quantitative mapping of Young's modulus E = 3*mu (assuming incompressibility). The technique is clinically validated for liver fibrosis staging (F0-F4) and breast lesion characterization. Challenges include shear wave attenuation in deep tissue and reflections from boundaries.
Principle
Shear-wave elastography measures tissue stiffness by tracking the propagation speed of shear waves generated by an acoustic radiation force impulse (ARFI) or external vibration. Shear-wave speed is proportional to the square root of the shear modulus: cₛ = √(μ/ρ). Stiffer tissues (fibrosis, tumors) have faster shear-wave propagation. Results are displayed as quantitative elasticity maps (in kPa or m/s).
How to Build the System
Use a clinical ultrasound system with shear-wave elastography mode (Supersonic Imagine Aixplorer, Siemens ARFI/VTQ, or GE 2D-SWE). The transducer generates a focused push pulse to create shear waves, then tracks their propagation with ultrafast plane-wave imaging (up to 10,000 fps). Place the ROI in a region free of large vessels and interfaces. Patient should hold breath for liver measurements. Calibrate with an elasticity phantom.
Common Reconstruction Algorithms
- Time-to-peak shear-wave arrival estimation
- Phase-gradient shear-wave speed inversion
- 2-D shear-wave elastography mapping (real-time SWE)
- Transient elastography (FibroScan 1-D measurement)
- Deep-learning elasticity estimation from B-mode + SWE data
Common Mistakes
- Pre-compression by pressing transducer too hard, artifactually increasing stiffness
- Measuring in the near-field where push pulse is unreliable
- Not having patient hold breath for liver measurements (respiratory motion invalidates SWE)
- Placing ROI near large vessels or liver capsule causing boundary artifacts
- Not waiting for the measurement to stabilize (IQR/median >30 % indicates unreliable data)
How to Avoid Mistakes
- Apply light transducer pressure with coupling gel; avoid compressing tissue
- Place measurement ROI at 1.5-2 cm depth in liver; avoid the near-field zone
- Instruct patient to suspend breathing calmly during each SWE measurement
- Avoid ROI placement near vessels, liver edges, or ribs
- Acquire ≥10 valid measurements and check IQR/median <30 % per EFSUMB guidelines
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but elastography measures tissue displacement/strain from mechanical wave propagation — output includes displacement maps at multiple time points
- Elastography estimates tissue stiffness (Young's modulus) from shear wave speed, which requires tracking mechanical wave propagation through tissue — the widefield Gaussian blur has no connection to mechanical wave physics
How to Correct the Mismatch
- Use the elastography operator that models mechanical excitation (acoustic radiation force or external vibration) and tracks the resulting tissue displacement using ultrasound or MRI phase encoding
- Estimate shear wave speed from displacement propagation, then compute tissue stiffness: E = 3*rho*c_s^2, using the correct wave propagation and displacement tracking forward model
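The stiffness relation above reduces to two lines; the shear-wave speeds here are illustrative ballpark values for soft versus fibrotic liver, not diagnostic thresholds:

```python
rho = 1000.0                                    # tissue density [kg/m^3]
speeds = {"soft": 1.2, "stiff": 2.5}            # shear-wave speeds [m/s] (illustrative)
# E = 3*mu = 3*rho*c_s^2, reported in kPa as on clinical systems
E_kPa = {k: 3 * rho * c**2 / 1e3 for k, c in speeds.items()}
```

The quadratic dependence on c_s means a ~2x speed change maps to a ~4x stiffness change, which is why small speed-measurement errors (e.g., from pre-compression) bias stiffness strongly.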
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Bercoff et al., 'Supersonic shear imaging: a new technique for soft tissue elasticity mapping', IEEE TUFFC 51, 396-409 (2004)
- Barr et al., 'Elastography assessment of liver fibrosis', Radiology 276, 845-861 (2015)
Canonical Datasets
- Clinical SWE liver fibrosis benchmark data
Shearography
Shearography
Single Photon Emission Computed Tomography
SPECT images the 3D distribution of a gamma-emitting radiotracer (e.g. 99mTc-sestamibi) by detecting single photons with rotating gamma cameras equipped with parallel-hole collimators. The collimator creates a projection of the activity distribution, and multiple angles enable tomographic reconstruction. The forward model includes collimator response (depth-dependent blurring), photon attenuation, and scatter. Reconstruction uses OSEM with corrections for attenuation (AC), scatter (SC), and resolution recovery (RR).
Single Photon Emission Computed Tomography
Description
SPECT images the 3D distribution of a gamma-emitting radiotracer (e.g. 99mTc-sestamibi) by detecting single photons with rotating gamma cameras equipped with parallel-hole collimators. The collimator creates a projection of the activity distribution, and multiple angles enable tomographic reconstruction. The forward model includes collimator response (depth-dependent blurring), photon attenuation, and scatter. Reconstruction uses OSEM with corrections for attenuation (AC), scatter (SC), and resolution recovery (RR).
Principle
Single Photon Emission Computed Tomography detects single gamma-ray photons emitted by a radiotracer (⁹⁹ᵐTc, ¹²³I, ²⁰¹Tl) using a rotating gamma camera with a parallel-hole or pinhole collimator. The collimator provides directional sensitivity at the cost of low geometric efficiency (~0.01 %). Projections from multiple angles are reconstructed into 3-D activity maps.
How to Build the System
A dual-head gamma camera (e.g., Siemens Symbia, GE Discovery) with NaI(Tl) scintillator crystals (9.5 mm thick) and parallel-hole collimators rotates around the patient (typically 60-128 angular stops over 360°). For cardiac SPECT, use dedicated CZT-based cameras with pinhole or multi-pinhole collimators. Acquire in step-and-shoot or continuous rotation mode. Energy windows are set around the photopeak (e.g., 140 keV ± 10 % for ⁹⁹ᵐTc).
Common Reconstruction Algorithms
- FBP with ramp-Butterworth filter
- OSEM with attenuation and scatter correction
- Resolution recovery (collimator-detector response modeling in OSEM)
- CT-based attenuation correction (SPECT/CT)
- Deep-learning SPECT reconstruction (dose reduction, resolution enhancement)
Common Mistakes
- Insufficient count statistics causing noisy, unreliable reconstructions
- Not correcting for depth-dependent collimator blur (resolution degrades with distance)
- Attenuation artifacts in uncorrected SPECT (false defects in myocardial perfusion)
- Patient motion during the long SPECT acquisition (15-30 minutes)
- Incorrect energy window or scatter window setup leading to poor image quality
How to Avoid Mistakes
- Ensure adequate injected dose and acquisition time for sufficient count statistics
- Use resolution recovery (distance-dependent PSF modeling) in iterative reconstruction
- Apply CT-based attenuation correction; verify CT-SPECT registration
- Use motion detection and correction algorithms; shorter acquisitions with CZT cameras
- Verify energy window settings match the radionuclide photopeak and scatter windows
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred (64,64) image, but SPECT acquires projections of shape (n_angles, n_detectors) using a rotating gamma camera with collimator — output shape (32,64) vs (64,64)
- SPECT measurement involves collimated gamma-ray detection with depth-dependent spatial resolution (the collimator PSF broadens with distance) — the widefield spatially-invariant Gaussian blur cannot model this depth-dependent response
How to Correct the Mismatch
- Use the SPECT operator that models collimated gamma-ray projection with distance-dependent resolution: y(theta,s) = integral of (h(d) * f) along projection rays for each angle
- Reconstruct using OSEM with depth-dependent collimator-detector response modeling and attenuation correction (Chang method or CT-based mu-map)
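The distance-dependent collimator response can be sketched for one projection angle: each image row is blurred with a Gaussian whose width grows with its distance from the collimator before being summed onto the detector. PSF widths and geometry below are illustrative, and attenuation is omitted:

```python
import numpy as np

def project_one_angle(f, sigma0=0.5, slope=0.05):
    """Sum image rows toward the detector, blurring each row with a Gaussian
    whose width grows linearly with its distance d from the collimator."""
    cols = np.arange(f.shape[1])
    det = np.zeros(f.shape[1])
    for d, row in enumerate(f):
        sigma = sigma0 + slope * d                              # depth-dependent PSF width
        kern = np.exp(-0.5 * ((cols - cols.mean()) / sigma) ** 2)
        det += np.convolve(row, kern / kern.sum(), mode="same")
    return det

near = np.zeros((64, 64)); near[5, 32] = 1.0    # point source close to the collimator
far = np.zeros((64, 64)); far[60, 32] = 1.0     # point source far from the collimator
p_near, p_far = project_one_angle(near), project_one_angle(far)
# the far source projects to a broader, lower peak
```

Modeling this same depth-dependent kernel inside the OSEM system matrix is what the resolution-recovery bullet above refers to.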
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Hudson & Larkin, 'Accelerated image reconstruction using ordered subsets of projection data (OSEM)', IEEE TMI 13, 601-609 (1994)
Canonical Datasets
- Clinical SPECT benchmark collections
Single-Pixel Camera
The single-pixel camera reconstructs a 2D image from scalar intensity measurements acquired by a photodiode after spatially modulating the scene with known patterns on a DMD. Each measurement y_i is the inner product of the scene with a pattern, giving y = Phi*x + n. Compressed sensing theory guarantees recovery from M << N measurements if the scene is sparse. The single detector can operate at wavelengths where array detectors are unavailable (SWIR, THz). Reconstruction uses FISTA with L1/TV penalties or Plug-and-Play methods.
Single-Pixel Camera
Description
The single-pixel camera reconstructs a 2D image from scalar intensity measurements acquired by a photodiode after spatially modulating the scene with known patterns on a DMD. Each measurement y_i is the inner product of the scene with a pattern, giving y = Phi*x + n. Compressed sensing theory guarantees recovery from M << N measurements if the scene is sparse. The single detector can operate at wavelengths where array detectors are unavailable (SWIR, THz). Reconstruction uses FISTA with L1/TV penalties or Plug-and-Play methods.
Principle
A single-pixel camera uses a spatial light modulator (DMD) to project a sequence of binary or grayscale patterns onto the scene. Each pattern acts as a pixel-wise mask on the scene, and a single bucket detector (photodiode or PMT) measures the total transmitted light for each pattern, producing one scalar measurement per pattern. Compressive sensing recovers the image from far fewer measurements than Nyquist by exploiting sparsity in a transform domain.
How to Build the System
Place a DMD (e.g., Texas Instruments DLP LightCrafter) at the image plane of a relay lens. Focus the scene onto the DMD. After the DMD, collect all reflected light onto a single photodetector (avalanche photodiode for low light, or silicon photodiode for visible). Display Hadamard, random, or optimized patterns at 10-22 kHz DMD rate. Synchronize pattern display with detector readout.
Common Reconstruction Algorithms
- Basis pursuit / L1 minimization (LASSO)
- Orthogonal matching pursuit (OMP)
- Total-variation minimization (TV-CS)
- TVAL3 (TV with augmented Lagrangian and alternating direction)
- Deep compressive sensing networks (ReconNet, CSNet)
Common Mistakes
- Pattern-detector timing mismatch causing wrong measurement-to-pattern association
- DMD diffraction effects not accounted for at oblique illumination angles
- Insufficient measurements for the scene complexity (under-sampling ratio too aggressive)
- Analog-to-digital converter resolution too low for the dynamic range of measurements
- Not calibrating detector linearity and dark current drift during long acquisitions
How to Avoid Mistakes
- Hardware-trigger the detector acquisition from the DMD synchronization signal
- Calibrate the effective pattern at the sample plane (not just the DMD command pattern)
- Start with 25-50 % measurement ratio for natural scenes; reduce only if sparsity allows
- Use 16-bit or higher ADC; verify linearity with a calibrated light source
- Measure dark frames periodically and subtract; maintain stable detector temperature
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but single-pixel camera acquires a 1D vector of M scalar measurements (M << N pixels) via structured illumination patterns and a single photodetector — output shape (M,) vs (64,64)
- Each SPC measurement is an inner product of the scene with a known pattern (y_i = <phi_i, x>), capturing compressed information — the widefield blur produces N^2 pixels with no compression, making compressive reconstruction algorithms incompatible
How to Correct the Mismatch
- Use the SPC operator that applies the sensing matrix Phi (Hadamard, random, or learned patterns): y = Phi * x, where y has far fewer entries than the image has pixels
- Reconstruct using compressive sensing algorithms (ISTA-Net, basis pursuit, total variation) that exploit sparsity to recover the N^2-pixel image from M << N^2 measurements
Experimental Setup — Signal Chain
Experimental Setup — Details
Benchmark Variants
Key References
- Duarte et al., 'Single-pixel imaging via compressive sampling', IEEE Signal Processing Magazine 25, 83-91 (2008)
- Edgar et al., 'Principles and prospects for single-pixel imaging', Nature Photonics 13, 13-20 (2019)
Canonical Datasets
- Set11 (11 standard test images)
- BSD68 (Martin et al., ICCV 2001)
Small-Angle X-ray Scattering (SAXS)
Small-Angle X-ray Scattering (SAXS)
Solar EUV/X-ray Imaging
Solar EUV/X-ray Imaging
Sonar Imaging
Side-scan sonar maps the seabed by transmitting acoustic pulses perpendicular to the survey vessel's track and recording the backscattered energy as a function of time (range). The along-track resolution is determined by the beam width, while the across-track resolution comes from the pulse length. The sonar image is a 2D acoustic backscatter map where intensity encodes seabed roughness, composition, and the presence of objects. Acoustic shadows behind elevated objects provide height information. Challenges include multipath reflections, variable sound speed profile, and non-uniform ensonification.
Sonar Imaging
Description
Side-scan sonar maps the seabed by transmitting acoustic pulses perpendicular to the survey vessel's track and recording the backscattered energy as a function of time (range). The along-track resolution is determined by the beam width, while the across-track resolution comes from the pulse length. The sonar image is a 2D acoustic backscatter map where intensity encodes seabed roughness, composition, and the presence of objects. Acoustic shadows behind elevated objects provide height information. Challenges include multipath reflections, variable sound speed profile, and non-uniform ensonification.
Principle
Sonar imaging uses acoustic waves (typically 50 kHz to 1 MHz) to image underwater scenes. Active sonar transmits a sound pulse and records the echoes from the seabed, objects, or water column. The propagation speed in water (~1500 m/s, varying with temperature, salinity, and pressure) determines the time-to-distance relationship. Side-scan sonar and multibeam bathymetry produce 2-D and 3-D maps of the underwater environment.
How to Build the System
For side-scan sonar: mount a towfish with two transducer arrays (port and starboard) that ensonify a swath perpendicular to the survey track. For multibeam: mount a hull-mounted array (e.g., Kongsberg EM2040, 200-400 kHz). Sound velocity profiler (SVP) measurements are essential for ray-tracing corrections. Integrate with GNSS positioning and motion reference unit (MRU) for heave, pitch, and roll compensation.
Common Reconstruction Algorithms
- Beamforming (delay-and-sum for multibeam sonar)
- Synthetic aperture sonar (SAS) processing for enhanced azimuth resolution
- Bottom detection and bathymetric surface extraction
- Acoustic backscatter classification for seabed characterization
- Deep-learning object detection for mine countermeasures or marine archaeology
Common Mistakes
- Incorrect sound velocity profile causing depth and position errors
- Multipath reflections (surface bounce, bottom bounce) creating ghost targets
- Nadir gap (directly beneath the sonar) with no acoustic coverage
- Motion artifacts from ship heave/pitch/roll not compensated
- Side-lobe artifacts creating false targets near strong reflectors
How to Avoid Mistakes
- Measure SVP at the survey site; update periodically during long surveys
- Use multiple-return filtering and angle-based discrimination to remove multipath
- Overlap adjacent swaths to fill the nadir gap; use a vertical beam sounder
- Apply real-time MRU data for heave, pitch, and roll correction of depth measurements
- Use adaptive beamforming (Capon/MVDR) to suppress side-lobe responses
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but sonar acquires 1D time-domain acoustic echo signals per beam — output shape reflects beamformed acoustic returns, not a spatial image
- Sonar measurement involves acoustic wave propagation in water (c~1500 m/s, varying with temperature/salinity/pressure) with range-dependent attenuation and multipath — the optical-domain widefield blur has no connection to underwater acoustics
How to Correct the Mismatch
- Use the sonar operator that models acoustic pulse transmission, seabed/target reflection, and receive beamforming: time-of-arrival encodes range, beam angle encodes bearing
- Form sonar images using beamforming (delay-and-sum), SAS (synthetic aperture sonar) processing, or bathymetric extraction algorithms that require correct acoustic echo data format
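As a minimal illustration of the delay-and-sum step, the narrowband sketch below steers a half-wavelength linear array across candidate bearings and localizes a single far-field echo; the array geometry, frequency, and single-snapshot plane-wave model are illustrative assumptions, not a full broadband sonar chain.

```python
import numpy as np

c, f0 = 1500.0, 100e3                       # sound speed (m/s), operating frequency (Hz)
n_elem = 32
d = (c / f0) / 2                            # half-wavelength element spacing
pos = np.arange(n_elem) * d                 # linear receive array

theta_true = np.deg2rad(20.0)               # bearing of one far-field echo
snapshot = np.exp(-2j * np.pi * f0 * pos * np.sin(theta_true) / c)

# Delay-and-sum: steer to each candidate bearing and sum coherently
angles = np.deg2rad(np.linspace(-90.0, 90.0, 721))
steer = np.exp(2j * np.pi * f0 * pos[None, :] * np.sin(angles)[:, None] / c)
power = np.abs(steer @ snapshot) ** 2 / n_elem**2

bearing = np.rad2deg(angles[np.argmax(power)])
print(round(bearing, 2))                    # beam power peaks at ~20 degrees
```

Range would come from the echo time-of-arrival in the same framework; here only the bearing estimate is shown.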
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Blondel, 'The Handbook of Sidescan Sonar', Springer (2009)
Canonical Datasets
- UATD underwater acoustic target detection dataset
- S3Simulator synthetic sonar (2024)
SPC-Block
SPC-Block
SPC-Kronecker
SPC-Kronecker
SPECT/CT
SPECT/CT
Spectral CT
Spectral CT
Spinning Disk Confocal Microscopy
Spinning Disk Confocal Microscopy
STED Microscopy
Stimulated emission depletion (STED) microscopy breaks the diffraction limit by overlaying the excitation focus with a doughnut-shaped depletion beam that forces fluorophores at the periphery back to the ground state via stimulated emission, effectively shrinking the fluorescent spot to 50 nm or below. The effective PSF width scales as d ~ lambda/(2*NA*sqrt(1 + I/I_s)) where I is the depletion intensity and I_s is the saturation intensity. Primary challenges include high depletion laser power causing photobleaching, and the photon-limited signal from the confined volume.
STED Microscopy
Description
Stimulated emission depletion (STED) microscopy breaks the diffraction limit by overlaying the excitation focus with a doughnut-shaped depletion beam that forces fluorophores at the periphery back to the ground state via stimulated emission, effectively shrinking the fluorescent spot to 50 nm or below. The effective PSF width scales as d ~ lambda/(2*NA*sqrt(1 + I/I_s)) where I is the depletion intensity and I_s is the saturation intensity. Primary challenges include high depletion laser power causing photobleaching, and the photon-limited signal from the confined volume.
Principle
Stimulated Emission Depletion microscopy breaks the diffraction limit by using a donut-shaped depletion beam to force fluorophores at the periphery of the excitation spot back to the ground state via stimulated emission. Only fluorophores at the very center of the donut emit spontaneously, shrinking the effective PSF to 30-70 nm lateral resolution depending on depletion power.
How to Build the System
Combine an excitation laser (e.g., 640 nm pulsed) with a co-aligned depletion laser (775 nm pulsed, ~1 ns) that passes through a vortex phase plate to create the donut. Use a high-NA objective (100x 1.4 NA oil). Time-gate detection (1-6 ns after excitation pulse) to reject depletion photon leakage. Single-photon counting detectors (APDs or hybrid PMTs) are essential. Align the donut null precisely at the excitation center.
Common Reconstruction Algorithms
- Richardson-Lucy deconvolution with STED PSF
- Wiener deconvolution with known STED PSF
- Deep-learning restoration (content-aware STED denoising)
- Linear unmixing for multi-color STED
- Time-gated STED (g-STED) background subtraction
Common Mistakes
- Misaligned donut null causing asymmetric PSF and resolution loss
- Excessive depletion power causing photobleaching of organic dyes
- Depletion laser leaking into fluorescence detection channel
- Insufficient time-gating, recording stimulated emission as signal
- Using fluorophores with poor STED compatibility (low stimulated-emission cross-section)
How to Avoid Mistakes
- Regularly check and optimize donut alignment using gold nanoparticle scattering
- Use STED-optimized dyes (ATTO647N, SiR, Abberior STAR) and minimize power
- Install proper spectral filters and use time-gating to reject depletion photons
- Apply 1-6 ns detection gate synchronized with the pulsed excitation
- Choose fluorophores specifically designed for STED with high photostability
Forward-Model Mismatch Cases
- The widefield fallback uses a diffraction-limited PSF (sigma=2.0, ~250 nm resolution), but STED achieves 30-70 nm resolution by shrinking the effective PSF with the depletion donut — the fallback is 4-8x wider
- The STED effective PSF depends on depletion beam power (d_eff = d_confocal / sqrt(1 + I_STED/I_sat)), making it fundamentally different from any fixed Gaussian — the fallback cannot model power-dependent resolution
How to Correct the Mismatch
- Use the STED operator with the effective PSF that accounts for depletion beam intensity: PSF_eff has FWHM = lambda/(2*NA*sqrt(1 + I/I_sat)), typically 30-70 nm
- Include the donut-shaped depletion profile and saturation intensity in the forward model; deconvolution with the correct sub-diffraction STED PSF recovers true super-resolution information
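The power-dependent resolution formula above is easy to evaluate numerically. In the sketch below, `sted_fwhm` computes d = lambda/(2*NA*sqrt(1 + I/I_sat)) for a few saturation factors; the 640 nm wavelength and 1.4 NA are taken from the build description, and the saturation factors are illustrative.

```python
import numpy as np

def sted_fwhm(wavelength_nm, na, sat_factor):
    """Effective STED PSF width d = lambda / (2*NA*sqrt(1 + I/I_sat))."""
    return wavelength_nm / (2.0 * na * np.sqrt(1.0 + sat_factor))

# Diffraction limit (I = 0) versus increasing depletion power, 640 nm / 1.4 NA
for zeta in (0, 10, 50, 100):
    print(zeta, round(sted_fwhm(640.0, 1.4, zeta), 1))
```

Saturation factors of roughly 10-100 take the ~230 nm confocal spot down into the 20-70 nm STED regime quoted above.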
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Hell & Wichmann, 'Breaking the diffraction resolution limit by stimulated emission', Optics Letters 19, 780-782 (1994)
- Vicidomini et al., 'STED nanoscopy', Annual Review of Biophysics 47, 377-404 (2018)
Canonical Datasets
- BioSR STED paired dataset (Zhang et al., Nature Methods 2023)
- Abberior STED application note sample images
Stellar Coronagraphy
Stellar Coronagraphy
STEM-EDX Elemental Mapping
STEM-EDX Elemental Mapping
Stimulated Raman Scattering (SRS) Microscopy
Stimulated Raman Scattering (SRS) Microscopy
Streak Camera Imaging
Streak Camera Imaging
Structured Illumination Microscopy
Structured illumination microscopy (SIM) achieves ~2x lateral resolution improvement by illuminating the sample with sinusoidal patterns at multiple orientations and phases. Frequency mixing between the illumination pattern and sample structure shifts high-frequency information into the microscope passband. Reconstruction separates and reassembles frequency components via Wiener-SIM or deep-learning SIM. The forward model is y_k = PSF ** (I_k * x) + n for each pattern k.
Structured Illumination Microscopy
Description
Structured illumination microscopy (SIM) achieves ~2x lateral resolution improvement by illuminating the sample with sinusoidal patterns at multiple orientations and phases. Frequency mixing between the illumination pattern and sample structure shifts high-frequency information into the microscope passband. Reconstruction separates and reassembles frequency components via Wiener-SIM or deep-learning SIM. The forward model is y_k = PSF ** (I_k * x) + n for each pattern k.
Principle
Structured Illumination Microscopy projects a known sinusoidal pattern onto the specimen, shifting high-frequency spatial information into the observable passband via Moiré interference. Multiple images (typically 9-15) are acquired at different pattern orientations and phases, then computationally recombined in Fourier space to achieve ~2× lateral resolution improvement beyond the diffraction limit.
How to Build the System
Install a SIM-capable microscope (Nikon N-SIM, Zeiss Elyra 7, or custom with SLM/DMD). Use a high-NA objective (100x 1.49 NA TIRF) for maximum frequency extension. The illumination grating (SLM or fiber interference) generates the sinusoidal pattern. Acquire 3 orientations × 3-5 phases. A fast sCMOS camera captures all raw frames in ~100-500 ms for 2D-SIM. Careful alignment of the pattern contrast is critical.
Common Reconstruction Algorithms
- Gustafsson/Heintzmann frequency-domain SIM reconstruction
- Open-source fairSIM (ImageJ plugin)
- Wiener-filtered order separation and recombination
- Deep-learning SIM (ML-SIM, reconstruction from fewer frames)
- Hessian-SIM for live-cell with reduced artifacts
Common Mistakes
- Insufficient pattern contrast causing weak Moiré fringes and honeycomb artifacts
- Misaligned illumination orders producing stripe artifacts in the reconstruction
- Over-processing (too aggressive Wiener parameter) creating ringing artifacts
- Using objectives with insufficient NA for the desired resolution gain
- Photobleaching between pattern acquisitions causing intensity inconsistency
How to Avoid Mistakes
- Verify pattern contrast >0.5 on a thin uniform fluorescent layer before experiments
- Calibrate illumination pattern positions/angles using SIMcheck (ImageJ plugin)
- Tune the Wiener parameter conservatively; use SIMcheck to assess reconstruction quality
- Use 1.49 NA objectives for maximum resolution; 1.40 NA limits SIM performance
- Minimize total acquisition time; use fast cameras and short exposures
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) blurred image, but SIM requires 9-15 raw frames (3 orientations x 3-5 phases) with structured illumination patterns — output shape (64,64,9) vs (64,64)
- Without the sinusoidal illumination pattern encoding, the high-frequency information that SIM moves into the passband via Moiré interference is completely absent — no super-resolution is possible
How to Correct the Mismatch
- Use the SIM operator that generates multiple pattern-modulated images: y_k = (1 + m*cos(k_i*r + phi_j)) * (PSF ** x) for each orientation i and phase j
- Reconstruct using Fourier-space order separation and recombination (Gustafsson method) or deep-learning SIM, which require the correct multi-frame structured illumination forward model
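A minimal simulation of this multi-frame forward model, assuming a Gaussian OTF and illustrative pattern frequency and modulation depth, shows how the 9-frame raw stack of shape (64,64,9) arises:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = rng.random((n, n))                       # toy specimen

# Diffraction-limited blur applied in Fourier space (Gaussian PSF, sigma in px)
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
sigma = 2.0
otf = np.exp(-2 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
blur = lambda img: np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

# SIM forward model: y_k = PSF ** (I_k * x) for 3 orientations x 3 phases
yy, xx = np.mgrid[0:n, 0:n]
k0, m = 2 * np.pi * 0.15, 0.8                # pattern frequency, modulation depth
frames = []
for theta in (0, np.pi / 3, 2 * np.pi / 3):
    kx, ky = k0 * np.cos(theta), k0 * np.sin(theta)
    for phi in (0, 2 * np.pi / 3, 4 * np.pi / 3):
        illum = 1 + m * np.cos(kx * xx + ky * yy + phi)
        frames.append(blur(illum * x))
raw = np.stack(frames, axis=-1)
print(raw.shape)                             # (64, 64, 9) raw SIM stack
```

Reconstruction (Gustafsson order separation) then solves for the shifted frequency components encoded across these nine frames.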
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Gustafsson, 'Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy', J. Microsc. 198, 82-87 (2000)
- Müller et al., 'Open-source image reconstruction of super-resolution structured illumination microscopy data (fairSIM)', Nature Comms 7, 10980 (2016)
Canonical Datasets
- BioSR SIM paired dataset (Zhang et al., Nature Methods 2023)
- fairSIM test datasets (Hagen et al.)
Structured-Light Depth Camera
Structured-light depth cameras project a known pattern (IR dot pattern, fringe, or binary code) onto the scene and infer depth from the pattern deformation observed by a camera offset from the projector. For coded structured light (e.g., Kinect v1), depth is computed via triangulation from the correspondence between projected and observed pattern features. For phase-shifting methods, multiple fringe patterns encode depth as the local phase. Primary challenges include occlusion in the projector-camera baseline, ambient light interference, and depth discontinuity errors.
Structured-Light Depth Camera
Description
Structured-light depth cameras project a known pattern (IR dot pattern, fringe, or binary code) onto the scene and infer depth from the pattern deformation observed by a camera offset from the projector. For coded structured light (e.g., Kinect v1), depth is computed via triangulation from the correspondence between projected and observed pattern features. For phase-shifting methods, multiple fringe patterns encode depth as the local phase. Primary challenges include occlusion in the projector-camera baseline, ambient light interference, and depth discontinuity errors.
Principle
Structured-light depth sensing projects a known pattern (stripes, dots, coded binary patterns) onto the scene and observes the pattern deformation with a camera from a different viewpoint. The displacement (disparity) of each pattern element between projected and observed positions encodes the surface depth via triangulation. Dense depth maps are obtained by identifying pattern correspondences across the scene.
How to Build the System
Arrange a projector (DLP or laser dot projector) and camera with a known baseline separation (5-25 cm) and convergent geometry. Calibrate the projector-camera system (intrinsics and extrinsics) using a planar calibration target. For temporal coding (Gray code), project multiple patterns sequentially. For spatial coding (single-shot, e.g., Apple FaceID dot projector), use a diffractive optical element to generate a unique dot pattern.
Common Reconstruction Algorithms
- Gray code + phase shifting (sequential multi-pattern decoding)
- Single-shot coded pattern matching (speckle or pseudo-random dot decoding)
- Phase unwrapping for sinusoidal fringe projection
- Stereo matching applied to textured scenes (active stereo)
- Deep-learning depth estimation from structured-light patterns
Common Mistakes
- Ambient light washing out the projected pattern, losing depth information
- Specular (shiny) surfaces reflecting the projector into the camera, causing erroneous depth
- Occlusion zones where the projector illuminates but the camera cannot see (shadowed regions)
- Insufficient projector resolution limiting the achievable depth precision
- Color/reflectance variations in the scene altering perceived pattern intensity
How to Avoid Mistakes
- Use NIR projector + camera with ambient-light rejection filter
- Apply polarization filtering, or spray specular surfaces with a matte coating during calibration
- Add a second camera or projector to reduce occlusion zones
- Use high-resolution projectors (1080p+) and fine patterns for sub-mm precision
- Use binary or phase-shifting patterns that are robust to reflectance variations
Forward-Model Mismatch Cases
- The widefield fallback applies spatial blur, but structured-light depth sensing projects known patterns and measures their deformation via triangulation — the depth-encoding pattern correspondence between projector and camera is absent
- Structured light extracts depth from disparity between projected and observed pattern positions (d = f*B/disparity) — the widefield blur produces no disparity information and cannot encode surface depth
How to Correct the Mismatch
- Use the structured-light operator that models pattern projection (Gray code, sinusoidal fringe, or speckle) and camera observation from a different viewpoint: depth is encoded in pattern deformation due to surface geometry
- Extract depth maps using pattern decoding (Gray code → correspondence → triangulation) or phase unwrapping (sinusoidal fringe → depth) with calibrated projector-camera geometry
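The fringe-decoding step can be sketched as follows. The snippet recovers the wrapped phase from four phase-shifted intensity samples and then triangulates depth via z = f*B/d; the focal length, baseline, and phase-to-disparity mapping are hypothetical values for illustration, not a calibrated projector-camera model.

```python
import numpy as np

# Four-step phase shifting: I_k = A + B*cos(phi + k*pi/2), k = 0..3
phi_true = np.linspace(0.2, 2.8, 5)              # wrapped fringe phase per pixel
A, B = 0.5, 0.4
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Recover the wrapped phase from the quadrature differences
phi_hat = np.arctan2(I[3] - I[1], I[0] - I[2])   # equals phi_true (wrapped)

# Hypothetical calibration: convert decoded phase to disparity, then to depth
f_px, baseline = 580.0, 0.075                    # assumed focal length (px), baseline (m)
disparity_px = 30.0 + 10.0 * phi_hat / np.pi     # toy phase-to-disparity mapping
depth_m = f_px * baseline / disparity_px         # z = f*B/d, rectified geometry
print(np.allclose(phi_hat, phi_true), np.round(depth_m, 2))
```

In a real system the phase-to-disparity mapping comes from the calibrated projector-camera extrinsics, and phase unwrapping resolves the 2*pi ambiguity before triangulation.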
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Geng, 'Structured-light 3D surface imaging: a tutorial', Advances in Optics and Photonics 3, 128-160 (2011)
Canonical Datasets
- Middlebury stereo benchmark
- ETH3D multi-view stereo benchmark
Susceptibility-Weighted Imaging (SWI)
Susceptibility-Weighted Imaging (SWI)
Synthetic Aperture Radar
SAR synthesizes a large antenna aperture by combining coherent radar returns collected as the platform (satellite/aircraft) moves along its flight path. The azimuth resolution is achieved by coherent integration of the Doppler history, while range resolution comes from pulse compression (chirp). The forward model is a 2D convolution with the SAR impulse response in range and azimuth. SAR images exhibit speckle noise (multiplicative, fully developed) from coherent interference of distributed scatterers. Applications include Earth observation, terrain mapping, and interferometric displacement measurement.
Synthetic Aperture Radar
Description
SAR synthesizes a large antenna aperture by combining coherent radar returns collected as the platform (satellite/aircraft) moves along its flight path. The azimuth resolution is achieved by coherent integration of the Doppler history, while range resolution comes from pulse compression (chirp). The forward model is a 2D convolution with the SAR impulse response in range and azimuth. SAR images exhibit speckle noise (multiplicative, fully developed) from coherent interference of distributed scatterers. Applications include Earth observation, terrain mapping, and interferometric displacement measurement.
Principle
Synthetic Aperture Radar achieves fine azimuth resolution by coherently processing radar echoes collected as the antenna moves along its flight path, synthesizing an aperture much larger than the physical antenna. The SAR signal processor applies matched filtering (pulse compression) in both range and azimuth to form a high-resolution complex image. SAR operates through clouds, at night, and in all weather conditions.
How to Build the System
Mount a microwave transmitter/receiver (C-band 5.4 GHz, L-band 1.3 GHz, or X-band 9.6 GHz) on a satellite (Sentinel-1, RADARSAT) or aircraft. The antenna illuminates a strip on the ground as the platform moves. Record the complex (I/Q) echo data with precise pulse timing and platform position/velocity from GNSS/INS. Range resolution is set by pulse bandwidth (1-200 MHz); azimuth resolution equals L_ant/2 (half the antenna length).
Common Reconstruction Algorithms
- Range-Doppler algorithm (range compression + azimuth compression)
- Chirp scaling algorithm for wide-swath SAR
- Omega-K (wavenumber domain) algorithm for high-resolution spotlight SAR
- InSAR (Interferometric SAR) for DEM generation and deformation mapping
- PolSAR decomposition (Cloude-Pottier, Freeman-Durden) for land classification
Common Mistakes
- Incorrect motion compensation causing azimuth defocusing
- Range cell migration not properly corrected for squinted geometries
- Phase errors from atmospheric delay (troposphere, ionosphere) in InSAR
- Ambiguities (range or azimuth) from incorrect PRF selection
- Speckle noise mistaken for real features in SAR imagery
How to Avoid Mistakes
- Use precise INS/GNSS data for autofocus and motion compensation
- Apply appropriate RCMC (Range Cell Migration Correction) for the imaging geometry
- Use atmospheric phase screens (from weather models or GNSS delays) for InSAR correction
- Design PRF to avoid range and azimuth ambiguity constraints for the swath geometry
- Apply multi-look or speckle filtering (Lee, refined-Lee) before interpretation
Forward-Model Mismatch Cases
- The widefield fallback produces a real-valued blurred image, but SAR acquires complex-valued (I/Q) radar echoes that require coherent pulse compression in range and azimuth — the phase information essential for InSAR and coherent processing is lost
- SAR image formation requires matched filtering with the transmitted chirp waveform and Doppler history — the widefield spatial blur cannot model microwave scattering, range-Doppler processing, or speckle statistics
How to Correct the Mismatch
- Use the SAR operator that models coherent radar echo formation: each pixel's complex return includes amplitude (backscatter cross-section) and phase (range + Doppler history), requiring range and azimuth compression
- Process using range-Doppler, chirp scaling, or omega-K algorithms for image formation; preserve complex data for InSAR, PolSAR, and coherence-based applications
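Range compression, the first stage of all three image-formation algorithms above, reduces to matched filtering with the transmitted chirp. The sketch below compresses the echo of a single point target via FFT-based correlation; the sample rate, pulse parameters, and target delay are illustrative.

```python
import numpy as np

fs, T, Bw = 100e6, 10e-6, 30e6           # sample rate, pulse length, chirp bandwidth
t = np.arange(int(T * fs)) / fs
Kr = Bw / T                              # chirp rate
chirp = np.exp(1j * np.pi * Kr * t**2)   # transmitted LFM pulse

# Raw echo: the chirp delayed by a point target's round-trip time
delay = int(2e-6 * fs)
echo = np.zeros(4096, complex)
echo[delay:delay + chirp.size] = chirp

# Range compression = correlation with the transmitted chirp, done via FFT
H = np.conj(np.fft.fft(chirp, echo.size))
compressed = np.fft.ifft(np.fft.fft(echo) * H)
peak = int(np.argmax(np.abs(compressed)))
print(peak == delay)                     # compressed peak lands at the target delay bin
```

Azimuth compression follows the same matched-filter pattern, with the Doppler history playing the role of the chirp.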
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Cumming & Wong, 'Digital Processing of Synthetic Aperture Radar Data', Artech House (2005)
- Torres et al., 'GMES Sentinel-1 mission', Remote Sensing of Environment 120, 9-24 (2012)
Canonical Datasets
- SEN12MS (Schmitt et al., multi-modal Sentinel-1/2)
- SpaceNet 6 (SAR building footprints)
Talbot-Lau X-ray Grating Interferometry
Talbot-Lau X-ray Grating Interferometry
Terahertz Imaging (THz)
Terahertz Imaging (THz)
Three-Photon Microscopy
Three-Photon Microscopy
Time-of-Flight Depth Camera
ToF cameras measure per-pixel depth by emitting modulated near-infrared light and measuring the phase delay of the reflected signal relative to the emitted signal. In amplitude-modulated continuous-wave (AMCW) ToF, the phase offset phi = 2*pi*f*2d/c encodes the round-trip distance 2d. Multiple modulation frequencies resolve depth ambiguity. Primary degradations include multi-path interference (MPI), motion blur, and systematic errors at depth discontinuities (flying pixels).
Time-of-Flight Depth Camera
Description
ToF cameras measure per-pixel depth by emitting modulated near-infrared light and measuring the phase delay of the reflected signal relative to the emitted signal. In amplitude-modulated continuous-wave (AMCW) ToF, the phase offset phi = 2*pi*f*2d/c encodes the round-trip distance 2d. Multiple modulation frequencies resolve depth ambiguity. Primary degradations include multi-path interference (MPI), motion blur, and systematic errors at depth discontinuities (flying pixels).
Principle
A Time-of-Flight depth camera measures the round-trip time of modulated light (typically near-infrared LEDs at 850 nm) reflected from the scene. The sensor measures the phase shift between emitted and received modulated signals at each pixel, which is proportional to the target distance: d = c·Δφ/(4π·f_mod). Typical modulation frequencies are 20-100 MHz, providing depth ranges of 0.5-10 meters with mm-cm precision.
How to Build the System
Use an integrated ToF camera module (e.g., Microsoft Azure Kinect DK, PMD CamBoard pico, Texas Instruments OPT8241). The module contains the NIR light source, modulation driver, and ToF sensor with per-pixel demodulation circuits. Mount rigidly and calibrate intrinsic parameters (lens distortion, depth offset) and phase-to-depth nonlinearities. For multi-camera setups, synchronize or frequency-multiplex to avoid interference.
Common Reconstruction Algorithms
- Four-phase demodulation for distance extraction
- Multi-frequency unwrapping for extended unambiguous range
- Flying-pixel filtering (mixed pixels at depth discontinuities)
- Multi-path interference correction
- Deep-learning depth denoising and completion
Common Mistakes
- Multi-path interference causing systematic depth errors in concave scenes
- Flying pixels at depth edges producing incorrect intermediate depth values
- Phase wrapping ambiguity when objects exceed the unambiguous range
- Interference from ambient NIR light (sunlight) degrading outdoor performance
- Systematic depth errors from non-ideal sensor response not calibrated out
How to Avoid Mistakes
- Use multi-path correction algorithms or multi-frequency modulation
- Apply flying-pixel detection and removal based on amplitude and neighbor consistency
- Use dual-frequency operation to extend the unambiguous range
- Use narrow-band optical filter and higher modulation power for outdoor use
- Perform per-pixel depth calibration with a known flat reference at multiple distances
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D intensity image, but ToF cameras measure depth via phase shift of modulated near-infrared light — the distance information (d = c*dphi/(4*pi*f_mod)) is entirely absent from the blurred image
- ToF measurement involves demodulation of the reflected modulated signal at each pixel, producing amplitude, phase, and confidence maps — the widefield intensity-only blur cannot produce depth or distinguish multi-path interference
How to Correct the Mismatch
- Use the ToF camera operator that models modulated illumination and per-pixel demodulation: four-phase sampling extracts the phase shift proportional to target distance at each pixel
- Apply phase-to-depth conversion, multi-path correction, and flying-pixel filtering using the correct modulation frequency, amplitude, and phase measurement model
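The four-phase demodulation chain can be sketched directly from this model. Assuming an ideal AMCW sensor (one common sign convention for the correlation samples), the snippet recovers the phase and converts it to depth with d = c*phi/(4*pi*f_mod):

```python
import numpy as np

c, f_mod = 3e8, 20e6                  # light speed (m/s), modulation frequency (Hz)
d_true = 2.5                          # target distance (m)
phi = 4 * np.pi * f_mod * d_true / c  # round-trip phase shift

# Ideal four-phase correlation samples C_k (amplitude A, constant offset)
A, off = 1.0, 0.2
C = [off + A * np.cos(phi - k * np.pi / 2) for k in range(4)]

# Demodulate: phase from the quadrature differences, then phase-to-depth
phi_hat = np.arctan2(C[1] - C[3], C[0] - C[2])
d_hat = c * np.mod(phi_hat, 2 * np.pi) / (4 * np.pi * f_mod)
print(round(d_hat, 3))                # prints 2.5; unambiguous range is c/(2*f_mod) = 7.5 m
```

Beyond 7.5 m the phase wraps, which is why the dual-frequency unwrapping listed above is needed for extended range.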
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Hansard et al., 'Time-of-Flight Cameras: Principles, Methods and Applications', Springer (2013)
Canonical Datasets
- NYU Depth V2 (Silberman et al.)
- KITTI depth benchmark (adapted)
TIRF Microscopy
Total internal reflection fluorescence (TIRF) microscopy selectively excites fluorophores within ~100-200 nm of the coverslip surface using the evanescent field generated when excitation light undergoes total internal reflection at the glass-sample interface. This provides exceptional axial selectivity for imaging membrane-associated events such as vesicle fusion and focal adhesions. The lateral image follows standard widefield PSF convolution but with near-zero out-of-focus background. Primary degradations include non-uniform evanescent field and interference fringes from coherent illumination.
TIRF Microscopy
Description
Total internal reflection fluorescence (TIRF) microscopy selectively excites fluorophores within ~100-200 nm of the coverslip surface using the evanescent field generated when excitation light undergoes total internal reflection at the glass-sample interface. This provides exceptional axial selectivity for imaging membrane-associated events such as vesicle fusion and focal adhesions. The lateral image follows standard widefield PSF convolution but with near-zero out-of-focus background. Primary degradations include non-uniform evanescent field and interference fringes from coherent illumination.
Principle
Total Internal Reflection Fluorescence microscopy creates an evanescent wave that penetrates only ~100-200 nm into the sample when the excitation beam is totally internally reflected at the glass-sample interface. This provides excellent optical sectioning of membrane-proximal events (vesicle fusion, protein dynamics at the plasma membrane) with very low background.
How to Build the System
Use a TIRF-capable objective (60-100x, 1.49 NA oil) on an inverted microscope. Launch the laser at the critical angle through the objective periphery (objective-type TIRF) or through a prism (prism-type TIRF). Verify total internal reflection by observing the evanescent field depth with a calibration sample. Cells must be plated on clean, high-RI coverslips (#1.5H, 170 μm).
Common Reconstruction Algorithms
- Single-particle tracking (SPT) algorithms
- Multi-angle TIRF for axial sectioning (variable penetration depth)
- Denoising (Gaussian filtering, wavelet, or deep-learning)
- Photobleaching step analysis for molecular counting
- Temporal median filtering for background subtraction
Common Mistakes
- Laser angle not precisely at TIR, partially exciting bulk fluorescence
- Dirty coverslips causing scattering and destroying evanescent field uniformity
- Cells not well-adhered to the coverslip surface, out of evanescent field range
- Using objectives with NA < 1.45, insufficient for TIR at aqueous interfaces
- Evanescent field depth not calibrated, making quantitative axial analysis unreliable
How to Avoid Mistakes
- Fine-tune the TIR angle while observing a known sample; verify exponential depth decay
- Clean coverslips rigorously (plasma cleaning or acid wash) before plating cells
- Use poly-L-lysine or fibronectin coating to ensure cells adhere to the coverslip
- Use 1.49 NA objectives; 1.45 NA is the minimum for aqueous TIR
- Calibrate evanescent field depth using fluorescent beads at known axial positions
Forward-Model Mismatch Cases
- The widefield fallback illuminates the entire sample depth, but TIRF uses an evanescent wave that penetrates only ~100-200 nm from the coverslip — the fallback includes fluorescence from hundreds of nanometers deeper, adding massive background
- The exponential axial intensity decay of the evanescent field (I(z) = I_0 * exp(-z/d), d~100 nm) is not modeled by the widefield fallback — quantitative axial information (membrane proximity) is lost
How to Correct the Mismatch
- Use the TIRF operator that models evanescent-wave excitation: only fluorophores within ~200 nm of the glass-sample interface contribute signal, with exponentially decaying excitation intensity
- Include the penetration depth d = lambda/(4*pi*sqrt(n1^2*sin^2(theta) - n2^2)) in the forward model; for multi-angle TIRF, model the depth-dependent excitation for each incidence angle
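The penetration-depth formula above can be evaluated directly. The sketch below, assuming a glass/water interface and 488 nm excitation as illustrative values, computes the critical angle and the evanescent depth at a supercritical angle:

```python
import numpy as np

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """Evanescent depth d = lambda / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2))."""
    s = n1**2 * np.sin(np.deg2rad(theta_deg))**2 - n2**2
    if s <= 0:
        raise ValueError("angle below the critical angle: no total internal reflection")
    return wavelength_nm / (4 * np.pi * np.sqrt(s))

# Glass (n1 = 1.515) / water (n2 = 1.33), 488 nm excitation
theta_c = np.rad2deg(np.arcsin(1.33 / 1.515))   # critical angle, ~61.4 deg
d_70 = penetration_depth(488.0, 1.515, 1.33, 70.0)
print(round(theta_c, 1), round(d_70, 1))        # depth shrinks as the angle increases
```

Sweeping the incidence angle in this function is exactly what multi-angle TIRF exploits for depth-resolved imaging.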
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Axelrod, 'Total internal reflection fluorescence microscopy in cell biology', Traffic 2, 764-774 (2001)
Canonical Datasets
- Cell Tracking Challenge TIRF sequences
- FPbase TIRF imaging examples
Transmission Electron Microscopy
TEM transmits a high-energy electron beam (80-300 keV) through an ultra-thin specimen (<100 nm), magnifying the exit wave with EM lenses. In HRTEM, the image records interference between direct and diffracted beams, convolved by the contrast transfer function (CTF). The CTF introduces oscillating contrast reversals modulated by defocus and spherical aberration. Reconstruction involves CTF correction and, for biological specimens, single-particle averaging.
Transmission Electron Microscopy
Description
TEM transmits a high-energy electron beam (80-300 keV) through an ultra-thin specimen (<100 nm), magnifying the exit wave with EM lenses. In HRTEM, the image records interference between direct and diffracted beams, convolved by the contrast transfer function (CTF). The CTF introduces oscillating contrast reversals modulated by defocus and spherical aberration. Reconstruction involves CTF correction and, for biological specimens, single-particle averaging.
Principle
Transmission Electron Microscopy transmits a high-energy electron beam (80-300 keV) through an ultra-thin specimen (<100 nm). Electrons interact with the sample via elastic scattering (diffraction contrast, phase contrast) and inelastic scattering (energy loss). The transmitted beam is magnified by electromagnetic lenses to form an image with atomic-level resolution (0.05-0.2 nm in aberration-corrected TEMs).
How to Build the System
Operate a TEM (e.g., JEOL JEM-2100, Thermo Fisher Talos/Titan) under high vacuum (< 10⁻⁵ Pa). Prepare ultra-thin specimens using ultramicrotomy (biological), focused ion beam (FIB) milling (materials), or electropolishing (metals). Load samples on 3 mm TEM grids (Cu or Mo). Align the beam, correct condenser and objective astigmatism, and set appropriate defocus for phase contrast imaging. Use direct-electron detectors for highest DQE.
Common Reconstruction Algorithms
- CTF correction (Contrast Transfer Function for phase contrast imaging)
- Single-particle analysis (cryo-EM: classification, 3-D reconstruction)
- Selected-area electron diffraction (SAED) pattern analysis
- HRTEM image simulation (multislice or Bloch wave)
- Deep-learning denoising for low-dose cryo-EM (Topaz, Warp, cryoSPARC)
Common Mistakes
- Specimen too thick, causing multiple scattering and loss of interpretable contrast
- Beam damage to organic or beam-sensitive materials from excessive electron dose
- Astigmatism and coma not corrected, degrading high-resolution images
- Not accounting for CTF effects when interpreting HRTEM images
- Contamination building up on the specimen under the beam (hydrocarbon deposition)
How to Avoid Mistakes
- Prepare specimens to <50 nm thickness; verify with EELS log-ratio thickness mapping
- Use low-dose protocols and cryo-cooling for beam-sensitive specimens
- Perform careful alignment including Zemlin tableau for Cs-corrected instruments
- Simulate TEM images with known structure and compare; always correct CTF in analysis
- Plasma-clean grids and specimens before loading; use a cryo-shield during imaging
Forward-Model Mismatch Cases
- The widefield fallback produces real-valued output, but TEM forms images from coherent electron wave transmission — the complex-valued exit wave (amplitude and phase from elastic scattering) is lost, destroying quantitative phase-contrast information
- TEM image contrast arises from coherent interference of scattered electron waves modulated by the contrast transfer function (CTF) — the widefield intensity-based Gaussian blur cannot model the oscillating CTF that produces Thon rings
How to Correct the Mismatch
- Use the TEM operator that models coherent electron imaging: the exit wave is convolved with the CTF (including defocus, spherical aberration Cs, and partial coherence), producing a complex-valued image wave whose squared modulus is recorded
- Reconstruct phase and amplitude using CTF correction (Wiener filtering in Fourier space), or through-focus series exit-wave reconstruction for aberration-corrected quantitative HRTEM
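A minimal sketch of the oscillating CTF described above (the 300 kV wavelength, Cs = 1 mm, and Scherzer defocus below are illustrative; defocus sign conventions vary between packages, and partial-coherence envelopes are omitted):

```python
import numpy as np

def ctf(k, wavelength, defocus, cs):
    """Phase CTF = sin(chi), chi(k) = pi*lambda*defocus*k^2 + (pi/2)*Cs*lambda^3*k^4.
    Here negative defocus denotes underfocus."""
    chi = np.pi * wavelength * defocus * k**2 + 0.5 * np.pi * cs * wavelength**3 * k**4
    return np.sin(chi)

wavelength = 1.97e-12                        # 300 kV electron wavelength [m]
cs = 1e-3                                    # spherical aberration [m]
scherzer = -1.2 * np.sqrt(cs * wavelength)   # extended Scherzer defocus, ~ -53 nm
k = np.linspace(0.0, 5e9, 512)               # spatial frequency [1/m]
h = ctf(k, wavelength, scherzer, cs)         # broad passband near -1
```

A Wiener-style CTF correction divides the image spectrum by `h` with regularization; phase flipping simply multiplies by `np.sign(h)`.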
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Williams & Carter, 'Transmission Electron Microscopy', Springer (2009)
- Haider et al., 'Electron microscopy image enhanced', Nature 392, 768 (1998)
Canonical Datasets
- EMPIAR (Electron Microscopy Public Image Archive)
- NCEM atomic-resolution HRTEM benchmarks
Two-Photon / Multiphoton Microscopy
Two-photon microscopy uses ultrashort pulsed near-infrared laser light (typically 700-1000 nm) to excite fluorophores via simultaneous absorption of two photons, providing intrinsic optical sectioning because excitation only occurs at the focal volume where photon density is sufficiently high. The longer excitation wavelength enables imaging depths of 500-1000 um in scattering tissue (e.g., brain), making it the standard for in vivo neuroscience. The point-spread function is effectively the square of the excitation PSF. Primary degradations include scattering-induced signal loss with depth and wavefront aberrations from tissue inhomogeneity.
Two-Photon / Multiphoton Microscopy
Description
Two-photon microscopy uses ultrashort pulsed near-infrared laser light (typically 700-1000 nm) to excite fluorophores via simultaneous absorption of two photons, providing intrinsic optical sectioning because excitation only occurs at the focal volume where photon density is sufficiently high. The longer excitation wavelength enables imaging depths of 500-1000 um in scattering tissue (e.g., brain), making it the standard for in vivo neuroscience. The point-spread function is effectively the square of the excitation PSF. Primary degradations include scattering-induced signal loss with depth and wavefront aberrations from tissue inhomogeneity.
Principle
Two-photon excitation uses a pulsed near-infrared laser so that two photons are absorbed simultaneously by a fluorophore, producing fluorescence equivalent to excitation by a single photon of half the wavelength. Because absorption depends on the square of intensity, fluorescence is generated only at the tight focus, providing intrinsic optical sectioning without a pinhole. Deep tissue penetration (up to ~1 mm) is achieved due to reduced scattering at NIR wavelengths.
How to Build the System
Install a mode-locked Ti:Sapphire laser (680-1080 nm, ~100 fs pulses, 80 MHz, Coherent Chameleon or Spectra-Physics InSight) on a laser-scanning microscope. Use a high-NA water-dipping objective (25x 1.05 NA or 20x 1.0 NA) for deep imaging. Non-descanned detectors (GaAsP PMTs) collect scattered fluorescence close to the objective for maximum efficiency. Add a Pockels cell for fast power modulation.
Common Reconstruction Algorithms
- Adaptive background subtraction for deep-tissue imaging
- Motion correction and image registration for in-vivo data
- Suite2p / CaImAn (calcium imaging segmentation and trace extraction)
- Deep-learning denoising (DeepInterpolation, Noise2Void)
- Attenuation compensation (exponential depth correction)
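The exponential depth correction in the last item can be sketched as follows (the 200 um scattering length, step size, and stack geometry are illustrative assumptions):

```python
import numpy as np

def depth_compensate(stack, dz_um, ls_um=200.0):
    """Compensate two-photon signal decay with depth.
    Ballistic excitation attenuates as exp(-z/ls); because the signal scales
    with intensity squared, recorded fluorescence falls as exp(-2z/ls)."""
    z = np.arange(stack.shape[0]) * dz_um
    return stack * np.exp(2.0 * z / ls_um)[:, None, None]

# Synthetic z-stack of a uniform fluorophore with depth-dependent signal loss
ls_um, dz_um = 200.0, 5.0
z = np.arange(40) * dz_um
stack = np.exp(-2.0 * z / ls_um)[:, None, None] * np.ones((40, 32, 32))
flat = depth_compensate(stack, dz_um, ls_um)   # recovers the uniform volume
```

In practice the effective scattering length should be fitted from the data (e.g., median intensity per plane) rather than assumed.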
Common Mistakes
- Excessive laser power causing photodamage and heating deep in tissue
- Dispersion not pre-compensated (no pre-chirp), broadening pulses at the focus and reducing two-photon efficiency
- Crosstalk between emission channels when using multiple fluorophores
- Brain motion artifacts in in-vivo imaging not corrected
- Imaging too deep without correcting for signal attenuation with depth
How to Avoid Mistakes
- Titrate laser power to minimum effective level; monitor for tissue damage signs
- Use a prism-pair or grating pre-chirp compressor to maintain short pulses at the focus
- Select well-separated emission spectra and use appropriate dichroics and filters
- Apply real-time or post-hoc motion correction algorithms (rigid or non-rigid)
- Use adaptive optics or longer-wavelength excitation (three-photon) for deep tissue
Forward-Model Mismatch Cases
- The widefield fallback uses a linear Gaussian PSF, but two-photon excitation depends on intensity squared (I^2), producing a much tighter effective PSF — the fallback PSF is 40-60% wider than the true two-photon PSF
- The widefield model applies uniform illumination, but two-photon intrinsically provides optical sectioning (only the focal volume has sufficient intensity for I^2 absorption) — the out-of-focus background model is fundamentally wrong
How to Correct the Mismatch
- Use the two-photon operator with the squared PSF: effective_PSF = PSF_excitation^2, which is ~1.4x narrower than the single-photon PSF
- Model the nonlinear excitation correctly; for deep tissue, include scattering-induced PSF broadening and signal attenuation with depth
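The squared-PSF relation above can be verified numerically (the 1-D Gaussian profile and sigma value below are illustrative stand-ins for a measured excitation PSF):

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a sampled 1-D profile."""
    above = x[profile >= 0.5 * profile.max()]
    return above.max() - above.min()

x = np.linspace(-2.0, 2.0, 4001)            # lateral position [um]
psf_1p = np.exp(-x**2 / (2 * 0.25**2))      # single-photon excitation PSF
psf_2p = psf_1p**2                          # two-photon effective PSF

ratio = fwhm(x, psf_1p) / fwhm(x, psf_2p)   # ~ sqrt(2) ~ 1.41
```

Note that the narrowing factor of ~1.4x applies at a fixed excitation wavelength; in practice the longer NIR wavelength partly offsets this gain in absolute resolution.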
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Denk et al., 'Two-photon laser scanning fluorescence microscopy', Science 248, 73-76 (1990)
- Helmchen & Denk, 'Deep tissue two-photon microscopy', Nature Methods 2, 932-940 (2005)
Canonical Datasets
- Allen Brain Observatory two-photon calcium imaging
- Stringer et al. (2019) mouse V1 two-photon dataset
Ultrasonic Phased Array (TFM/FMC)
Ultrasonic Phased Array (TFM/FMC)
Ultrasound Imaging
Ultrasound imaging forms images by transmitting acoustic pulses into tissue and recording echoes reflected from impedance boundaries. In ultrafast plane-wave imaging, unfocused plane waves at multiple steering angles are transmitted and the received channel data are coherently compounded using delay-and-sum (DAS) beamforming. The forward model is governed by the acoustic wave equation with tissue-dependent speed of sound and attenuation. Primary degradations include speckle noise (coherent interference), limited bandwidth, and aberration from heterogeneous tissue.
Ultrasound Imaging
Description
Ultrasound imaging forms images by transmitting acoustic pulses into tissue and recording echoes reflected from impedance boundaries. In ultrafast plane-wave imaging, unfocused plane waves at multiple steering angles are transmitted and the received channel data are coherently compounded using delay-and-sum (DAS) beamforming. The forward model is governed by the acoustic wave equation with tissue-dependent speed of sound and attenuation. Primary degradations include speckle noise (coherent interference), limited bandwidth, and aberration from heterogeneous tissue.
Principle
Medical ultrasound imaging transmits short pulses of high-frequency sound waves (1-20 MHz) into tissue and detects the echoes reflected from acoustic impedance boundaries. The time delay of each echo determines the reflector depth, and beamforming focuses the transmitted and received beams to form a 2-D cross-sectional image. Spatial resolution improves with frequency but penetration depth decreases.
How to Build the System
A clinical ultrasound system consists of a multi-element transducer array (linear 7-15 MHz for superficial, curvilinear 2-5 MHz for abdominal, phased array 1-5 MHz for cardiac) connected to a beamformer and image processor. Modern systems use 128-192 element arrays with digital beamforming. Apply acoustic coupling gel between transducer and skin. Adjust gain, depth, focus, and frequency for the specific examination.
Common Reconstruction Algorithms
- Delay-and-sum (DAS) beamforming
- Adaptive beamforming (Capon, MVDR) for improved resolution
- Synthetic aperture focusing (SAFT)
- Plane-wave compounding for ultrafast imaging
- Deep-learning beamforming and speckle reduction
Common Mistakes
- Incorrect transducer selection (frequency too high for deep structures or too low for superficial)
- Poor acoustic coupling (air gaps) causing signal dropout
- Gain set too high, saturating the image and masking pathology
- Acoustic shadowing behind highly reflective structures misinterpreted as pathology
- Not adjusting focus zone depth to the region of interest
How to Avoid Mistakes
- Select transducer frequency appropriate for the imaging depth required
- Apply generous coupling gel and maintain constant contact pressure
- Adjust TGC (time-gain compensation) curve for uniform brightness with depth
- Recognize and account for acoustic artifacts (shadowing, enhancement, reverberation)
- Set the transmit focal zone at the depth of the target structure
Forward-Model Mismatch Cases
- The widefield fallback produces a 2D (64,64) image, but ultrasound acquires RF channel data of shape (n_depths, n_channels) from each transducer element — output shape (32,128) vs (64,64) makes beamforming algorithms incompatible
- Ultrasound imaging involves wave propagation, reflection at tissue interfaces, and time-of-flight encoding — the widefield Gaussian blur has no relationship to acoustic wave physics (speed of sound, impedance mismatch, attenuation)
How to Correct the Mismatch
- Use the ultrasound operator that models acoustic pulse transmission, tissue reflection, and per-element receive: each channel records the time-domain echo signal from scatterers at different depths
- Reconstruct B-mode images using delay-and-sum beamforming or adaptive beamforming (MVDR, coherence factor) that require the correct RF channel data format and speed-of-sound model
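A minimal delay-and-sum sketch on synthetic plane-wave channel data (the 64-element array geometry, 40 MHz sampling rate, and single ideal scatterer are toy assumptions):

```python
import numpy as np

def das_beamform(rf, t, elem_x, img_x, img_z, c=1540.0):
    """Delay-and-sum beamforming of plane-wave receive data.

    rf:      (n_samples, n_channels) RF traces, one per element
    t:       (n_samples,) sample times after transmit [s]
    elem_x:  (n_channels,) lateral element positions [m]
    Assumes a single plane wave at normal incidence (transmit delay = z/c).
    """
    dt = t[1] - t[0]
    img = np.zeros((len(img_z), len(img_x)))
    for iz, z in enumerate(img_z):
        for ix, x in enumerate(img_x):
            tof = z / c + np.sqrt(z**2 + (elem_x - x)**2) / c  # two-way time
            idx = np.round((tof - t[0]) / dt).astype(int)
            valid = (idx >= 0) & (idx < rf.shape[0])
            img[iz, ix] = rf[idx[valid], np.where(valid)[0]].sum()
    return img

# Toy data: 64-element linear array, fs = 40 MHz, point scatterer at z = 20 mm
c, fs = 1540.0, 40e6
elem_x = (np.arange(64) - 31.5) * 0.3e-3                 # 0.3 mm pitch
t = np.arange(2048) / fs
sx, sz = 0.0, 20e-3
tof = sz / c + np.sqrt(sz**2 + (elem_x - sx)**2) / c
rf = np.zeros((2048, 64))
rf[np.round(tof * fs).astype(int), np.arange(64)] = 1.0  # ideal impulse echoes

img_x = np.linspace(-5e-3, 5e-3, 21)
img_z = np.linspace(15e-3, 25e-3, 21)
img = das_beamform(rf, t, elem_x, img_x, img_z, c)       # peak at the scatterer
```

Real RF data would be band-limited pulses rather than impulses, and coherent compounding repeats this beamforming over multiple plane-wave steering angles before summation.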
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Montaldo et al., 'Coherent plane-wave compounding for very high frame rate ultrasonography', IEEE TUFFC 56, 489-506 (2009)
- Liebgott et al., 'PICMUS: Plane-wave Imaging Challenge in Medical Ultrasound', IEEE IUS 2016
Canonical Datasets
- PICMUS Challenge (plane-wave ultrasound)
- CUBDL (deep learning ultrasound beamforming)
US/MRI Fusion
US/MRI Fusion
Weather / Doppler Radar
Weather / Doppler Radar
Wide-Angle X-ray Scattering (WAXS)
Wide-Angle X-ray Scattering (WAXS)
Widefield Fluorescence Microscopy
Standard widefield epi-fluorescence microscopy where the entire field of view is illuminated simultaneously and the image is formed by convolution of the specimen fluorescence distribution with the system point spread function (PSF). Out-of-focus blur from planes above and below the focal plane is the primary degradation. The forward model is y = PSF ** x + n, where ** denotes convolution and n is mixed Poisson-Gaussian noise. Deconvolution via Richardson-Lucy or learned priors (CARE) restores resolution toward the diffraction limit.
Widefield Fluorescence Microscopy
Description
Standard widefield epi-fluorescence microscopy where the entire field of view is illuminated simultaneously and the image is formed by convolution of the specimen fluorescence distribution with the system point spread function (PSF). Out-of-focus blur from planes above and below the focal plane is the primary degradation. The forward model is y = PSF ** x + n, where ** denotes convolution and n is mixed Poisson-Gaussian noise. Deconvolution via Richardson-Lucy or learned priors (CARE) restores resolution toward the diffraction limit.
Principle
The entire specimen is illuminated uniformly and fluorescence from all planes is collected simultaneously. The image is the convolution of the 3-D fluorescence distribution with the microscope point-spread function (PSF), dominated by out-of-focus blur from planes above and below the focal plane.
How to Build the System
Mount an infinity-corrected high-NA objective (≥1.3 NA oil) on an inverted body (Nikon Ti2 or Zeiss Observer). Install a multi-band LED engine (e.g., Lumencor SPECTRA X) coupled through a liquid light guide. Select matched excitation/dichroic/emission filter sets. Focus Köhler illumination for flat-field. Attach an sCMOS camera (Hamamatsu Flash4 or Photometrics Prime BSI) at the side port. Calibrate pixel size with a stage micrometer.
Common Reconstruction Algorithms
- Richardson-Lucy deconvolution
- Wiener filtering
- CARE (Content-Aware image REstoration) deep-learning deconvolution
- Total-variation regularized deconvolution
- Blind deconvolution (PSF estimation + image update)
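A minimal Richardson-Lucy sketch (assuming NumPy and SciPy are available; the 33x33 grid, sigma=2 Gaussian PSF, point-source phantom, and iteration count are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy iteration: x <- x * (PSF_flipped * (y / (PSF * x)))."""
    x = np.full_like(y, y.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode="same")
        x *= fftconvolve(y / np.maximum(est, eps), psf_flip, mode="same")
    return x

# Toy example: blur a point source with a normalized Gaussian PSF, then deconvolve
n = 33
g = np.arange(n) - n // 2
psf = np.exp(-(g[:, None]**2 + g[None, :]**2) / (2 * 2.0**2))
psf /= psf.sum()

truth = np.zeros((n, n)); truth[16, 16] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=100)   # peak sharpens at (16, 16)
```

With noisy data, use early stopping (monitor the residual) or regularization; unconstrained iteration amplifies noise.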
Common Mistakes
- Using an incorrect theoretical PSF, or a measured PSF acquired under mismatched refractive-index settings
- Ignoring flatfield non-uniformity, leading to intensity shading
- Over-iterating Richardson-Lucy causing noise amplification
- Mismatched immersion medium vs. coverslip thickness causing spherical aberration
- Not correcting for photobleaching across a time-lapse series
How to Avoid Mistakes
- Measure the PSF with sub-diffraction beads at the same coverslip/medium as the sample
- Acquire and apply a flatfield correction image before deconvolution
- Use regularization or early stopping (monitor residual) in iterative deconvolution
- Match immersion oil RI to the coverslip and mounting medium specifications
- Normalize intensity per frame or use photobleaching-corrected models
Forward-Model Mismatch Cases
- No forward-model mismatch: the widefield Gaussian blur IS the correct operator for this modality (sigma=2.0 PSF convolution)
- Minor mismatch may arise if the actual microscope PSF differs from the default Gaussian (e.g., measured PSF with aberrations)
How to Correct the Mismatch
- The default widefield operator is already correct; no correction needed
- For higher fidelity, replace the Gaussian PSF with a measured or Born & Wolf PSF model matching the actual objective NA and wavelength
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Richardson, 'Bayesian-based iterative method of image restoration', J. Opt. Soc. Am. 62, 55-59 (1972)
- Weigert et al., 'Content-aware image restoration (CARE)', Nature Methods 15, 1090-1097 (2018)
Canonical Datasets
- BioSR (Zhang et al., Nature Methods 2023)
- Hagen et al. widefield deconvolution benchmark
X-ray Angiography
Digital subtraction angiography (DSA) visualizes blood vessels by subtracting a pre-contrast mask image from post-contrast images acquired after injecting iodinated contrast agent. The subtraction eliminates static anatomy, isolating vascular structures. The forward model is y_post - y_pre = Delta_mu * t_vessel + n where Delta_mu is the attenuation increase from iodine. Primary challenges include patient motion between mask and contrast frames, breathing artifacts, and superposition of overlapping vessels.
X-ray Angiography
Description
Digital subtraction angiography (DSA) visualizes blood vessels by subtracting a pre-contrast mask image from post-contrast images acquired after injecting iodinated contrast agent. The subtraction eliminates static anatomy, isolating vascular structures. The forward model is y_post - y_pre = Delta_mu * t_vessel + n where Delta_mu is the attenuation increase from iodine. Primary challenges include patient motion between mask and contrast frames, breathing artifacts, and superposition of overlapping vessels.
Principle
X-ray angiography visualizes blood vessels by injecting iodinated contrast agent and acquiring rapid-sequence fluoroscopic images. Digital Subtraction Angiography (DSA) subtracts a pre-contrast mask image from post-contrast frames, removing bone and soft tissue to show only the contrast-filled vasculature with high contrast and spatial resolution.
How to Build the System
Use a biplane or single-plane angiography suite with high-speed flat-panel detectors (30-60 fps capability). The C-arm provides multi-angle positioning. Power injector delivers iodinated contrast (350-370 mgI/mL) at controlled rates. Road-mapping mode overlays vessel map on live fluoro for catheter guidance. 3-D rotational angiography acquires a spin to reconstruct a volume of the vasculature.
Common Reconstruction Algorithms
- Digital subtraction (mask-live image subtraction)
- Pixel shifting for motion compensation in DSA
- 3-D rotational angiography reconstruction (FDK or iterative)
- Time-density curve analysis for perfusion assessment
- Deep-learning vessel segmentation and stenosis quantification
Common Mistakes
- Patient motion between mask and contrast frames causing misregistration artifacts
- Inadequate contrast bolus timing causing suboptimal vessel opacification
- Overexposure or underexposure of the detector outside the linear range
- Bowel gas or cardiac motion causing subtraction artifacts
- Injecting contrast too fast, creating reflux or missing distal vessels
How to Avoid Mistakes
- Instruct patients to remain still; use pixel shifting or elastic registration
- Use test bolus or timing run to determine optimal injection-to-imaging delay
- Use automatic dose rate control; verify detector within calibrated dynamic range
- Use cardiac gating for coronary or thoracic angiography
- Adjust injection rate and volume to vessel size and flow characteristics
Forward-Model Mismatch Cases
- The widefield fallback applies Gaussian blur, but angiography uses X-ray transmission with iodine contrast agent — the exponential attenuation model with contrast-enhanced vessels is not a simple convolution
- Digital subtraction angiography (DSA) requires temporal subtraction between pre- and post-contrast images to isolate vessels — the widefield model has no temporal component and cannot model contrast dynamics
How to Correct the Mismatch
- Use the angiography operator implementing contrast-enhanced X-ray transmission: y = I_0 * exp(-(mu_tissue*t + mu_iodine*c(t))) where c(t) models contrast agent concentration dynamics
- Apply temporal subtraction (post-contrast minus pre-contrast) or parametric mapping of contrast kinetics using the correct time-resolved forward model
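The temporal subtraction step can be sketched in the log-attenuation domain, where static anatomy cancels exactly (the phantom, intensities, and attenuation values are illustrative assumptions):

```python
import numpy as np

def dsa_subtract(pre, post, eps=1e-6):
    """DSA in the log-attenuation domain: log(I_pre) - log(I_post)
    cancels static anatomy and isolates the iodine term mu_iodine * c."""
    return np.log(np.maximum(pre, eps)) - np.log(np.maximum(post, eps))

# Toy phantom: static anatomy plus an iodine-filled vessel in the post frame
rng = np.random.default_rng(0)
I0 = 1000.0
mu_tissue = 0.5 + 0.1 * rng.random((64, 64))         # static background
vessel = np.zeros((64, 64)); vessel[:, 30:34] = 0.8  # iodine column density

pre = I0 * np.exp(-mu_tissue)
post = I0 * np.exp(-(mu_tissue + vessel))
dsa = dsa_subtract(pre, post)   # recovers the vessel map
```

With patient motion, the mask must be registered to the contrast frame (pixel shifting or elastic registration) before this subtraction.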
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Clinical DSA practice standards (ACC/AHA guidelines)
Canonical Datasets
- IntrA (intracranial aneurysm 3DRA dataset)
X-ray Computed Tomography
X-ray CT reconstructs cross-sectional images from a set of line-integral projections (sinogram) acquired as an X-ray source and detector array rotate around the patient. The forward model is the Radon transform: y = R*x + n where R computes line integrals along each ray. Sparse-view and low-dose protocols reduce radiation but introduce streak artifacts and noise. Reconstruction uses filtered back-projection (FBP) or iterative methods (MBIR, DL post-processing).
X-ray Computed Tomography
Description
X-ray CT reconstructs cross-sectional images from a set of line-integral projections (sinogram) acquired as an X-ray source and detector array rotate around the patient. The forward model is the Radon transform: y = R*x + n where R computes line integrals along each ray. Sparse-view and low-dose protocols reduce radiation but introduce streak artifacts and noise. Reconstruction uses filtered back-projection (FBP) or iterative methods (MBIR, DL post-processing).
Principle
X-ray Computed Tomography reconstructs cross-sectional images from multiple X-ray projection measurements acquired at different angles around the patient. The Beer-Lambert law governs X-ray attenuation: I = I₀ exp(-∫μ(x,y) dl), and the Radon transform relates projections to the attenuation map. Filtered back-projection or iterative algorithms invert the Radon transform to produce volumetric images.
How to Build the System
A clinical CT scanner consists of a rotating gantry with an X-ray tube (80-140 kVp, 50-800 mA) and a curved detector array (64-320 rows of scintillator-photodiode elements) on opposing sides. The gantry rotates at 0.25-0.5 s per revolution. Helical scanning moves the patient table continuously through the gantry. Key calibrations: air scans, detector gain normalization, beam-hardening correction LUTs, and geometric calibration.
Common Reconstruction Algorithms
- Filtered back-projection (FBP) with Ram-Lak or Shepp-Logan filter
- FDK (Feldkamp-Davis-Kress) for cone-beam geometry
- Iterative reconstruction: SART, OS-SIRT
- Model-based iterative reconstruction (MBIR) with statistical noise model
- Deep-learning reconstruction (FBPConvNet, LEARN, WGAN-VGG for low-dose CT)
Common Mistakes
- Ring artifacts from uncorrected detector gain variations
- Beam-hardening artifacts (cupping, streaks near bone/metal) not corrected
- Patient motion during scan causing blurring and streaks
- Insufficient angular sampling producing streak or aliasing artifacts
- Metal artifacts from implants overwhelming reconstruction algorithms
How to Avoid Mistakes
- Perform regular air calibrations and detector flatfield corrections
- Apply polynomial beam-hardening correction or dual-energy decomposition
- Use gating (cardiac/respiratory) or fast rotation to reduce motion artifacts
- Ensure adequate number of projections (≥ π × detector columns for FBP)
- Use metal artifact reduction algorithms (MAR, iterative forward-projection inpainting)
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred (64,64) image, but CT acquires a sinogram of shape (180,64) via the Radon transform (line integrals at multiple angles) — any reconstruction algorithm expecting sinogram input will crash
- The Gaussian blur preserves spatial structure, but the Radon transform converts spatial information into angular projections — the fallback output bears no physical relationship to X-ray transmission measurements
How to Correct the Mismatch
- Use the CT operator implementing the discrete Radon transform: y(theta,s) = integral of f(x,y) along line at angle theta and offset s, producing a (n_angles, n_detectors) sinogram
- Reconstruct using filtered back-projection (FBP) or iterative algorithms (SART, ADMM-TV) that require the correct Radon transform / back-projection pair
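A minimal discrete Radon transform sketch via rotate-and-sum (assuming SciPy; the disc phantom and 180-view geometry are illustrative, and production code would use a matched projector/back-projector pair such as skimage.transform.radon/iradon):

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Discrete Radon transform: rotate the image and sum along columns,
    giving line integrals per angle -> sinogram of shape (n_angles, n_det)."""
    return np.stack([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

# Toy phantom: a centered disc, 180 projections over 180 degrees
n = 64
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - 32)**2 + (yy - 32)**2 < 12**2).astype(float)
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, angles)   # (180, 64) sinogram
```

Each sinogram row conserves the total mass of the phantom (up to interpolation error), which is a quick sanity check on any projector implementation.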
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Feldkamp et al., 'Practical cone-beam algorithm', J. Opt. Soc. Am. A 1, 612-619 (1984)
- Leuschner et al., 'LoDoPaB-CT, a benchmark dataset for low-dose CT reconstruction', Scientific Data 8, 109 (2021)
Canonical Datasets
- LoDoPaB-CT (Scientific Data 2021)
- DeepLesion (NIH Clinical Center)
- AAPM Low-Dose CT Grand Challenge
X-ray Crystallography
X-ray Crystallography
X-ray Fluorescence (XRF) Imaging
X-ray Fluorescence (XRF) Imaging
X-ray Fluorescence Tomography
X-ray Fluorescence Tomography
X-ray NDT (Radiography)
X-ray NDT (Radiography)
X-ray Radiography
Digital X-ray radiography produces a 2D projection image by transmitting X-rays through the body onto a flat-panel detector. The forward model follows Beer-Lambert attenuation: y = I_0 * exp(-integral(mu(s) ds)) + n where mu is the linear attenuation coefficient along each ray. The image is a superposition of all structures along the beam path. Primary degradations include quantum noise (Poisson), scatter, and geometric magnification artifacts.
X-ray Radiography
Description
Digital X-ray radiography produces a 2D projection image by transmitting X-rays through the body onto a flat-panel detector. The forward model follows Beer-Lambert attenuation: y = I_0 * exp(-integral(mu(s) ds)) + n where mu is the linear attenuation coefficient along each ray. The image is a superposition of all structures along the beam path. Primary degradations include quantum noise (Poisson), scatter, and geometric magnification artifacts.
Principle
X-ray radiography produces a 2-D projection image of the patient's internal structures by measuring the transmitted X-ray intensity after passing through the body. Dense structures (bone, metal) attenuate more X-rays and appear bright in the displayed radiograph. The image represents the line-integral of the attenuation coefficient along each ray path.
How to Build the System
An X-ray tube (stationary or rotating anode, 40-150 kVp) produces a divergent beam. The patient stands or lies between the tube and a flat-panel detector (amorphous silicon with CsI scintillator, or amorphous selenium for direct conversion). Anti-scatter grid (Bucky grid) is placed before the detector. Automatic exposure control (AEC) sets mAs based on patient thickness. Calibration includes dark field, flatfield, and defective pixel mapping.
Common Reconstruction Algorithms
- Flat-field correction (gain/offset normalization)
- Logarithmic transform for linear attenuation mapping
- Anti-scatter grid artifact removal
- Dual-energy subtraction (bone/soft-tissue separation)
- Deep-learning denoising for low-dose radiography
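The gain/offset normalization in the first item can be sketched as follows (the synthetic per-pixel gain map, dark level, and scene are illustrative assumptions):

```python
import numpy as np

def flat_field_correct(raw, flat, dark, eps=1e-6):
    """Gain/offset normalization: remove detector fixed-pattern response,
    then rescale to the mean open-beam level."""
    num = raw.astype(float) - dark
    den = np.maximum(flat.astype(float) - dark, eps)
    return num / den * den.mean()

rng = np.random.default_rng(1)
gain = 0.8 + 0.4 * rng.random((64, 64))     # per-pixel gain variation
dark = 100.0 * np.ones((64, 64))            # detector offset (dark frame)
scene = np.full((64, 64), 500.0); scene[16:48, 16:48] = 200.0

raw = gain * scene + dark
flat = gain * 500.0 + dark                  # open-beam (flat-field) exposure
corrected = flat_field_correct(raw, flat, dark)
```

After correction, the fixed-pattern gain variation cancels and only the scene contrast remains; defective-pixel maps are typically applied at the same stage.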
Common Mistakes
- Under-exposure causing excessive quantum noise, especially in obese patients
- Grid artifacts from misaligned anti-scatter grid
- Patient motion blur in long-exposure radiographs
- Incorrect windowing (display LUT) obscuring diagnostic information
- Scatter radiation degrading image contrast in thick body parts
How to Avoid Mistakes
- Use AEC and verify exposure indicator falls within acceptable range
- Ensure grid is properly aligned with the X-ray focal spot distance
- Use shortest possible exposure time; instruct patient to hold breath
- Apply appropriate DICOM windowing presets for the anatomical region
- Use an appropriate anti-scatter grid ratio (8:1 to 12:1) for thick body parts
Forward-Model Mismatch Cases
- The widefield fallback applies a linear Gaussian blur, but X-ray radiography follows Beer-Lambert attenuation: I = I_0 * exp(-integral(mu(x,y,z) dz)) — the exponential transmission model is fundamentally different from linear convolution
- The Gaussian blur preserves mean intensity, but X-ray attenuation reduces intensity exponentially with material thickness and density — the fallback cannot model absorption contrast, bone/soft-tissue differentiation, or scatter
How to Correct the Mismatch
- Use the X-ray radiography operator implementing Beer-Lambert transmission: y = I_0 * exp(-A*x) + scatter + noise, where A is the projection matrix along the beam direction
- Include scatter rejection (anti-scatter grid model), detector response (DQE), and quantum noise (Poisson statistics) for physically accurate forward modeling
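A minimal forward-model sketch with Beer-Lambert transmission and Poisson quantum noise (scatter and detector DQE omitted; the toy attenuation volume and photon budget are illustrative assumptions):

```python
import numpy as np

def radiograph(mu_volume, I0=10000.0, axis=0, rng=None):
    """Beer-Lambert projection with optional Poisson quantum noise:
    expected counts I = I0 * exp(-sum(mu) along the beam axis)."""
    expected = I0 * np.exp(-mu_volume.sum(axis=axis))
    if rng is None:
        return expected
    return rng.poisson(expected).astype(float)

# Toy volume: uniform soft tissue with an embedded dense "bone" block
mu = np.full((32, 64, 64), 0.01)   # (depth, rows, cols) attenuation per voxel
mu[:, 20:40, 20:40] += 0.05        # denser insert spanning the beam path

clean = radiograph(mu)                                   # noiseless projection
noisy = radiograph(mu, rng=np.random.default_rng(0))     # quantum-limited image
```

The insert transmits fewer photons (lower counts), matching the absorption-contrast behavior the widefield fallback cannot reproduce.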
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Irvin et al., 'CheXpert: A large chest radiograph dataset', AAAI 2019
- Wang et al., 'ChestX-ray8: Hospital-scale chest X-ray database', CVPR 2017
Canonical Datasets
- CheXpert (Stanford, 224K studies)
- MIMIC-CXR (MIT/BIDMC, 377K images)
- NIH ChestX-ray14 (112K images)