Physics World Model — Modality Catalog
25 imaging modalities with descriptions, experimental setups, and reconstruction guidance.
Confocal 3D Z-Stack
Description
Three-dimensional confocal imaging by acquiring a z-stack of optical sections. Each slice is convolved with the 3D confocal PSF. The anisotropic PSF (axial resolution ~3x worse than lateral) is a key challenge. 3D Richardson-Lucy or CARE-3D are used for volumetric deconvolution. The forward model is y(x,y,z) = PSF_3d *** x(x,y,z) + n where *** denotes 3D convolution.
Principle
Same confocal principle as live-cell mode but acquiring a full z-stack by stepping the objective or sample through the focal plane. Each optical section is convolved with the 3-D confocal PSF, and the full volume is reconstructed by 3-D deconvolution to recover isotropic resolution.
How to Build the System
Use a high-NA objective (60-100x, 1.4 NA oil or 1.2 NA water) with a piezo z-stage for precise, repeatable z-steps (typ. 200-300 nm). Acquire z-stacks covering the specimen thickness with Nyquist z-sampling. For fixed samples, oil immersion is preferred; for thick tissue, use silicone oil or glycerol objectives to minimize RI mismatch deep in the sample.
Common Reconstruction Algorithms
- 3-D Richardson-Lucy deconvolution
- 3-D Wiener / Tikhonov deconvolution
- Huygens Professional iterative deconvolution
- DeconvolutionLab2 (open-source ImageJ deconvolution platform)
- Deep-learning volumetric restoration (3-D U-Net, RCAN3D)
Common Mistakes
- Using z-step larger than Nyquist, causing axial aliasing
- Depth-dependent spherical aberration from RI mismatch not corrected
- Not accounting for signal attenuation deeper in the sample
- Applying 2-D deconvolution slice-by-slice instead of full 3-D
- Incorrect PSF model (2-D Gaussian instead of 3-D Born & Wolf model)
How to Avoid Mistakes
- Calculate Nyquist z-step (λ / (4·n·(1-cos α))) and sample accordingly
- Use depth-dependent PSF models or adaptive optics for thick specimens
- Apply intensity normalization per z-slice before deconvolution
- Always perform true 3-D deconvolution to preserve axial information
- Use measured 3-D PSF from sub-diffraction beads embedded at the correct depth
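The Nyquist z-step formula above can be evaluated directly; a minimal sketch, where the 520 nm emission and 1.4 NA / oil (n = 1.515) values are illustrative examples rather than values from this catalog:

```python
import math

def nyquist_z_step(wavelength_nm: float, n: float, na: float) -> float:
    """Axial Nyquist step dz = lambda / (4 * n * (1 - cos(alpha))),
    with half-aperture angle alpha = asin(NA / n)."""
    alpha = math.asin(na / n)
    return wavelength_nm / (4.0 * n * (1.0 - math.cos(alpha)))

# Example: 520 nm emission, 1.4 NA oil objective (n = 1.515)
dz = nyquist_z_step(520, 1.515, 1.4)
print(f"Nyquist z-step: {dz:.0f} nm")  # prints: Nyquist z-step: 139 nm
```

Stepping coarser than this value aliases axial frequencies that the PSF can still pass.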
Forward-Model Mismatch Cases
- The widefield fallback processes only 2D (64,64) images, but confocal 3D requires volumetric input (32,64,64) — the entire z-stack is discarded, losing all axial information
- Applying 2D deconvolution slice-by-slice instead of true 3D deconvolution produces incorrect axial resolution and misses inter-slice correlations from the 3D PSF
How to Correct the Mismatch
- Use the 3D confocal operator that processes full z-stack volumes with the anisotropic 3D PSF (worse axial than lateral resolution)
- Perform true 3D deconvolution using the measured or modeled 3D confocal PSF; never decompose a z-stack into independent 2D slices
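As a concrete sketch of true 3-D deconvolution, a minimal Richardson-Lucy loop over volumes; the anisotropic Gaussian PSF here (axial sigma 3x lateral) is a stand-in for a measured or Born & Wolf PSF, and the sizes are illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(y, psf, n_iter=20, eps=1e-12):
    """Plain 3-D Richardson-Lucy: x_{k+1} = x_k * (PSF^T *** (y / (PSF *** x_k))).
    y and psf are 3-D arrays (z, y, x); the flipped PSF implements the adjoint."""
    x = np.full_like(y, y.mean())
    psf_flip = psf[::-1, ::-1, ::-1]
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode="same")
        ratio = y / np.maximum(est, eps)
        x = x * fftconvolve(ratio, psf_flip, mode="same")
    return x

# Anisotropic Gaussian stand-in for the confocal PSF (axial sigma 3x lateral)
zz, yy, xx = np.mgrid[-8:9, -8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2) - zz**2 / (2 * 4.5**2))
psf /= psf.sum()

truth = np.zeros((32, 32, 32)); truth[16, 16, 16] = 100.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy_3d(blurred, psf, n_iter=15)
```

Because the convolution and its adjoint both act on the full volume, axial information is deconvolved jointly with lateral information rather than slice-by-slice.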
Key References
- McNally et al., 'Three-dimensional imaging by deconvolution microscopy', Methods 23, 210-217 (1999)
- Weigert et al., 'Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks', MICCAI 2017
Canonical Datasets
- Planaria 3D confocal dataset (Weigert et al.)
- BioSR confocal 3D subset
Confocal Live-Cell Microscopy
Description
Laser scanning confocal microscopy for live-cell imaging. A focused laser scans the specimen point by point, and a pinhole rejects out-of-focus light. The image formation is modelled as convolution with the confocal PSF (product of excitation and detection PSFs). Fast acquisition rates for live cells often sacrifice SNR due to short pixel dwell times. Reconstruction involves deconvolution with the confocal PSF and temporal denoising across frames.
Principle
A focused laser spot is scanned across the specimen and a pinhole in front of the detector rejects out-of-focus fluorescence, providing optical sectioning. The image formation is modeled as a point-by-point convolution with the confocal PSF (product of excitation and detection PSFs). For live-cell work, speed and gentleness are prioritized.
How to Build the System
Equip a laser-scanning confocal head (e.g., Nikon A1R, Zeiss LSM 980 Airyscan) on an inverted microscope with an environmental enclosure. Use a resonant scanner for fast (30 fps) imaging. Set pinhole to 1 Airy unit for best sectioning or open slightly (1.2 AU) for more signal. Use 40-60x water-immersion objectives for live cells to match RI of aqueous media.
Common Reconstruction Algorithms
- Airyscan joint deconvolution (Zeiss)
- Richardson-Lucy with measured confocal PSF
- Sparse deconvolution (Hessian regularization)
- Deep-learning denoising (Noise2Fast, DnCNN)
- Pixel reassignment (ISM) for resolution doubling
Common Mistakes
- Setting pinhole too small, drastically reducing signal in live cells
- Scanning too slowly, causing phototoxicity and photobleaching
- Using oil-immersion objectives for aqueous samples, introducing spherical aberration
- Ignoring chromatic aberration when imaging multiple channels simultaneously
- Oversampling (too many pixels) leading to excessive total dose with no resolution gain
How to Avoid Mistakes
- Match pinhole to 1 AU and use resonant scanning + frame averaging for speed
- Minimize pixel dwell time and total exposure; use sensitive GaAsP detectors
- Select water-immersion objectives for live aqueous samples
- Calibrate chromatic offsets with multi-color beads and apply corrections
- Follow Nyquist sampling (pixel size ~ 0.4× resolution limit); avoid oversampling
Forward-Model Mismatch Cases
- The widefield fallback uses sigma=2.0, but confocal PSF is sharper (sigma~1.2-1.5) due to the pinhole rejecting out-of-focus light — the fallback over-blurs by 30-60%, destroying resolvable features
- Confocal provides optical sectioning (only in-focus plane contributes signal), while widefield collects fluorescence from all planes — reconstructions using widefield PSF will have incorrect out-of-focus model
How to Correct the Mismatch
- Use the confocal operator with the correct PSF (product of excitation and detection PSFs, effective sigma~1.2-1.5) matching the pinhole size and objective NA
- Model the confocal sectioning effect explicitly; for live-cell work, use the confocal PSF that accounts for pinhole size (1 Airy unit) and emission wavelength
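The sigma mismatch described above can be illustrated with a toy forward model; the sigma values follow the text, while the random point-grid scene is purely illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward(x, sigma):
    """Shift-invariant Gaussian-PSF forward model y = PSF(sigma) ** x."""
    return gaussian_filter(x, sigma)

rng = np.random.default_rng(0)
x = np.zeros((64, 64))
x[rng.integers(8, 56, 30), rng.integers(8, 56, 30)] = 1.0  # point-like structures

y_confocal = forward(x, sigma=1.3)   # pinhole-narrowed confocal PSF
y_widefield = forward(x, sigma=2.0)  # fallback widefield PSF: over-blurred

# The fallback spreads each point over a wider area, lowering peak contrast
print(y_confocal.max(), y_widefield.max())
```

Since a sigma=2.0 Gaussian is a sigma=1.3 Gaussian followed by extra blur, the fallback can only lose contrast relative to the correct confocal operator.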
Key References
- Minsky, 'Memoir on inventing the confocal scanning microscope', Scanning 10, 128-138 (1988)
- McNally et al., 'Three-dimensional imaging by deconvolution microscopy', Methods 23, 210-217 (1999)
Canonical Datasets
- Cell Tracking Challenge confocal sequences
- BioSR confocal subset
Dark-Field Microscopy
Differential Interference Contrast (DIC)
DNA-PAINT Super-Resolution
Expansion Microscopy (ExM)
Fluorescence Lifetime Imaging
Description
Fluorescence lifetime imaging microscopy (FLIM) measures the exponential decay time of fluorescence emission at each pixel, providing contrast based on the molecular environment rather than intensity alone. In time-correlated single-photon counting (TCSPC), each detected photon is time-tagged relative to the excitation pulse, building a histogram of arrival times that is fitted to single- or multi-exponential decay models. The phasor approach provides a fit-free analysis in Fourier space. Primary challenges include low photon counts and instrument response function (IRF) deconvolution.
Principle
Fluorescence Lifetime Imaging measures the exponential decay time of fluorophore emission (typically 1-10 ns) rather than intensity. Lifetime is sensitive to the fluorophore's local chemical environment (pH, ion concentration, FRET) but independent of concentration and photobleaching. Detection uses either time-correlated single-photon counting (TCSPC) or frequency-domain phase/modulation methods.
How to Build the System
Add a pulsed laser source (ps diode laser or Ti:Sapphire, 40-80 MHz repetition rate) to a confocal or widefield microscope. For TCSPC, install single-photon counting detectors (hybrid PMTs or SPADs) with timing electronics (Becker & Hickl SPC-150/830 or PicoQuant TimeHarp). For widefield FLIM, use a gated or modulated camera (Lambert Instruments). Synchronize laser pulses with detector timing.
Common Reconstruction Algorithms
- Mono-exponential / bi-exponential tail fitting (least-squares or MLE)
- Phasor analysis (model-free lifetime decomposition)
- Global analysis (linked lifetime fitting across pixels)
- Bayesian lifetime estimation
- Deep-learning FLIM (FLIMnet, rapid lifetime prediction from few photons)
Common Mistakes
- Insufficient photon counts for reliable lifetime fitting (need ≥1000 photons/pixel)
- Ignoring instrument response function (IRF) convolution in the fit
- Using mono-exponential fit for multi-component decays, obtaining incorrect average lifetimes
- Pile-up effect at high count rates distorting the decay histogram
- Background autofluorescence contributing a long-lifetime component
How to Avoid Mistakes
- Collect sufficient photons; use longer acquisition or binning if needed
- Measure IRF with a scattering sample and convolve with the model in fitting
- Evaluate fit residuals; use bi-exponential or phasor if mono-exponential is poor
- Keep count rate below 1-5 % of the laser repetition rate to avoid pile-up
- Measure autofluorescence lifetime separately and include in the fit model
Forward-Model Mismatch Cases
- The widefield fallback produces a single 2D intensity image (64,64), but FLIM measures fluorescence lifetime decay at each pixel — output shape (64,64,64) includes the temporal decay dimension
- FLIM forward model is nonlinear (exponential decay convolved with IRF: y(t) = IRF * sum(a_i * exp(-t/tau_i))), while the widefield linear blur cannot represent lifetime information at all
How to Correct the Mismatch
- Use the FLIM operator that generates time-resolved fluorescence decay histograms at each pixel, including IRF convolution and multi-exponential decay components
- Reconstruct lifetimes using phasor analysis or exponential fitting on the temporal dimension; the correct forward model preserves the relationship between decay time and local chemical environment
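A minimal sketch of the fit-free phasor route on a simulated mono-exponential TCSPC histogram; the bin count, 80 MHz repetition rate, and 2.5 ns lifetime are illustrative values:

```python
import numpy as np

def phasor_lifetime(decay, dt, rep_rate_hz):
    """Fit-free phasor analysis at the laser frequency w = 2*pi*f:
    g = <I cos(wt)> / <I>, s = <I sin(wt)> / <I>;
    for a mono-exponential decay, tau = s / (g * w)."""
    w = 2 * np.pi * rep_rate_hz
    t = (np.arange(decay.size) + 0.5) * dt   # bin centres
    g = np.sum(decay * np.cos(w * t)) / decay.sum()
    s = np.sum(decay * np.sin(w * t)) / decay.sum()
    return s / (g * w)

# Simulated TCSPC histogram: tau = 2.5 ns, 80 MHz laser, 64 time bins
rep = 80e6
n_bins = 64
dt = (1 / rep) / n_bins          # 12.5 ns window split into 64 bins
tau = 2.5e-9
t = (np.arange(n_bins) + 0.5) * dt
decay = np.exp(-t / tau)

tau_est = phasor_lifetime(decay, dt, rep)
print(f"recovered lifetime: {tau_est * 1e9:.2f} ns")
```

For multi-exponential pixels the (g, s) point falls inside the universal semicircle instead of on it, which is what makes the phasor plot a useful model-free diagnostic.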
Key References
- Becker, 'Advanced Time-Correlated Single Photon Counting Techniques', Springer (2005)
- Digman et al., 'The phasor approach to fluorescence lifetime imaging', Biophysical Journal 94, L14-L16 (2008)
Canonical Datasets
- FLIM-FRET standard sample datasets (Becker & Hickl)
- FLIM phasor benchmark (Digman lab)
Fourier Ptychographic Microscopy
Description
Fourier ptychographic microscopy (FPM) achieves a high space-bandwidth product by illuminating the sample from multiple angles using an LED array, capturing a set of low-resolution images, and computationally stitching them in Fourier space to synthesize a high-NA image with both amplitude and phase. Each LED angle shifts the sample's spatial frequency spectrum in Fourier space, and overlapping spectral regions provide redundancy for phase retrieval. The synthetic NA equals the objective NA plus the illumination NA. Reconstruction uses iterative phase retrieval algorithms (sequential or gradient-based).
Principle
Fourier Ptychographic Microscopy synthetically increases the NA of a low-magnification objective by illuminating the sample from multiple angles (LED array) and computationally stitching together the resulting images in Fourier space. Each LED angle shifts the sample spectrum so different spatial-frequency bands enter the objective pupil, allowing recovery of both amplitude and phase at high resolution over a large field of view.
How to Build the System
Replace the microscope condenser with a programmable LED matrix (e.g., 32×32 RGB LED array, ~4 mm pitch, placed ~80 mm above the sample). Use a low-magnification objective (4-10×, 0.1-0.3 NA) for large FOV. Acquire one image per LED (typically 100-300 images for the full matrix). Precise knowledge of LED positions is required for Fourier-space stitching.
Common Reconstruction Algorithms
- Alternating projection (Gerchberg-Saxton style in Fourier space)
- Embedded pupil function recovery (joint sample + aberration estimation)
- Wirtinger gradient descent with total-variation regularization
- Neural network-accelerated FPM (learned initialization + refinement)
- Multiplexed FPM (multiple LEDs simultaneously for faster acquisition)
Common Mistakes
- Inaccurate LED position calibration causing ghosting and resolution loss
- Insufficient overlap between Fourier-space patches (need ≥60 % overlap)
- Ignoring pupil aberrations of the low-NA objective
- LED intensity non-uniformity not corrected across the array
- Vibration or sample drift between sequential LED acquisitions
How to Avoid Mistakes
- Calibrate LED positions using a self-calibration algorithm or known test target
- Ensure adequate angular spacing to maintain >60% Fourier overlap between adjacent LEDs
- Use embedded pupil recovery to jointly estimate and correct aberrations
- Normalize LED intensities with a blank-sample calibration acquisition
- Stabilize the setup mechanically; use fast cameras to minimize inter-frame drift
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) image, but FPM acquires 25+ images from different LED illumination angles — output shape (25,16,16) captures distinct spatial-frequency bands for each angle
- FPM is fundamentally nonlinear (intensity = |F^-1{P * F{O * exp(i*k_led*r)}}|^2) — the widefield linear blur cannot model the coherent pupil filtering and phase recovery that enables synthetic aperture
How to Correct the Mismatch
- Use the FPM operator that generates one low-resolution intensity image per LED angle, each capturing a different region of the sample's Fourier spectrum shifted by the illumination wavevector
- Reconstruct using alternating projection (Gerchberg-Saxton in Fourier space) or embedded pupil recovery, which require the correct coherent forward model with known LED positions
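The coherent forward model above can be sketched for a single LED as follows; as a simplification the camera samples on the same grid as the object (no downsampling), and the pupil radius and tilt values are illustrative:

```python
import numpy as np

def fpm_measurement(obj, pupil_radius, kx, ky):
    """One FPM low-res intensity image:
    I = |F^-1{ P . F{ O * exp(i(kx x + ky y)) } }|^2,
    where (kx, ky) is the Fourier shift (in pixels) from one LED and P a circular pupil."""
    n = obj.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    tilted = obj * np.exp(2j * np.pi * (kx * xx + ky * yy) / n)
    spec = np.fft.fftshift(np.fft.fft2(tilted))
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = (fx**2 + fy**2) <= pupil_radius**2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spec * pupil)))**2

# Complex object (amplitude + phase); each LED tilt shifts a different band into the pupil
rng = np.random.default_rng(1)
obj = np.exp(1j * rng.standard_normal((64, 64)))
img_center = fpm_measurement(obj, pupil_radius=10, kx=0, ky=0)
img_oblique = fpm_measurement(obj, pupil_radius=10, kx=8, ky=0)
```

The two images differ because each tilt passes a different spectral band through the same pupil, which is exactly the redundancy the Fourier-space stitching exploits.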
Key References
- Zheng et al., 'Wide-field, high-resolution Fourier ptychographic microscopy', Nature Photonics 7, 739-745 (2013)
- Tian & Waller, 'Quantitative differential phase contrast imaging in an LED array microscope', Optics Express 23, 11394-11403 (2015)
Canonical Datasets
- Zheng lab FPM datasets (UCONN)
- Waller lab FPM benchmark data (Berkeley)
Image Scanning Microscopy (ISM)
Lattice Light-Sheet Microscopy
Lensless (Diffuser Camera) Imaging
Description
Lensless imaging replaces the objective lens with a thin optical element (phase diffuser or coded mask) placed directly near the sensor. Scene light produces a multiplexed caustic pattern encoding the entire scene. The forward model is y = H * x + n where H is determined by the mask's phase profile and mask-to-sensor distance. Each scene point contributes across many sensor pixels, yielding a multiplexing advantage. Reconstruction solves a large-scale inverse problem via ADMM or FISTA with total-variation or learned priors.
Principle
Lensless (diffuser-cam) imaging replaces the imaging lens with a thin diffuser or coded mask placed directly before the sensor. The sensor records a multiplexed pattern (caustic or speckle) that encodes the 3-D scene. Computational reconstruction inverts the known point-spread function of the diffuser to recover the image, enabling an extremely compact, lightweight camera suitable for miniaturized or in-vivo applications.
How to Build the System
Place a thin diffuser (ground glass, engineered phase mask, or Scotch tape) at a fixed, small distance (~1-5 mm) from a bare sensor (CMOS, e.g., Sony IMX sensor). Precisely characterize the diffuser PSF by scanning a point source across the field of view. Mount rigidly to prevent any relative motion between diffuser and sensor. For 3-D reconstruction, the depth-dependent PSF must be calibrated at multiple axial planes.
Common Reconstruction Algorithms
- ADMM (alternating direction method of multipliers) with TV regularization
- Wiener deconvolution (fast, single-step but lower quality)
- Gradient descent with learned priors (DiffuserCam, neural network prior)
- Tikhonov-regularized least squares
- Unrolled optimization networks (physics-informed deep learning)
Common Mistakes
- Inaccurate PSF calibration causing reconstruction artifacts
- Insufficient sensor dynamic range for the caustic intensity peaks
- Motion between diffuser and sensor during capture invalidating the PSF model
- Regularization too strong, over-smoothing fine details in the reconstruction
- Ignoring the depth-dependence of the PSF when imaging 3-D scenes
How to Avoid Mistakes
- Calibrate PSF carefully with a point source at the exact sample distance
- Use HDR acquisition or high-bit-depth sensors to capture full caustic range
- Rigidly bond the diffuser to the sensor; verify alignment stability
- Tune regularization weight (e.g., via L-curve or cross-validation)
- Calibrate PSF at multiple depths for 3-D scenes; use depth-varying reconstruction
Forward-Model Mismatch Cases
- The widefield fallback uses a Gaussian PSF, but lensless cameras use a coded aperture (phase mask, diffuser, or amplitude mask) that creates a highly structured, non-Gaussian PSF — the caustic pattern is fundamentally different from a Gaussian
- The lensless PSF encodes the scene through a known, shift-variant pattern — the widefield shift-invariant Gaussian blur does not capture the scene-dependent structure of the lensless measurement and produces incorrect reconstruction input
How to Correct the Mismatch
- Use the lensless operator with the calibrated PSF of the specific coded aperture (measured from a point source or computed from the mask design): y = H * x, where H is the non-Gaussian, possibly shift-variant PSF
- Reconstruct using Wiener deconvolution, ADMM with TV prior, or learned methods (FlatNet, PhlatCam) that use the correct coded-aperture PSF for the specific mask in use
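A minimal single-step Wiener inversion with a non-Gaussian PSF; the random-spot pattern and NSR value are illustrative stand-ins for a calibrated caustic PSF:

```python
import numpy as np

def wiener_deconv(y, psf, nsr=1e-2):
    """Single-step Wiener inverse for y = h * x + n (periodic convolution):
    X = H^* Y / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(X))

# Stand-in for a calibrated caustic PSF: sparse random spots (decidedly non-Gaussian)
rng = np.random.default_rng(2)
psf = np.zeros((64, 64))
psf[rng.integers(0, 64, 200), rng.integers(0, 64, 200)] = rng.random(200)
psf /= psf.sum()

x = np.zeros((64, 64)); x[20:30, 20:30] = 1.0            # simple scene
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
x_hat = wiener_deconv(y, psf, nsr=1e-4)
```

The same H would make a Gaussian-PSF reconstruction fail completely, because none of the caustic's high-frequency structure is present in a Gaussian model.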
Key References
- Antipa et al., 'DiffuserCam: lensless single-exposure 3D imaging', Optica 5, 1-9 (2018)
- Asif et al., 'FlatCam: Thin, Lensless Cameras Using Coded Aperture', IEEE TCI 3, 384-397 (2017)
Canonical Datasets
- DiffuserCam lensless mirflickr dataset (Monakhova et al.)
- PhlatCam benchmark (Boominathan et al., IEEE TPAMI 2022)
Light-Sheet Fluorescence Microscopy
Description
Light-sheet microscopy (LSFM / SPIM) illuminates the sample with a thin sheet of light perpendicular to the detection axis, providing intrinsic optical sectioning. Primary artifacts are stripe patterns caused by absorption and scattering in the illumination path, plus anisotropic PSF blur. The forward model is y = S(z) * (PSF_3d *** x) + n where S(z) models the stripe attenuation. Reconstruction involves destriping followed by optional deconvolution.
Principle
A thin sheet of laser light illuminates only the focal plane of the detection objective, providing intrinsic optical sectioning with minimal out-of-plane photobleaching. The orthogonal geometry between illumination and detection decouples sectioning from resolution. Detection is widefield, enabling fast volumetric imaging of large specimens.
How to Build the System
Arrange two orthogonal objective arms: one for the excitation sheet (cylindrical lens or digitally scanned Gaussian/Bessel beam) and one for detection (high-NA water-dipping). Mount the sample in agarose or hold in a chamber compatible with the dual-objective geometry. Use a fast sCMOS camera for detection. Stage scanning or sheet scanning acquires z-stacks. Consider diSPIM (dual-view) for isotropic resolution.
Common Reconstruction Algorithms
- Multi-view fusion (weighted averaging of complementary views)
- Multi-view deconvolution (Bayesian, joint Richardson-Lucy)
- Content-based image fusion
- Deep-learning denoising for high-speed acquisitions (CARE)
- Stripe artifact removal (wavelet-FFT filtering)
Common Mistakes
- Light sheet too thick, degrading axial resolution and sectioning
- Absorption and scattering in thick tissue causing shadow artifacts (stripes)
- Misalignment between sheet focal plane and detection focal plane
- Improper sample mounting causing drift or deformation during long acquisitions
- Ignoring refractive-index variations causing sheet deflection inside tissue
How to Avoid Mistakes
- Use Bessel or lattice light sheet for thin, uniform illumination profiles
- Pivot the light sheet or use dual-side illumination to reduce shadow artifacts
- Carefully co-align illumination and detection planes using fluorescent beads
- Use stable, low-melting-point agarose embedding and vibration-isolated stages
- Clear or match refractive index of tissue where possible; use adaptive optics
Forward-Model Mismatch Cases
- The widefield fallback processes only 2D (64,64) images, but light-sheet microscopy acquires 3D volumes (64,64,32) with intrinsic optical sectioning — the volumetric z-dimension is entirely lost
- Widefield illumination excites the entire sample volume causing out-of-focus blur, whereas the light sheet illuminates only the focal plane — the fallback forward model includes fluorescence contributions from planes that the real system never excites
How to Correct the Mismatch
- Use the lightsheet operator that processes 3D volumes with the sheet illumination profile: each z-slice is excited only by the thin (1-5 um) light sheet
- Model the sheet thickness and propagation (Gaussian or Bessel beam) explicitly; for multi-view systems, include the detection PSF from the orthogonal objective
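A toy version of the sheet-excitation forward model: each output slice is formed by a Gaussian sheet weighting along z followed by the lateral detection PSF. Defocus of the residual out-of-plane excitation is ignored, and the sigma values (in voxel units) are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lightsheet_forward(vol, sheet_sigma_z, det_sigma_xy):
    """For slice k, the Gaussian sheet centred on plane k excites the volume,
    and the excited fluorescence is imaged through the orthogonal detection PSF."""
    nz = vol.shape[0]
    out = np.empty_like(vol)
    z = np.arange(nz)
    for k in range(nz):
        sheet = np.exp(-(z - k)**2 / (2 * sheet_sigma_z**2))  # excitation profile
        excited = (vol * sheet[:, None, None]).sum(axis=0)
        out[k] = gaussian_filter(excited, det_sigma_xy)
    return out

vol = np.zeros((32, 64, 64)); vol[16, 32, 32] = 1.0   # single fluorophore
y = lightsheet_forward(vol, sheet_sigma_z=1.0, det_sigma_xy=1.5)
```

Unlike the widefield model, planes far from the sheet contribute essentially nothing, which is the optical-sectioning property the fallback cannot represent.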
Key References
- Huisken et al., 'Optical sectioning deep inside live embryos by SPIM', Science 305, 1007-1009 (2004)
- Power & Huisken, 'A guide to light-sheet fluorescence microscopy for multiscale imaging', Nature Methods 14, 360-373 (2017)
Canonical Datasets
- OpenSPIM sample datasets
- Zebrafish developmental lightsheet atlas
Low-Dose Widefield Microscopy
Description
Widefield fluorescence microscopy operated at very low illumination power or short exposure time to reduce phototoxicity and photobleaching in live specimens. Images are dominated by shot noise (Poisson) and read noise (Gaussian) with typical photon counts of 20-200 per pixel. The forward model is y = Poisson(alpha * PSF ** x)/alpha + N(0, sigma^2) where alpha is the photon conversion factor. Reconstruction requires joint denoising and deconvolution using PnP-HQS, Noise2Void, or CARE.
Principle
Identical optical path to standard widefield but operated at very low photon budgets (short exposure or attenuated excitation) to minimize phototoxicity in live cells. The acquired images are severely photon-starved, making Poisson noise the dominant degradation rather than out-of-focus blur.
How to Build the System
Use the same widefield microscope but reduce LED power to 1-5 % and/or shorten exposure to 5-20 ms. A high-QE back-illuminated sCMOS sensor (>80 % QE) is essential for capturing the limited photon signal. Install an environmental chamber for live-cell stability (37 °C, 5 % CO₂). Validate that the camera read noise floor is well below the expected signal.
Common Reconstruction Algorithms
- CARE (Content-Aware image REstoration)
- Noise2Void / Noise2Self (self-supervised denoising)
- BM3D / VST + BM3D for Poisson-Gaussian denoising
- PURE-LET (Poisson Unbiased Risk Estimator)
- Noise2Noise paired denoising networks
Common Mistakes
- Setting read-noise-dominated regime by using too-low gain or old CCD
- Training denoising networks on data with different noise statistics than test data
- Clipping near-zero intensities by incorrect camera offset subtraction
- Ignoring sCMOS pixel-dependent noise (fixed-pattern noise)
- Exceeding live-cell phototoxicity budget despite intending low-dose imaging
How to Avoid Mistakes
- Characterize camera noise model (gain, offset, variance map) before acquisition
- Train and evaluate denoising models at the same SNR and microscope settings
- Keep camera offset (dark current) calibration current and subtract properly
- Apply per-pixel gain and offset maps for sCMOS cameras
- Monitor cell health markers (morphology, division rate) to confirm non-toxic dose
Forward-Model Mismatch Cases
- The widefield fallback applies the correct blur kernel but uses a Gaussian noise model, whereas low-dose imaging is dominated by Poisson shot noise with very few photons per pixel
- Denoising algorithms trained on Gaussian noise statistics will underperform on Poisson-dominated low-dose data, producing biased estimates and residual artifacts
How to Correct the Mismatch
- Use the low-dose widefield operator that applies a Poisson-Gaussian noise model: y = Poisson(alpha * PSF ** x) / alpha + N(0, sigma^2)
- Train or select denoising algorithms that explicitly handle Poisson statistics (Anscombe transform + BM3D, or deep networks such as Noise2Void trained on matching Poisson-dominated data)
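A sketch of the stated Poisson-Gaussian forward model together with the Anscombe stabilization step; the alpha, read-noise, and flat 50-photon scene values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_dose_forward(x, alpha, sigma_read, rng):
    """y = Poisson(alpha * PSF ** x) / alpha + N(0, sigma^2): shot noise on the
    blurred image at alpha photons per intensity unit, plus camera read noise."""
    blurred = gaussian_filter(x, sigma=2.0)
    shot = rng.poisson(alpha * blurred) / alpha
    return shot + rng.normal(0.0, sigma_read, x.shape)

def anscombe(y):
    """Variance-stabilizing transform: Poisson counts -> approximately
    unit-variance Gaussian, after which Gaussian denoisers (e.g. BM3D) apply."""
    return 2.0 * np.sqrt(np.maximum(y, 0.0) + 3.0 / 8.0)

rng = np.random.default_rng(3)
x = 50.0 * np.ones((64, 64))                      # flat scene, ~50 photons/pixel
y = low_dose_forward(x, alpha=1.0, sigma_read=1.0, rng=rng)
z = anscombe(rng.poisson(x).astype(float))        # pure-Poisson stabilization check
print(y.std(), z.std())
```

After the Anscombe transform the noise standard deviation is close to 1 regardless of the local mean, which is why VST + BM3D works on Poisson-dominated frames.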
Key References
- Krull et al., 'Noise2Void - Learning Denoising from Single Noisy Images', CVPR 2019
- Weigert et al., 'Content-aware image restoration (CARE)', Nature Methods 15, 1090-1097 (2018)
Canonical Datasets
- BioSR low-SNR subset
- Planaria / Tribolium datasets (Weigert et al.)
MINFLUX Nanoscopy
PALM/STORM Single-Molecule Localization
Description
Photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve nanoscale resolution by stochastically activating sparse subsets of fluorescent molecules per frame, localizing each with sub-diffraction precision (proportional to sigma/sqrt(N) where N is detected photons), and accumulating localizations over thousands of frames. Typical localization precision is 10-30 nm. Primary challenges include overlapping emitters at high density, sample drift, and blinking statistics. Reconstruction uses Gaussian fitting (ThunderSTORM) or deep learning (DECODE).
Principle
Single-Molecule Localization Microscopy (PALM/STORM) achieves ~20 nm resolution by stochastically switching individual fluorophores between bright and dark states. In each frame, only a sparse subset of molecules emit, allowing their positions to be localized with sub-pixel precision by fitting 2-D Gaussians. Thousands of frames are accumulated and all localizations are plotted to form a super-resolution image.
How to Build the System
Use a TIRF microscope (100x 1.49 NA oil objective) with powerful laser excitation (200-500 mW at the sample, 647 nm for Alexa647 STORM or 561 nm for mEos PALM). TIRF geometry reduces background. An oxygen-scavenging buffer with thiol (MEA/BME) is critical for Alexa647 blinking. Use an EMCCD (Andor iXon 897) or fast sCMOS camera at 30-100 Hz frame rate. Acquire 10,000-50,000 frames.
Common Reconstruction Algorithms
- ThunderSTORM (ImageJ plugin, MLE/LSQ Gaussian fitting)
- ZOLA-3D (3D localization with a phase-retrieved PSF model)
- DAOSTORM (multi-emitter fitting for high density)
- Drift correction (fiducial-based or cross-correlation)
- HAWK (Haar-wavelet preprocessing for high-density data) / ANNA-PALM (deep-learning accelerated SMLM)
Common Mistakes
- Density of active emitters too high, causing overlapping PSFs and localization errors
- Insufficient photon count per localization, yielding poor precision (>30 nm)
- Sample drift during long acquisitions not corrected
- Poor blinking statistics (incomplete on-off switching) from wrong buffer conditions
- Mistaking fixed-pattern noise or autofluorescence for single molecules
How to Avoid Mistakes
- Tune activation laser to achieve sparse single-molecule density per frame
- Optimize buffer (pH, thiol concentration, oxygen scavenger) for bright blinks (>1000 photons)
- Include fiducial markers (gold beads or TetraSpeck) and apply drift correction
- Prepare fresh imaging buffer immediately before acquisition; degas thoroughly
- Apply quality filters (photon threshold, localization precision, PSF shape) in analysis
Forward-Model Mismatch Cases
- The widefield fallback produces a blurred intensity image, but PALM/STORM generates sparse single-molecule localizations — the correct forward model produces a list of (x,y,photons) events, not a convolved image
- Using a continuous PSF blur instead of the discrete point-emitter model (y = sum_i(n_i * PSF(r - r_i) + background)) means single-molecule fitting algorithms will receive incorrect input and localization precision estimates will be meaningless
How to Correct the Mismatch
- Use the PALM/STORM operator that simulates stochastic single-molecule activation: sparse emitters with Poisson photon counts, individually convolved with the PSF, on a per-frame basis
- Reconstruct using single-molecule localization (Gaussian fitting, MLE) on the correct sparse-emitter frames; the forward model must match the blinking kinetics and photon statistics of the fluorophore
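A minimal sketch of this sparse-emitter forward model, with a Gaussian standing in for the true PSF (function and parameter names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def render_smlm_frame(emitters, shape=(64, 64), sigma_px=1.3, bg=2.0):
    """Render one raw frame from a list of (x, y, mean_photons) emitters.

    Each active emitter contributes a Poisson photon count spread over a
    Gaussian PSF; a uniform Poisson background is added. The output is
    y = sum_i n_i * PSF(r - r_i) + background, not a convolved image.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    frame = np.zeros(shape)
    for x, y, mean_photons in emitters:
        n = rng.poisson(mean_photons)                  # stochastic photon budget
        psf = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma_px ** 2))
        frame += n * psf / psf.sum()                   # unit-mass PSF times photons
    return frame + rng.poisson(bg, size=shape)         # camera background

frame = render_smlm_frame([(20.0, 30.0, 1500), (45.5, 10.2, 800)])
```

Localization software then fits each isolated spot in such frames; feeding it a convolved widefield image instead would make its precision estimates meaningless.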
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Betzig et al., 'Imaging intracellular fluorescent proteins at nanometer resolution', Science 313, 1642-1645 (2006)
- Rust et al., 'Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)', Nature Methods 3, 793-796 (2006)
- Speiser et al., 'Deep learning enables fast and dense single-molecule localization (DECODE)', Nature Methods 18, 1082-1090 (2021)
Canonical Datasets
- SMLM Challenge 2016 (Sage et al., Nature Methods 2019)
- ThunderSTORM tutorial datasets
Phase Contrast Microscopy
Phase Contrast Microscopy
Polarization Microscopy
Polarization microscopy measures anisotropic optical properties by analyzing the polarization state of light passing through the sample. In Mueller matrix imaging, the sample is illuminated with known polarization states and the output is analyzed, yielding a 4x4 Mueller matrix at each pixel that encodes birefringence, optical activity, and depolarization. The LC-PolScope uses liquid-crystal retarders for rapid modulation. Reconstruction involves solving for the Mueller elements and applying the Lu-Chipman decomposition to obtain physically meaningful parameters.
Polarization Microscopy
Description
Polarization microscopy measures anisotropic optical properties by analyzing the polarization state of light passing through the sample. In Mueller matrix imaging, the sample is illuminated with known polarization states and the output is analyzed, yielding a 4x4 Mueller matrix at each pixel that encodes birefringence, optical activity, and depolarization. The LC-PolScope uses liquid-crystal retarders for rapid modulation. Reconstruction involves solving for the Mueller elements and applying the Lu-Chipman decomposition to obtain physically meaningful parameters.
Principle
Polarization microscopy exploits the birefringence (orientation-dependent refractive index) of ordered biological structures such as collagen fibers, spindle microtubules, and crystalline inclusions. By analyzing the polarization state of transmitted or reflected light, structural anisotropy can be measured without fluorescent labeling. Quantitative techniques (LC-PolScope) measure both retardance magnitude and slow-axis orientation.
How to Build the System
Mount a liquid-crystal universal compensator (LC-PolScope by OpenPolScope, or Abrio system) on a standard brightfield microscope. Use strain-free optics and rotate the analyzer while keeping the polarizer fixed (or use a rotating stage). For quantitative imaging, acquire 4-5 images at different compensator settings. A monochromatic light source (546 nm green filter) minimizes chromatic effects.
Common Reconstruction Algorithms
- Mueller matrix decomposition (full polarimetric imaging)
- Jones calculus for coherent polarization analysis
- Background retardance subtraction
- Stokes parameter reconstruction from intensity measurements
- Deep-learning retardance estimation from fewer raw frames
Common Mistakes
- Strain birefringence in optical components contaminating the measurement
- Incorrect compensator calibration producing quantitative retardance errors
- Not accounting for sample tilt introducing apparent birefringence artifacts
- Using polychromatic light causing wavelength-dependent retardance errors
- Ignoring depolarization effects in thick or scattering samples
How to Avoid Mistakes
- Use strain-free objectives and verify zero retardance on a blank field
- Calibrate the liquid-crystal compensator at each session using a known retarder
- Ensure sample is flat and perpendicular to the optical axis
- Use narrow-band illumination or measure dispersion for wavelength correction
- For thick samples, consider Mueller matrix imaging to capture depolarization
Forward-Model Mismatch Cases
- The widefield fallback treats light as a scalar intensity, but polarization microscopy measures the full Mueller matrix or Stokes parameters — the vector nature of light (birefringence, dichroism, depolarization) is completely lost
- The fallback produces a single-channel image, but the correct operator generates 4+ channels (Stokes S0-S3 or multiple polarizer/analyzer orientations), each encoding different polarization properties of the sample
How to Correct the Mismatch
- Use the polarization operator that generates images at multiple polarizer/analyzer angles (0, 45, 90, 135 degrees), encoding the sample's Jones or Mueller matrix at each pixel
- Reconstruct birefringence retardance and orientation from the polarization-resolved measurements using Mueller calculus or Jones matrix decomposition
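As a small worked example of recovering polarization parameters from intensity measurements, the sketch below computes the linear Stokes components from the four analyzer angles mentioned above (function name and conventions are assumptions; S3 would require an additional quarter-wave measurement):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities at analyzer angles 0/45/90/135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90                           # horizontal vs vertical component
    s2 = i45 - i135                         # +45 vs -45 component
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / s0  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)         # axis of linear polarization (rad)
    return s0, s1, s2, dolp, aolp
```

For fully horizontally polarized light (i0=1, i45=i135=0.5, i90=0) this yields dolp=1 and aolp=0, as expected.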
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Mehta et al., 'Quantitative polarized light microscopy using the LC-PolScope', Live Cell Imaging: A Laboratory Manual, CSHL Press (2010)
- Lu & Chipman, 'Interpretation of Mueller matrices based on polar decomposition', J. Opt. Soc. Am. A 13, 1106-1113 (1996)
Canonical Datasets
- OpenPolScope calibration data
- Collagen SHG/polarization histopathology datasets
Second Harmonic Generation (SHG) Microscopy
Second Harmonic Generation (SHG) Microscopy
Spinning Disk Confocal Microscopy
Spinning Disk Confocal Microscopy
STED Microscopy
Stimulated emission depletion (STED) microscopy breaks the diffraction limit by overlaying the excitation focus with a donut-shaped depletion beam that forces fluorophores at the periphery back to the ground state via stimulated emission, effectively shrinking the fluorescent spot to 50 nm or below. The effective PSF width scales as d ~ lambda/(2*NA*sqrt(1 + I/I_s)) where I is the depletion intensity and I_s is the saturation intensity. Primary challenges include high depletion laser power causing photobleaching, and the photon-limited signal from the confined volume.
STED Microscopy
Description
Stimulated emission depletion (STED) microscopy breaks the diffraction limit by overlaying the excitation focus with a donut-shaped depletion beam that forces fluorophores at the periphery back to the ground state via stimulated emission, effectively shrinking the fluorescent spot to 50 nm or below. The effective PSF width scales as d ~ lambda/(2*NA*sqrt(1 + I/I_s)) where I is the depletion intensity and I_s is the saturation intensity. Primary challenges include high depletion laser power causing photobleaching, and the photon-limited signal from the confined volume.
Principle
Stimulated Emission Depletion microscopy breaks the diffraction limit by using a donut-shaped depletion beam to force fluorophores at the periphery of the excitation spot back to the ground state via stimulated emission. Only fluorophores at the very center of the donut emit spontaneously, shrinking the effective PSF to 30-70 nm lateral resolution depending on depletion power.
How to Build the System
Combine an excitation laser (e.g., 640 nm pulsed) with a co-aligned depletion laser (775 nm pulsed, ~1 ns) that passes through a vortex phase plate to create the donut. Use a high-NA objective (100x 1.4 NA oil). Time-gate detection (1-6 ns after excitation pulse) to reject depletion photon leakage. Single-photon counting detectors (APDs or hybrid PMTs) are essential. Align the donut null precisely at the excitation center.
Common Reconstruction Algorithms
- Richardson-Lucy deconvolution with STED PSF
- Wiener deconvolution with known STED PSF
- Deep-learning restoration (content-aware STED denoising)
- Linear unmixing for multi-color STED
- Time-gated STED (g-STED) background subtraction
Common Mistakes
- Misaligned donut null causing asymmetric PSF and resolution loss
- Excessive depletion power causing photobleaching of organic dyes
- Depletion laser leaking into fluorescence detection channel
- Insufficient time-gating, recording stimulated emission as signal
- Using fluorophores with poor STED compatibility (low stimulated-emission cross-section)
How to Avoid Mistakes
- Regularly check and optimize donut alignment using gold nanoparticle scattering
- Use STED-optimized dyes (ATTO647N, SiR, Abberior STAR) and minimize power
- Install proper spectral filters and use time-gating to reject depletion photons
- Apply 1-6 ns detection gate synchronized with the pulsed excitation
- Choose fluorophores specifically designed for STED with high photostability
Forward-Model Mismatch Cases
- The widefield fallback uses a diffraction-limited PSF (sigma=2.0, ~250 nm resolution), but STED achieves 30-70 nm resolution by shrinking the effective PSF with the depletion donut — the fallback is 4-8x wider
- The STED effective PSF depends on depletion beam power (d_eff = d_confocal / sqrt(1 + I_STED/I_sat)), making it fundamentally different from any fixed Gaussian — the fallback cannot model power-dependent resolution
How to Correct the Mismatch
- Use the STED operator with the effective PSF that accounts for depletion beam intensity: PSF_eff has FWHM = lambda/(2*NA*sqrt(1 + I/I_sat)), typically 30-70 nm
- Include the donut-shaped depletion profile and saturation intensity in the forward model; deconvolution with the correct sub-diffraction STED PSF recovers true super-resolution information
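The power-dependent resolution formula above translates directly to a one-line function (function name and default values are illustrative):

```python
import math

def sted_fwhm_nm(wavelength_nm=647.0, na=1.4, i_ratio=0.0):
    """Effective STED PSF FWHM: d = lambda / (2 * NA * sqrt(1 + I/I_sat)).

    i_ratio is the depletion-to-saturation intensity ratio I/I_sat;
    i_ratio = 0 recovers the diffraction limit lambda/(2*NA).
    """
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + i_ratio))
```

At I/I_sat = 0 this gives ~231 nm for 647 nm excitation through 1.4 NA; at I/I_sat = 50 it drops to ~32 nm, matching the 30-70 nm range quoted above.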
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Hell & Wichmann, 'Breaking the diffraction resolution limit by stimulated emission', Optics Letters 19, 780-782 (1994)
- Vicidomini et al., 'STED nanoscopy', Annual Review of Biophysics 47, 377-404 (2018)
Canonical Datasets
- BioSR STED paired dataset (Zhang et al., Nature Methods 2023)
- Abberior STED application note sample images
Structured Illumination Microscopy
Structured illumination microscopy (SIM) achieves ~2x lateral resolution improvement by illuminating the sample with sinusoidal patterns at multiple orientations and phases. Frequency mixing between the illumination pattern and sample structure shifts high-frequency information into the microscope passband. Reconstruction separates and reassembles frequency components via Wiener-SIM or deep-learning SIM. The forward model is y_k = PSF ** (I_k * x) + n for each pattern k.
Structured Illumination Microscopy
Description
Structured illumination microscopy (SIM) achieves ~2x lateral resolution improvement by illuminating the sample with sinusoidal patterns at multiple orientations and phases. Frequency mixing between the illumination pattern and sample structure shifts high-frequency information into the microscope passband. Reconstruction separates and reassembles frequency components via Wiener-SIM or deep-learning SIM. The forward model is y_k = PSF ** (I_k * x) + n for each pattern k.
Principle
Structured Illumination Microscopy projects a known sinusoidal pattern onto the specimen, shifting high-frequency spatial information into the observable passband via Moiré interference. Multiple images (typically 9-15) are acquired at different pattern orientations and phases, then computationally recombined in Fourier space to achieve ~2× lateral resolution improvement beyond the diffraction limit.
How to Build the System
Install a SIM-capable microscope (Nikon N-SIM, Zeiss Elyra 7, or custom with SLM/DMD). Use a high-NA objective (100x 1.49 NA TIRF) for maximum frequency extension. The illumination grating (SLM or fiber interference) generates the sinusoidal pattern. Acquire 3 orientations × 3-5 phases. A fast sCMOS camera captures all raw frames in ~100-500 ms for 2D-SIM. Careful alignment of the pattern contrast is critical.
Common Reconstruction Algorithms
- Gustafsson/Heintzmann frequency-domain SIM reconstruction
- Open-source fairSIM (ImageJ plugin)
- Wiener-filtered order separation and recombination
- Deep-learning SIM (ML-SIM, reconstruction from fewer frames)
- Hessian-SIM for live-cell with reduced artifacts
Common Mistakes
- Insufficient pattern contrast causing weak Moiré fringes and honeycomb artifacts
- Misaligned illumination orders producing stripe artifacts in the reconstruction
- Over-processing (too aggressive Wiener parameter) creating ringing artifacts
- Using objectives with insufficient NA for the desired resolution gain
- Photobleaching between pattern acquisitions causing intensity inconsistency
How to Avoid Mistakes
- Verify pattern contrast >0.5 on a thin uniform fluorescent layer before experiments
- Calibrate illumination pattern positions/angles using SIMcheck (ImageJ plugin)
- Tune the Wiener parameter conservatively; use SIMcheck to assess reconstruction quality
- Use 1.49 NA objectives for maximum resolution; 1.40 NA limits SIM performance
- Minimize total acquisition time; use fast cameras and short exposures
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) blurred image, but SIM requires 9-15 raw frames (3 orientations x 3-5 phases) with structured illumination patterns — output shape (64,64,9) vs (64,64)
- Without the sinusoidal illumination pattern encoding, the high-frequency information that SIM moves into the passband via Moiré interference is completely absent — no super-resolution is possible
How to Correct the Mismatch
- Use the SIM operator that generates multiple pattern-modulated images: y_k = PSF ** ((1 + m*cos(k_i*r + phi_j)) * x) for each orientation i and phase j; the pattern multiplies the sample before blurring, consistent with the forward model y_k = PSF ** (I_k * x)
- Reconstruct using Fourier-space order separation and recombination (Gustafsson method) or deep-learning SIM, which require the correct multi-frame structured illumination forward model
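A minimal sketch of the multi-frame SIM forward model y_k = PSF ** (I_k * x), using a Gaussian stand-in PSF (function and parameter names are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sim_raw_frames(x, m=0.8, k_mag=0.2, n_angles=3, n_phases=3, sigma=2.0):
    """Generate SIM raw frames: blur the pattern-modulated sample for each
    orientation/phase, with I_k = 1 + m*cos(2*pi*k_i.r + phi_j)."""
    h, w = x.shape
    yy, xx = np.mgrid[0:h, 0:w]
    frames = []
    for a in range(n_angles):
        theta = np.pi * a / n_angles                    # pattern orientation
        kx, ky = k_mag * np.cos(theta), k_mag * np.sin(theta)
        for p in range(n_phases):
            phi = 2 * np.pi * p / n_phases              # phase step
            pattern = 1.0 + m * np.cos(2 * np.pi * (kx * xx + ky * yy) + phi)
            frames.append(gaussian_filter(pattern * x, sigma))
    return np.stack(frames, axis=-1)                    # (h, w, n_angles*n_phases)

x = np.zeros((64, 64)); x[32, 32] = 1.0
frames = sim_raw_frames(x)                              # 9 raw frames, not 1
```

Averaging the phase steps of one orientation cancels the sinusoid exactly, recovering the plain widefield image; the super-resolution information lives entirely in the phase-dependent differences.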
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Gustafsson, 'Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy', J. Microsc. 198, 82-87 (2000)
- Müller et al., 'Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ (fairSIM)', Nature Communications 7, 10980 (2016)
Canonical Datasets
- BioSR SIM paired dataset (Zhang et al., Nature Methods 2023)
- fairSIM test datasets (Hagen et al.)
Three-Photon Microscopy
Three-Photon Microscopy
TIRF Microscopy
Total internal reflection fluorescence (TIRF) microscopy selectively excites fluorophores within ~100-200 nm of the coverslip surface using the evanescent field generated when excitation light undergoes total internal reflection at the glass-sample interface. This provides exceptional axial selectivity for imaging membrane-associated events such as vesicle fusion and focal adhesions. The lateral image follows standard widefield PSF convolution but with near-zero out-of-focus background. Primary degradations include non-uniform evanescent field and interference fringes from coherent illumination.
TIRF Microscopy
Description
Total internal reflection fluorescence (TIRF) microscopy selectively excites fluorophores within ~100-200 nm of the coverslip surface using the evanescent field generated when excitation light undergoes total internal reflection at the glass-sample interface. This provides exceptional axial selectivity for imaging membrane-associated events such as vesicle fusion and focal adhesions. The lateral image follows standard widefield PSF convolution but with near-zero out-of-focus background. Primary degradations include non-uniform evanescent field and interference fringes from coherent illumination.
Principle
Total Internal Reflection Fluorescence microscopy creates an evanescent wave that penetrates only ~100-200 nm into the sample when the excitation beam is totally internally reflected at the glass-sample interface. This provides excellent optical sectioning of membrane-proximal events (vesicle fusion, protein dynamics at the plasma membrane) with very low background.
How to Build the System
Use a TIRF-capable objective (60-100x, 1.49 NA oil) on an inverted microscope. Launch the laser at the critical angle through the objective periphery (objective-type TIRF) or through a prism (prism-type TIRF). Verify total internal reflection by observing the evanescent field depth with a calibration sample. Cells must be plated on clean, high-RI coverslips (#1.5H, 170 μm).
Common Reconstruction Algorithms
- Single-particle tracking (SPT) algorithms
- Multi-angle TIRF for axial sectioning (variable penetration depth)
- Denoising (Gaussian filtering, wavelet, or deep-learning)
- Photobleaching step analysis for molecular counting
- Temporal median filtering for background subtraction
Common Mistakes
- Laser angle not precisely at TIR, partially exciting bulk fluorescence
- Dirty coverslips causing scattering and destroying evanescent field uniformity
- Cells not well-adhered to the coverslip surface, out of evanescent field range
- Using objectives with NA < 1.45, insufficient for TIR at aqueous interfaces
- Evanescent field depth not calibrated, making quantitative axial analysis unreliable
How to Avoid Mistakes
- Fine-tune the TIR angle while observing a known sample; verify exponential depth decay
- Clean coverslips rigorously (plasma cleaning or acid wash) before plating cells
- Use poly-L-lysine or fibronectin coating to ensure cells adhere to the coverslip
- Use 1.49 NA objectives; 1.45 NA is the minimum for aqueous TIR
- Calibrate evanescent field depth using fluorescent beads at known axial positions
Forward-Model Mismatch Cases
- The widefield fallback illuminates the entire sample depth, but TIRF uses an evanescent wave that penetrates only ~100-200 nm from the coverslip — the fallback includes fluorescence from hundreds of nanometers deeper, adding massive background
- The exponential axial intensity decay of the evanescent field (I(z) = I_0 * exp(-z/d), d~100 nm) is not modeled by the widefield fallback — quantitative axial information (membrane proximity) is lost
How to Correct the Mismatch
- Use the TIRF operator that models evanescent-wave excitation: only fluorophores within ~200 nm of the glass-sample interface contribute signal, with exponentially decaying excitation intensity
- Include the penetration depth d = lambda/(4*pi*sqrt(n1^2*sin^2(theta) - n2^2)) in the forward model; for multi-angle TIRF, model the depth-dependent excitation for each incidence angle
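The penetration-depth formula translates directly to code (illustrative function name; defaults assume glass/oil n1=1.518 against an aqueous sample n2=1.33):

```python
import math

def tirf_penetration_depth_nm(wavelength_nm=488.0, theta_deg=70.0, n1=1.518, n2=1.33):
    """Evanescent-field 1/e depth: d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))."""
    s = n1 ** 2 * math.sin(math.radians(theta_deg)) ** 2 - n2 ** 2
    if s <= 0:
        raise ValueError("theta is below the critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s))
```

At 488 nm and 70 deg incidence this gives roughly 75 nm, consistent with the ~100 nm depth quoted above; below the critical angle (asin(1.33/1.518), about 61 deg) there is no evanescent field at all.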
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Axelrod, 'Total internal reflection fluorescence microscopy in cell biology', Traffic 2, 764-774 (2001)
Canonical Datasets
- Cell Tracking Challenge TIRF sequences
- FPbase TIRF imaging examples
Two-Photon / Multiphoton Microscopy
Two-photon microscopy uses ultrashort pulsed near-infrared laser light (typically 700-1000 nm) to excite fluorophores via simultaneous absorption of two photons, providing intrinsic optical sectioning because excitation only occurs at the focal volume where photon density is sufficiently high. The longer excitation wavelength enables imaging depths of 500-1000 um in scattering tissue (e.g., brain), making it the standard for in vivo neuroscience. The point-spread function is effectively the square of the excitation PSF. Primary degradations include scattering-induced signal loss with depth and wavefront aberrations from tissue inhomogeneity.
Two-Photon / Multiphoton Microscopy
Description
Two-photon microscopy uses ultrashort pulsed near-infrared laser light (typically 700-1000 nm) to excite fluorophores via simultaneous absorption of two photons, providing intrinsic optical sectioning because excitation only occurs at the focal volume where photon density is sufficiently high. The longer excitation wavelength enables imaging depths of 500-1000 um in scattering tissue (e.g., brain), making it the standard for in vivo neuroscience. The point-spread function is effectively the square of the excitation PSF. Primary degradations include scattering-induced signal loss with depth and wavefront aberrations from tissue inhomogeneity.
Principle
Two-photon excitation uses a pulsed near-infrared laser so that two photons are absorbed quasi-simultaneously by a fluorophore, exciting it as a single photon of half the wavelength would. Because absorption depends on the square of intensity, fluorescence is generated only at the tight focus, providing intrinsic optical sectioning without a pinhole. Deep tissue penetration (up to ~1 mm) is achieved due to reduced scattering at NIR wavelengths.
How to Build the System
Install a mode-locked Ti:Sapphire laser (680-1080 nm, ~100 fs pulses, 80 MHz, Coherent Chameleon or Spectra-Physics InSight) on a laser-scanning microscope. Use a high-NA water-dipping objective (25x 1.05 NA or 20x 1.0 NA) for deep imaging. Non-descanned detectors (GaAsP PMTs) collect scattered fluorescence close to the objective for maximum efficiency. Add a Pockels cell for fast power modulation.
Common Reconstruction Algorithms
- Adaptive background subtraction for in-depth imaging
- Motion correction and image registration for in-vivo data
- Suite2p / CaImAn (calcium imaging segmentation and trace extraction)
- Deep-learning denoising (DeepInterpolation, Noise2Void)
- Attenuation compensation (exponential depth correction)
Common Mistakes
- Excessive laser power causing photodamage and heating deep in tissue
- Pulse dispersion not pre-compensated (missing pre-chirp), broadening pulses at the focus and reducing two-photon efficiency
- Crosstalk between emission channels when using multiple fluorophores
- Brain motion artifacts in in-vivo imaging not corrected
- Imaging too deep without correcting for signal attenuation with depth
How to Avoid Mistakes
- Titrate laser power to minimum effective level; monitor for tissue damage signs
- Use a prism-pair or grating pre-chirp compressor to maintain short pulses at the focus
- Select well-separated emission spectra and use appropriate dichroics and filters
- Apply real-time or post-hoc motion correction algorithms (rigid or non-rigid)
- Use adaptive optics or longer-wavelength excitation (three-photon) for deep tissue
Forward-Model Mismatch Cases
- The widefield fallback uses a linear Gaussian PSF, but two-photon excitation depends on intensity squared (I^2), producing a much tighter effective PSF — the fallback PSF is 40-60% wider than the true two-photon PSF
- The widefield model applies uniform illumination, but two-photon intrinsically provides optical sectioning (only the focal volume has sufficient intensity for I^2 absorption) — the out-of-focus background model is fundamentally wrong
How to Correct the Mismatch
- Use the two-photon operator with the squared PSF: effective_PSF = PSF_excitation^2, which is sqrt(2) (~1.4x) narrower than the excitation PSF at the same wavelength
- Model the nonlinear excitation correctly; for deep tissue, include scattering-induced PSF broadening and signal attenuation with depth
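The sqrt(2) narrowing from squaring the excitation PSF can be verified numerically with a 1-D Gaussian sketch (names are illustrative):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
sigma = 2.0
psf_1p = np.exp(-x ** 2 / (2 * sigma ** 2))   # single-photon excitation profile

def fwhm(profile, x):
    """Full width at half maximum, measured on the sampling grid."""
    above = x[profile >= 0.5 * profile.max()]
    return above[-1] - above[0]

# Squaring the Gaussian halves the variance in the exponent: sigma -> sigma/sqrt(2)
ratio = fwhm(psf_1p, x) / fwhm(psf_1p ** 2, x)
```

The ratio comes out near 1.414: the two-photon effective PSF is sqrt(2) narrower than the excitation PSF at the same wavelength (in absolute terms the longer NIR wavelength partially offsets this gain).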
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Denk et al., 'Two-photon laser scanning fluorescence microscopy', Science 248, 73-76 (1990)
- Helmchen & Denk, 'Deep tissue two-photon microscopy', Nature Methods 2, 932-940 (2005)
Canonical Datasets
- Allen Brain Observatory two-photon calcium imaging
- Stringer et al. (2019) mouse V1 two-photon dataset
Widefield Fluorescence Microscopy
Standard widefield epi-fluorescence microscopy where the entire field of view is illuminated simultaneously and the image is formed by convolution of the specimen fluorescence distribution with the system point spread function (PSF). Out-of-focus blur from planes above and below the focal plane is the primary degradation. The forward model is y = PSF ** x + n, where ** denotes convolution and n is mixed Poisson-Gaussian noise. Deconvolution via Richardson-Lucy or learned priors (CARE) restores resolution toward the diffraction limit.
Widefield Fluorescence Microscopy
Description
Standard widefield epi-fluorescence microscopy where the entire field of view is illuminated simultaneously and the image is formed by convolution of the specimen fluorescence distribution with the system point spread function (PSF). Out-of-focus blur from planes above and below the focal plane is the primary degradation. The forward model is y = PSF ** x + n, where ** denotes convolution and n is mixed Poisson-Gaussian noise. Deconvolution via Richardson-Lucy or learned priors (CARE) restores resolution toward the diffraction limit.
Principle
The entire specimen is illuminated uniformly and fluorescence from all planes is collected simultaneously. The image is the convolution of the 3-D fluorescence distribution with the microscope point-spread function (PSF), dominated by out-of-focus blur from planes above and below the focal plane.
How to Build the System
Mount an infinity-corrected high-NA objective (≥1.3 NA oil) on an inverted body (Nikon Ti2 or Zeiss Observer). Install a multi-band LED engine (e.g., Lumencor SPECTRA X) coupled through a liquid light guide. Select matched excitation/dichroic/emission filter sets. Focus Köhler illumination for flat-field. Attach an sCMOS camera (Hamamatsu Flash4 or Photometrics Prime BSI) at the side port. Calibrate pixel size with a stage micrometer.
Common Reconstruction Algorithms
- Richardson-Lucy deconvolution
- Wiener filtering
- CARE (Content-Aware image REstoration) deep-learning deconvolution
- Total-variation regularized deconvolution
- Blind deconvolution (PSF estimation + image update)
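The Richardson-Lucy iteration listed above is compact enough to sketch for the forward model y = PSF ** x + n, assuming a symmetric Gaussian PSF so the forward and adjoint blurs coincide (function name and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(y, sigma=2.0, n_iter=50, eps=1e-12):
    """Multiplicative RL update: x <- x * blur(y / blur(x)).

    Nonnegativity is preserved automatically; over-iterating amplifies
    noise, so stop early or monitor the residual in practice.
    """
    x = np.full_like(y, y.mean())            # flat, positive initialization
    for _ in range(n_iter):
        blurred = gaussian_filter(x, sigma)
        x = x * gaussian_filter(y / (blurred + eps), sigma)
    return x

truth = np.zeros((32, 32)); truth[16, 16] = 100.0
y = gaussian_filter(truth, 2.0)              # simulated blurred measurement
x_hat = richardson_lucy(y)                   # partially restores the point
```

After a few tens of iterations the estimate's peak rises well above the blurred measurement's peak; with noisy data the same behavior eventually amplifies noise, which is why the "early stopping" advice below matters.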
Common Mistakes
- Using an incorrect or measured PSF with wrong refractive-index setting
- Ignoring flatfield non-uniformity, leading to intensity shading
- Over-iterating Richardson-Lucy causing noise amplification
- Mismatched immersion medium vs. coverslip thickness causing spherical aberration
- Not correcting for photobleaching across a time-lapse series
How to Avoid Mistakes
- Measure the PSF with sub-diffraction beads at the same coverslip/medium as the sample
- Acquire and apply a flatfield correction image before deconvolution
- Use regularization or early stopping (monitor residual) in iterative deconvolution
- Match immersion oil RI to the coverslip and mounting medium specifications
- Normalize intensity per frame or use photobleaching-corrected models
Forward-Model Mismatch Cases
- No forward-model mismatch: the widefield Gaussian blur IS the correct operator for this modality (sigma=2.0 PSF convolution)
- Minor mismatch may arise if the actual microscope PSF differs from the default Gaussian (e.g., measured PSF with aberrations)
How to Correct the Mismatch
- The default widefield operator is already correct; no correction needed
- For higher fidelity, replace the Gaussian PSF with a measured or Born & Wolf PSF model matching the actual objective NA and wavelength
Experimental Setup — Signal Chain
Experimental Setup — Details
Key References
- Richardson, 'Bayesian-based iterative method of image restoration', J. Opt. Soc. Am. 62, 55-59 (1972)
- Weigert et al., 'Content-aware image restoration (CARE)', Nature Methods 15, 1090-1097 (2018)
Canonical Datasets
- BioSR (Zhang et al., Nature Methods 2023)
- Hagen et al. widefield deconvolution benchmark