Light Field Imaging
Light field imaging captures the full 4D radiance function L(x,y,u,v) describing both spatial position (x,y) and angular direction (u,v) of light rays. A microlens array placed before the sensor captures multiple sub-aperture views simultaneously, enabling post-capture refocusing, depth estimation, and perspective shifts. Each microlens images the objective's exit pupil, trading spatial resolution for angular resolution. The 4D light field can be processed with shift-and-sum for refocusing, disparity estimation for depth, or epipolar-plane image (EPI) analysis. Primary challenges include the inherent spatial-angular resolution tradeoff and microlens aberrations.
- Forward model: plenoptic sampling
- Noise model: Gaussian
- Baseline reconstruction: shift-and-sum
- Sensor: CMOS_WITH_MICROLENS
Forward-Model Signal Chain
Each primitive represents a physical operation in the measurement process. Arrows show signal flow left to right.
Π(micro-lens) → D(g, η₁)
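As a minimal numerical sketch of this chain (all array shapes and parameter values are illustrative), the toy operator below implements Π as an interleaving of a 4D light field into a 2D lenslet image, and D as gain g plus additive Gaussian noise η₁ at a target SNR:

```python
import numpy as np

def project_mla(lf):
    """Π(micro-lens): interleave a 4D light field L[u, v, x, y]
    into a 2D lenslet image, one (u, v) pixel block per microlens."""
    U, V, X, Y = lf.shape
    sensor = np.zeros((X * U, Y * V))
    for u in range(U):
        for v in range(V):
            sensor[u::U, v::V] = lf[u, v]
    return sensor

def detect(sensor, g=1.0, snr_db=30.0, rng=np.random.default_rng(0)):
    """D(g, η₁): apply detector gain and additive Gaussian noise at snr_db."""
    signal = g * sensor
    noise_std = signal.std() / (10 ** (snr_db / 20))
    return signal + rng.normal(0.0, noise_std, signal.shape)

lf = np.random.default_rng(1).random((5, 5, 16, 16))  # toy 4D light field
meas = detect(project_mla(lf))                        # simulated measurement
print(meas.shape)  # (80, 80)
```

Decoding reverses the interleaving: the (u, v) sub-aperture view sits at `sensor[u::U, v::V]`.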
Benchmark Variants & Leaderboards
Light Field
Standard Leaderboard (Top 10)
| # | Method | Score | PSNR (dB) | SSIM | Trust | Source |
|---|---|---|---|---|---|---|
| 🥇 | DistgSSR | 0.816 | 35.5 | 0.948 | ✓ Certified | Wang et al., CVPR 2022 |
| 🥈 | LFNet | 0.758 | 33.0 | 0.915 | ✓ Certified | Wang et al., IEEE TPAMI 2020 |
| 🥉 | PnP-LF | 0.635 | 28.5 | 0.820 | ✓ Certified | PnP-ADMM with angular prior |
| 4 | Shift-and-Sum | 0.503 | 24.5 | 0.690 | ✓ Certified | Ng et al., Stanford Tech Report 2005 |
Mismatch Parameters (3)
| Name | Symbol | Description | Nominal | Perturbed |
|---|---|---|---|---|
| microlens_pitch | Δp | Micro-lens pitch error (μm) | 0 | 0.5 |
| main_lens_f | Δf | Main lens focal length error (mm) | 0 | 0.1 |
| vignetting | Δv | Vignetting error (%) | 0 | 3.0 |
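A back-of-envelope check of why the sub-micrometer pitch error matters: assuming hypothetical values of 1.25 µm sensor pixels and 600 microlenses across the sensor, the tabulated Δp = 0.5 µm per lens accumulates into a large decoding misalignment at the sensor edge:

```python
# Cumulative decoding misalignment from a micro-lens pitch error Δp.
# Pixel size and lens count are illustrative assumptions, not spec values.
pitch_err_um = 0.5   # Δp, perturbed value from the table
n_lenses = 600       # microlenses across the sensor (assumed)
pixel_um = 1.25      # sensor pixel pitch (assumed)
drift_px = pitch_err_um * n_lenses / pixel_um
print(f"edge-of-sensor drift: {drift_px:.0f} pixels")  # 240 pixels
```

This is why a fixed per-lens decoding grid fails without calibration: the error grows linearly across the array.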
Reconstruction Triad Diagnostics
The three diagnostic gates (G1, G2, G3) characterize how reconstruction quality degrades under different error sources. Each bar shows the relative attribution.
- Model: plenoptic sampling; mismatch modes: microlens crosstalk, vignetting, depth-range limitation, angular aliasing
- Noise: Gaussian; typical SNR: 20.0–40.0 dB
- Requires: microlens calibration, pixel-to-ray mapping, vignetting correction, white balance
Modality Deep Dive
Principle
Light-field imaging captures both the spatial position and direction of light rays in a scene, recording the 4D light field L(x,y,u,v), where (x,y) parameterize spatial position on the image plane and (u,v) parameterize angular direction (equivalently, position in the main-lens aperture). This enables computational refocusing, depth estimation, and novel-viewpoint synthesis from a single capture. A microlens array placed just in front of the sensor trades spatial resolution for angular resolution.
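Under an ideal, axis-aligned MLA, sub-aperture views can be read straight out of the lenslet image by taking the same pixel offset under every microlens. A sketch with an assumed 5×5 angular sampling:

```python
import numpy as np

def subaperture_view(lenslet, u, v, U=5, V=5):
    """Extract the (u, v) sub-aperture image from an ideal lenslet image
    in which each microlens covers a U x V block of sensor pixels."""
    return lenslet[u::U, v::V]

lenslet = np.arange(80 * 80, dtype=float).reshape(80, 80)
view = subaperture_view(lenslet, 2, 3)
print(view.shape)  # (16, 16)
```

Real decoders must first resample onto this ideal grid using the calibrated microlens centers; the strided slice only works after that correction.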
How to Build the System
Place a microlens array (MLA) at the image plane of the main lens, one microlens focal length in front of the image sensor, so that each microlens images the main-lens exit pupil and records the angular distribution of light arriving at its spatial position (Lytro-style plenoptic camera). Alternative: use a synchronized camera array (e.g., 4×4 or 8×8 cameras) for higher angular and spatial resolution. Calibrate MLA alignment, microlens pitch, and main-lens parameters.
Common Reconstruction Algorithms
- Shift-and-sum refocusing (synthetic aperture)
- Depth estimation from disparity between sub-aperture images
- Fourier slice theorem for light-field refocusing
- Light-field super-resolution (recovering spatial resolution lost to MLA)
- Deep-learning view synthesis (light field reconstruction from sparse views)
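The first of these, shift-and-sum refocusing, can be sketched in a few lines: shift each sub-aperture view in proportion to its angular offset from the center view, then average. Integer-pixel shifts via `np.roll` keep the sketch dependency-free; real pipelines use sub-pixel interpolation:

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-sum refocusing of a 4D light field L[u, v, x, y]:
    shift each sub-aperture view by slope * (angular offset from the
    center view), then average over all views."""
    U, V, X, Y = lf.shape
    uc, vc = U // 2, V // 2
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            dx = int(round(slope * (u - uc)))
            dy = int(round(slope * (v - vc)))
            out += np.roll(lf[u, v], (dx, dy), axis=(0, 1))
    return out / (U * V)

lf = np.random.default_rng(0).random((5, 5, 32, 32))
img = refocus(lf, slope=1.0)  # synthetic focal plane at 1 px/view disparity
print(img.shape)  # (32, 32)
```

At `slope=0` this reduces to a plain average of the views, i.e., the all-in-focus-at-infinity synthetic aperture image.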
Common Mistakes
- Microlens array misaligned with sensor pixels, causing vignetting and crosstalk
- Insufficient angular samples for accurate depth estimation in textureless regions
- Not calibrating MLA-to-sensor alignment, producing decoding artifacts
- Confusing spatial and angular resolution trade-off limits of the plenoptic design
- Ignoring diffraction effects at the microlens apertures
How to Avoid Mistakes
- Precisely align MLA to sensor with sub-pixel accuracy; use calibration targets
- Increase camera array density or use coded-aperture techniques for more angular samples
- Calibrate using a white image and point-source images for precise microlens grid mapping
- Design the system with the desired spatial-angular trade-off explicitly computed
- Use microlens diameters larger than the diffraction limit (> 10× wavelength)
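The white-image calibration step above can be illustrated with a deliberately simplified sketch: take the brightest pixel in each pitch-sized cell as that microlens's center. The function name and nearest-pixel logic are illustrative assumptions; production decoders fit a sub-pixel grid model (rotation, pitch, offset) to the whole white image:

```python
import numpy as np

def grid_centers(white, pitch):
    """Estimate micro-lens centers from a white image by taking the
    brightest pixel inside each pitch x pitch cell (nearest-pixel sketch)."""
    H, W = white.shape
    centers = []
    for i in range(0, H - pitch + 1, pitch):
        for j in range(0, W - pitch + 1, pitch):
            cell = white[i:i + pitch, j:j + pitch]
            di, dj = np.unravel_index(np.argmax(cell), cell.shape)
            centers.append((i + di, j + dj))
    return np.array(centers)

# Synthetic white image: one bright spot in the middle of every 10x10 cell.
white = np.zeros((50, 50))
white[5::10, 5::10] = 1.0
print(grid_centers(white, 10)[:2].tolist())  # [[5, 5], [5, 15]]
```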
Forward-Model Mismatch Cases
- The widefield fallback produces a single (64,64) image, but a light field camera captures both spatial and angular information via a microlens array — the output encodes multiple sub-aperture views for computational refocusing
- Without the angular dimension (directions of light rays), depth estimation from parallax and computational refocusing are impossible — the widefield model captures only a single perspective
How to Correct the Mismatch
- Use the light field operator that models the microlens array: each microlens captures light from different angular directions, producing an (x, y, u, v) 4D light field on the 2D sensor
- Reconstruct depth maps from sub-aperture disparity, perform computational refocusing via shift-and-sum, or apply light-field super-resolution to trade angular for spatial resolution
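A minimal depth-from-correspondence sketch along these lines: for each candidate disparity slope, shift all sub-aperture views into alignment and score the per-pixel variance across views; the slope with minimum variance wins. Integer wrap-around shifts and toy data keep the sketch short:

```python
import numpy as np

def depth_from_views(lf, slopes):
    """Per pixel, pick the shift slope that best aligns all sub-aperture
    views of L[u, v, x, y] (minimum variance across views)."""
    U, V, X, Y = lf.shape
    uc, vc = U // 2, V // 2
    cost = np.empty((len(slopes), X, Y))
    for k, s in enumerate(slopes):
        stack = np.stack([np.roll(lf[u, v],
                                  (round(s * (u - uc)), round(s * (v - vc))),
                                  axis=(0, 1))
                          for u in range(U) for v in range(V)])
        cost[k] = stack.var(axis=0)         # view disagreement at slope s
    return np.asarray(slopes)[np.argmin(cost, axis=0)]

# Toy scene: a textured plane whose views are displaced 1 px per angular step.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
lf = np.empty((5, 5, 32, 32))
for u in range(5):
    for v in range(5):
        lf[u, v] = np.roll(base, (-(u - 2), -(v - 2)), axis=(0, 1))
disp = depth_from_views(lf, slopes=[-1, 0, 1])
print(np.unique(disp).tolist())  # [1]
```

The recovered slope map is the disparity (here 1 px/view everywhere); converting it to metric depth requires the calibrated camera geometry.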
Experimental Setup

- Cameras: Lytro Illum / Raytrix R42
- Bit depth: 14
- Angular resolution: 9×9 (HCI) / 15×15 (Lytro Illum)
- Sensor resolution: 7728×5368 px
- Sub-aperture resolution: 434×625 px
- Datasets: HCI 4D LF Benchmark, Stanford Lego Gantry
Key References
- Levoy & Hanrahan, 'Light field rendering', SIGGRAPH 1996
- Ng et al., 'Light field photography with a hand-held plenoptic camera', Stanford Tech Report CTSR 2005-02
Canonical Datasets
- HCI 4D Light Field Benchmark
- Stanford Lego Gantry Archive
- INRIA Lytro Light Field Dataset