Integral Photography


Integral photography (IP), originally proposed by Lippmann in 1908, captures a light field using a fly-eye lens array (matrix of small lenses) where each lenslet records a small elemental image from a slightly different perspective. The array of elemental images encodes 3D scene information, enabling computational refocusing, depth estimation, and autostereoscopic 3D display. Compared to microlens-based plenoptic cameras, IP typically uses larger lenslets with correspondingly more pixels per lens. Reconstruction includes depth-from-correspondence between elemental images and 3D focal stack computation.

Forward Model: Elemental Image Formation
Noise Model: Gaussian
Default Solver: Depth estimation
Sensor: CMOS

Forward-Model Signal Chain

Each primitive represents a physical operation in the measurement process. Arrows show signal flow left to right.

Π(lens-array) [Lens Array Projection] → D(g, η₁) [Sensor]
Spec Notation

Π(lens-array) → D(g, η₁)
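The spec notation can be made concrete with a minimal numerical sketch (Python/NumPy; the function names and array shapes here are illustrative, not part of the benchmark spec): Π tiles the per-lenslet elemental images into one sensor-plane mosaic, and D applies sensor gain g plus additive Gaussian noise η₁.

```python
import numpy as np

def lens_array_projection(views):
    """Π(lens-array): tile per-lenslet elemental images into a sensor-plane mosaic.

    views: array of shape (n_lens_y, n_lens_x, p, p) — the p×p elemental
    image recorded behind each lenslet.
    """
    n_y, n_x, p, _ = views.shape
    # Interleave lenslet indices and pixel indices so each elemental image
    # occupies its own p×p tile on the sensor plane.
    return views.transpose(0, 2, 1, 3).reshape(n_y * p, n_x * p)

def sensor(image, g=1.0, sigma=0.01, rng=None):
    """D(g, η₁): apply sensor gain g and additive Gaussian read noise η₁ ~ N(0, σ²)."""
    rng = np.random.default_rng() if rng is None else rng
    return g * image + rng.normal(0.0, sigma, image.shape)

# Forward model y = D(g, η₁) ∘ Π(lens-array), applied to a stack of views:
views = np.random.rand(4, 4, 20, 20)   # 4×4 lenslets, 20×20 px each
y = sensor(lens_array_projection(views), g=1.0, sigma=0.01)
```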

Benchmark Variants & Leaderboards

Integral Photography


Standard Leaderboard (Top 10)

#  Method         Score  PSNR (dB)  SSIM   Trust        Source
1  DistgSSR       0.822  35.8       0.950  ✓ Certified  Wang et al., CVPR 2022
2  LFAttNet       0.768  33.5       0.920  ✓ Certified  Tsai et al., IEEE TIP 2020
3  PnP-LF         0.648  29.0       0.830  ✓ Certified  PnP-ADMM with LF prior
4  Shift-and-Add  0.517  25.0       0.700  ✓ Certified  Ng et al., Stanford Tech Report 2005
Mismatch Parameters (3)

Name          Symbol  Description                    Nominal  Perturbed
lens_pitch    Δp      Lens pitch error (μm)          0        1.0
gap_distance  Δd      Lens-to-sensor gap error (μm)  0        5.0
aberration    ΔW      Lens aberration (waves)        0        0.1
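The mismatch parameters describe how far the perturbed system sits from the nominal (error-free) calibration. A minimal sketch of how such perturbations might be injected into a calibration record — the baseline pitch/gap values below are illustrative placeholders, only the deltas come from the table:

```python
# Hypothetical nominal calibration (illustrative values, not the benchmark's).
calib = {"lens_pitch_um": 1000.0, "gap_um": 2000.0, "aberration_waves": 0.0}

# Perturbed mismatch values Δp, Δd, ΔW from the table above.
delta = {"lens_pitch_um": 1.0, "gap_um": 5.0, "aberration_waves": 0.1}

# Perturbed system = nominal calibration plus the injected errors.
perturbed = {k: calib[k] + delta[k] for k in calib}
```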

Reconstruction Triad Diagnostics

The three diagnostic gates (G1, G2, G3) characterize how reconstruction quality degrades under different error sources. Each bar shows the relative attribution.

G1 — Forward Model Accuracy: How well does the mathematical model match reality?

Model: elemental image formation — Mismatch modes: microlens alignment, crosstalk, fill-factor loss, field curvature, depth reversal

G2 — Noise Characterization: Is the noise model correctly specified?

Noise: Gaussian — Typical SNR: 22.0–42.0 dB

G3 — Calibration Quality: Are instrument parameters accurately measured?

Requires: microlens pitch, microlens focal length, sensor pixel size, display gap
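For G2, the Gaussian noise level corresponding to a target SNR follows from the standard definition SNR(dB) = 20·log₁₀(signal RMS / σ). A small helper (the function name is ours, not part of any benchmark API):

```python
import numpy as np

def noise_sigma_from_snr(signal, snr_db):
    """Gaussian noise std-dev that yields the requested SNR (dB)
    relative to the RMS of `signal`."""
    rms = np.sqrt(np.mean(np.square(signal)))
    return rms / (10.0 ** (snr_db / 20.0))

x = np.ones(100)                          # unit-RMS test signal
sigma = noise_sigma_from_snr(x, 20.0)     # → 0.1 for a unit-RMS signal
```

For the 22–42 dB range quoted above, σ spans roughly 0.08 to 0.008 of the signal RMS.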

Modality Deep Dive

Principle

Integral photography (also known as integral imaging) uses a 2-D array of elemental lenses to capture multi-perspective views of a 3-D scene simultaneously. Each elemental lens records a small perspective image, and the full set encodes the 4-D light field. Computational reconstruction produces 3-D images that can be viewed from different angles or refocused without glasses.

How to Build the System

Place a 2-D microlens or lenslet array (pitch 0.5-1 mm, ~50-200 elements per side) at one focal length from a high-resolution sensor. Each lenslet forms a separate elemental image. For display: show the integral image on a high-resolution display with a matched output lenslet array. Calibrate lenslet grid alignment, individual lens focal lengths, and vignetting correction. Use telecentric imaging for uniform magnification.
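The first-order design bookkeeping implied above — angular samples per lenslet and the sensor resolution the array demands — can be sketched as follows (all numbers are illustrative picks from the ranges in the text, not a validated design):

```python
# First-order lenslet-array design bookkeeping (illustrative values).
lens_pitch_um = 500.0    # lenslet pitch: 0.5 mm, within the 0.5–1 mm range above
pixel_um = 5.0           # sensor pixel size
n_lenses_side = 100      # lenslets per side (~50–200 per the text)

# Pixels behind each lenslet set the angular sampling of the light field.
pixels_per_lens = lens_pitch_um / pixel_um            # 100 px per lenslet side

# Total sensor resolution needed to cover the whole array.
sensor_pixels_side = n_lenses_side * pixels_per_lens  # 10000 px per side
```

This makes the spatial/angular trade-off explicit: for a fixed sensor, more angular samples per lenslet means fewer lenslets, i.e. lower spatial resolution.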

Common Reconstruction Algorithms

  • Computational refocusing via pixel rearrangement and summation
  • Depth estimation from elemental image disparity analysis
  • 3-D scene reconstruction from integral images
  • Super-resolution integral imaging (combining multiple shifted captures)
  • Deep-learning integral image reconstruction and view synthesis
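The first bullet — refocusing via pixel rearrangement and summation — is commonly realized as a shift-and-add loop over sub-aperture views. A minimal integer-shift sketch (function name and array layout are ours; real pipelines use sub-pixel interpolation):

```python
import numpy as np

def refocus(views, shift):
    """Shift-and-add refocusing: translate each sub-aperture view by an
    amount proportional to its angular offset from the center, then average.

    views: array of shape (n_v, n_u, H, W) of sub-aperture images.
    shift: integer pixel shift per unit view index; selects the refocus depth.
    """
    n_v, n_u, H, W = views.shape
    cv, cu = n_v // 2, n_u // 2
    acc = np.zeros((H, W))
    for v in range(n_v):
        for u in range(n_u):
            dy, dx = shift * (v - cv), shift * (u - cu)
            acc += np.roll(np.roll(views[v, u], dy, axis=0), dx, axis=1)
    return acc / (n_v * n_u)
```

Scene points at the depth matching `shift` add coherently and appear sharp; points at other depths are averaged across misaligned positions and blur out.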

Common Mistakes

  • Lenslet array not properly aligned with the sensor pixel grid
  • Insufficient number of elemental lenses for the desired depth range
  • Crosstalk between adjacent elemental images due to lens aberrations
  • Not correcting for vignetting variations across the lenslet array
  • Pseudoscopic (depth-reversed) images if reconstruction is not properly handled

How to Avoid Mistakes

  • Align lenslet array to sensor with precision jigs and verify with calibration patterns
  • Design lenslet pitch and focal length for the required depth-of-field
  • Use high-quality molded lenslets and baffles to minimize crosstalk
  • Apply per-lenslet calibration including vignetting and distortion correction
  • Use computational depth inversion to correct pseudoscopic effects
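One common realization of the last point — correcting pseudoscopic output computationally — is to rotate every elemental image by 180° in place on the lenslet grid. A minimal sketch, assuming a `(n_y, n_x, p, p)` elemental-image stack:

```python
import numpy as np

def correct_pseudoscopic(elemental):
    """Pseudoscopic-to-orthoscopic conversion: rotate each elemental image
    by 180° (flip both pixel axes) while keeping its lenslet-grid position.

    elemental: array of shape (n_y, n_x, p, p).
    """
    return elemental[:, :, ::-1, ::-1]
```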

Forward-Model Mismatch Cases

  • The widefield fallback produces a single-perspective blurred image, but integral imaging captures multiple sub-aperture views through a lenslet array — each elemental image sees the scene from a slightly different angle
  • Without the lenslet-array angular encoding, depth information (parallax between views) is lost — computational refocusing and 3D reconstruction from the fallback output are impossible

How to Correct the Mismatch

  • Use the integral imaging operator that models the lenslet array: each microlens captures a different angular perspective, encoding the 4D light field on the 2D sensor
  • Reconstruct depth maps via disparity estimation between elemental images, and perform computational refocusing using pixel rearrangement and summation across sub-aperture views
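The disparity estimation mentioned above can be sketched as brute-force SSD block matching between two neighboring elemental images (a deliberately minimal sketch — function name, patch filtering, and the integer disparity search are our simplifications; depth then follows from disparity via the lenslet baseline and gap geometry):

```python
import numpy as np

def disparity_ssd(left, right, max_d=8, patch=5):
    """Per-pixel horizontal disparity between two neighboring elemental
    images via exhaustive SSD block matching over integer shifts."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    best = np.full((H, W), np.inf)
    k = np.ones(patch) / patch
    for d in range(max_d + 1):
        shifted = np.roll(right, d, axis=1)
        err = (left - shifted) ** 2
        # Separable box filter turns per-pixel squared error into patch SSD.
        cost = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, err)
        cost = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, cost)
        improved = cost < best
        disp[improved] = d
        best[improved] = cost[improved]
    return disp
```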

Experimental Setup

Instrument: Custom integral imaging setup / ETRI prototype
Microlens pitch: 1.0 mm
Microlens NA: 0.16
Sensor pixel: 5.5 μm
Pixels per lens: 20×20
Reconstruction: 3D focal-stack / depth estimation

Signal Chain Diagram

[Diagram: experimental setup for Integral Photography]

Key References

  • Lippmann, C. R. Acad. Sci. Paris 146, 446 (1908)
  • Park et al., 'Recent progress in 3D imaging systems', J. Opt. Soc. Am. A 26, 2538 (2009)

Canonical Datasets

  • ETRI integral imaging test set
  • Middlebury multi-view stereo (adapted)
