Contribute to PWM

Help grow the Physics World Model benchmark. Your contributions power better evaluation, stronger algorithms, and broader coverage across imaging modalities. Contributors earn credits that can be used for computation across all 4 SpecLab usage types.

How Credits Work

Earn by contributing

Upload algorithms, datasets, or spec.md solutions. When others use your contribution, a share of the proceeds flows back to your account as credits.

Spend on compute

Use credits to run SpecLab experiments — reconstruct, correct mismatch, design systems, or run physics simulations on GPU.

Buy to support PWM

Buy credits to support PWM development. We are a small team building a universal physics simulation platform for imaging science.

5 Contribution Types

1. Reconstruction Algorithm (Usage Type 1)

Contribute a pure reconstruction algorithm for one or more modalities. Your algorithm runs on existing PWM benchmark data, and its scores are published on the leaderboard.

• Python code + pretrained weights (if deep learning)
• Target modality and expected PSNR/SSIM
• Algorithm description and paper link (optional)
• Earn credits every time someone runs your algorithm
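The exact submission interface is not specified on this page, so the following is only a hypothetical sketch of the minimal shape a contributed algorithm might take: a callable that maps the measurements `y` and the ideal forward model `H_ideal` to a reconstructed signal. The function name `reconstruct` and the least-squares baseline are illustrative assumptions, not the PWM API.

```python
# Hypothetical sketch: the PWM submission interface is not documented
# here. At minimum, a reconstruction algorithm maps measurements y and
# the ideal forward model H_ideal to an estimate of the signal x.
import numpy as np

def reconstruct(y: np.ndarray, H_ideal: np.ndarray) -> np.ndarray:
    """Toy least-squares baseline: x = pinv(H_ideal) @ y."""
    return np.linalg.pinv(H_ideal) @ y
```

A real submission would replace the pseudoinverse with an iterative or learned solver, but the input/output contract (measurements plus forward model in, signal estimate out) is the part that matters for benchmarking.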
2. Mismatch Correction + Reconstruction Algorithm (Usage Type 2)

Contribute a joint mismatch-correction and reconstruction method. Your method is evaluated on challenge datasets where the forward model has calibration errors.

• Mismatch correction code + reconstruction code
• Mismatch type and correction range
• Evaluated on PWM challenge (public, dev, hidden) tiers
• Earn credits from challenge leaderboard ranking
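To make the "correction range" idea concrete, here is a hypothetical sketch of a joint method: it grid-searches a single calibration parameter over its declared range, picking the value whose corrected forward model best explains the measurements, then reconstructs with that model. The linear perturbation model `H_ideal + theta * D` and the function name are placeholders, not PWM's actual mismatch parameterization.

```python
# Hypothetical sketch: joint calibration + reconstruction. The mismatch
# is modeled as a linear perturbation H(theta) = H_ideal + theta * D,
# which is a placeholder, not PWM's actual mismatch model.
import numpy as np

def correct_and_reconstruct(y, H_ideal, D, param_range, n_grid=201):
    """Grid-search theta over its declared range, then reconstruct.

    Picks the theta whose corrected model minimizes the
    data-consistency residual ||H(theta) @ x - y||.
    """
    lo, hi = param_range
    best_err, best_theta, best_x = np.inf, None, None
    for theta in np.linspace(lo, hi, n_grid):
        H = H_ideal + theta * D              # candidate corrected model
        x = np.linalg.pinv(H) @ y            # reconstruct under candidate
        err = np.linalg.norm(H @ x - y)      # data-consistency residual
        if err < best_err:
            best_err, best_theta, best_x = err, theta, x
    return best_theta, best_x
```

Practical methods typically replace the grid search with gradient-based or learned calibration, but the principle is the same: the correction range declared with your submission bounds the search space for the unknown calibration parameter.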
3. Standard Dataset for a Modality (Benchmark Data)

Contribute a real measurement dataset for a specific modality. Standard datasets expand the benchmark beyond synthetic data and improve algorithm generalization.

• HDF5 measurements (.h5) with ground truth
• Forward model specification (forward matrix or DAG)
• Imaging system description + acquisition params
• Earn credits when your data powers benchmark runs
4. Challenge Dataset for a Modality (3-Tier Challenge)

Contribute a full challenge dataset with known calibration mismatch. Includes public (with ground truth), dev (blind scoring), and hidden (server-only) tiers.

• Public tier: includes x_true, for development
• Dev tier: blind scoring, 20 samples
• Hidden tier: server-only, adversarial
5. New spec.md + Solution (SpecLab)

Define a new imaging problem as a spec.md and provide a reference solution. This expands SpecLab's knowledge base for all 4 usage types.

• spec.md file with complete system description
• Reference solution code + reproducible results
• Any of the 4 usage types (reconstruct, mismatch, design, simulate)
• Earn credits whenever users run your spec.md

Dataset Format Reference (HDF5)

All benchmark datasets use HDF5 format. Each sample is a group under the root:

/sample_00/
  ├── y              — measurements array (sinogram, k-space, CASSI snapshot, etc.)
  ├── H_ideal        — ideal forward model (angles, mask, PSF, or sensing matrix)
  ├── x_true         — ground truth signal (public + hidden tiers only)
  ├── spec_ranges    — JSON attr: [{"name", "min", "max", "unit"}, ...]
  ├── metadata       — JSON attr: {"scene", "shape", "noise_model"}
  └── true_spec      — JSON attr: {"param": value}  (public + hidden only)
/sample_01/ ...

File attributes:
  variant       — variant key (e.g. "ct", "mri", "cassi")
  tier          — "public", "dev", or "hidden"
  version       — "1.0"
  runner_type   — forward model type ("radon", "kspace", "cassi_disp", ...)

Contact platformaigpt@gmail.com for large datasets (> 50 MB) or to discuss new modality additions.

Submission Guidelines

  • All contributions are reviewed by the PWM team before being published.
  • Datasets must be in HDF5 (.h5) or NumPy (.npy/.npz) format. Maximum 50 MB per upload.
  • Algorithm code must be Python 3.10+ and runnable within 5 minutes on a T4 GPU.
  • Include a paper or arXiv link if applicable — this improves visibility on the leaderboard.
  • By submitting, you agree that your contribution is shared under the CC BY 4.0 license.