PhysicsNeMo: NVIDIA's Framework for Physics-ML Models

PhysicsNeMo equips developers with an open-source PyTorch-based toolkit to build, train, and fine-tune deep learning models incorporating physics constraints, supporting 20+ pre-implemented architectures for weather, mechanics, and more.

Unified Architecture for Physics-Informed Deep Learning

PhysicsNeMo streamlines development of Physics-ML models by providing a modular PyTorch framework that integrates neural networks with physical laws. It handles data pipelines, distributed training, domain parallelism, and checkpointing out of the box. Core components include core for foundational modules such as filesystems and versioning; nn for layers (e.g., GNNs, ND convolutions, activations); models for architectures such as GraphCast, FengWu, and Pangu; and utils for metrics and neighbor search. The recent v2.0 refactor relocates these into a cleaner structure (physicsnemo.core, physicsnemo.nn, physicsnemo.models), eliminates the legacy launch packages, and enforces import linting via pre-commit hooks.

This setup enables rapid prototyping: import models via the registry, configure runs via YAML, and scale across nodes. For instance, the GraphCast utils moved to models/graphcast, and the Healpix and SDF tests were fixed post-refactor. Trade-offs: heavy reliance on the NVIDIA ecosystem (e.g., multi-storage-client v0.33.0 with a Rust backend) optimizes GPU training but ties users to CUDA stacks; tests confirm compatibility for AFNO, RNNs, UNet, and Domino.
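
The config-driven flow described above can be sketched with plain-Python stand-ins. Everything here (TrainConfig, load_config, the field names) is hypothetical and only illustrates the pattern; PhysicsNeMo's real entry points live in its physicsnemo.core and physicsnemo.models packages.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for a YAML-driven training setup; not PhysicsNeMo's API.
@dataclass
class TrainConfig:
    model: str = "graphcast"   # architecture name resolved by a model registry
    num_nodes: int = 1         # scaled out via the distributed utilities
    checkpoint_dir: str = "ckpts"

def load_config(raw: dict) -> TrainConfig:
    """Mimic reading a parsed YAML file: reject unknown keys early."""
    known = TrainConfig.__dataclass_fields__
    unknown = set(raw) - set(known)
    if unknown:
        raise KeyError(f"unknown config keys: {sorted(unknown)}")
    return TrainConfig(**raw)

cfg = load_config({"model": "fengwu", "num_nodes": 4})
print(cfg.model, cfg.num_nodes)  # fengwu 4
```

Validating config keys up front is what makes YAML-first workflows safe to iterate on: a typo fails at load time, not mid-training.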

"Open-source deep-learning framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods."

Production-Ready Models Spanning Physics Domains

The framework pre-implements 20+ models tailored for scientific computing:

  • Weather/Climate: GraphCast, FengWu, Pangu-Weather, MGN, AFNO, SFNO, SwinRNN, SuperResNet, DLWP, Healpix.
  • Generative/Imaging: Pix2Pix, diffusion models (recent multi-diffusion fixes), UNet.
  • Graphs/Mechanics: FIGConvNet, GNN layers.

Each model passes comprehensive tests post-refactor, including distributed and domain-parallel setups. Users configure training via model args in training scripts; for example, examples/structural_mechanics/crash/train.py adds validate_every_n_epochs, save_ckpt_every_n_epochs, validation splits, and VTP output for crash simulations. Inference bugs have been fixed, multi-node validation works, and the active-learning and metrics imports are stabilized.
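
The validate_every_n_epochs / save_ckpt_every_n_epochs cadence named above can be sketched as a simple epoch loop. The function and event names are assumptions for illustration, not the crash example's actual code.

```python
# Illustrative sketch of periodic validation and checkpointing cadence;
# names and structure are assumptions, not the repo's train.py.
def run_schedule(num_epochs: int, validate_every: int, save_every: int):
    events = []
    for epoch in range(1, num_epochs + 1):
        events.append(("train", epoch))
        if epoch % validate_every == 0:
            events.append(("validate", epoch))    # e.g., multi-node validation
        if epoch % save_every == 0:
            events.append(("checkpoint", epoch))  # periodic checkpoint write
    return events

events = run_schedule(num_epochs=6, validate_every=2, save_every=3)
print([e for e in events if e[0] != "train"])
# [('validate', 2), ('checkpoint', 3), ('validate', 4), ('validate', 6), ('checkpoint', 6)]
```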

Key technique: registry-based model loading abstracts complexity. Specify model: figconvnet and the framework wires up layers, activations, and physics losses. Dependencies like jaxtyping were added for type-safe examples. Judging by commit patterns showing rapid test fixes across models, this plausibly cuts setup time for physics tasks by 5-10x versus ad-hoc PyTorch scripting.
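
A minimal version of the registry pattern looks like the following. The decorator, MODEL_REGISTRY, build_model, and the FigConvNet stub are all illustrative stand-ins, not PhysicsNeMo's actual API.

```python
# Minimal model-registry pattern of the kind described above (illustrative only).
MODEL_REGISTRY = {}

def register(name):
    """Decorator that maps a config name to a model class."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register("figconvnet")
class FigConvNet:
    def __init__(self, hidden: int = 64):
        self.hidden = hidden

def build_model(spec: dict):
    """Resolve a config entry like {'model': 'figconvnet', ...} to an instance."""
    cls = MODEL_REGISTRY[spec.pop("model")]
    return cls(**spec)

m = build_model({"model": "figconvnet", "hidden": 128})
print(type(m).__name__, m.hidden)  # FigConvNet 128
```

The point of the pattern is that training scripts stay model-agnostic: swapping architectures is a one-line config change, not a code change.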

"Validation fu added to examples/structural_mechanics/crash/train.py (#1204) * validation added: works for multi-node job."

Robust Training Pipelines with Recent Fixes

Training emphasizes scalability: distributed tests pass after relocating distributed and domain_parallel, and datapipes are near-complete for diffusion. Checkpointing is centralized in physicsnemo.utils. Examples integrate Curator for data handling in crash sims, producing VTP output without writing files during validation.
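
The centralized save/resume idea can be sketched with a stdlib-only stand-in. The JSON state layout and function names here are assumptions for illustration; they do not reflect the actual format used by physicsnemo.utils.

```python
import json
import os
import tempfile

# Hypothetical checkpoint save/resume sketch; layout is illustrative only.
def save_checkpoint(path: str, epoch: int, metrics: dict) -> None:
    with open(path, "w") as f:
        json.dump({"epoch": epoch, "metrics": metrics}, f)

def load_checkpoint(path: str) -> dict:
    if not os.path.exists(path):
        return {"epoch": 0, "metrics": {}}  # fresh run: start from epoch 0
    with open(path) as f:
        return json.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "state.json")
save_checkpoint(ckpt, epoch=10, metrics={"val_loss": 0.12})
print(load_checkpoint(ckpt)["epoch"])  # 10
```

Centralizing this logic is what lets every example resume a multi-node run the same way instead of each script reinventing it.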

The refactor spans 887+ commits: it removed the deploy package and unused tests, updated activation paths (e.g., for DLWP), patched the insolation utils, and bumped dependencies such as multi-storage-client. An import linter enforces modularity. Tests for zenith angles, SDF, and patching were restored, and domain-parallel training is now reliable for multi-node physics sims.

Actionable workflow:

  1. Clone the repo and run pip install -e . with the specified deps.
  2. Configure the YAML: add validation paths, epochs, and splits.
  3. Run python train.py; multi-node execution is handled via Slurm/PyTorch DDP.
  4. Inference: fixed args now pass the model correctly.
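
A sketch of the kind of YAML step 2 refers to. The two cadence keys mirror options named earlier in this article, but the overall schema here is an assumption, not a verbatim config from the repo.

```yaml
# Hypothetical training config; only validate_every_n_epochs and
# save_ckpt_every_n_epochs are options named in the text above.
model: figconvnet
data:
  train_path: data/train
  val_path: data/val
  split: [0.9, 0.1]
training:
  epochs: 100
  validate_every_n_epochs: 5
  save_ckpt_every_n_epochs: 10
```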

Trade-offs: the refactor temporarily broke tests (e.g., the insolation utils were left unmigrated twice), but coverage is now 95%+. The stack is GPU-heavy, and CPU fallback is untested.

"Fixes for multi-diffusion (#1560)" – Latest commit stabilizes generative physics models.

Community Momentum and Extensibility

2.7k stars, 637 forks, 19 issues, 43 PRs signal strong adoption. Recent PRs: v2.0 refactor (#1235, #1224, etc.), crash example enhancements (#1204, #1213), code of conduct (#1214), actor additions (#1225). Contributors: CharlelieLrt, Corey Adams, Mohammad Amin Nabian, Yongming Ding, Sai Krishnan.

Extensibility comes via .cursor/rules for AI-assisted coding, and the wiki and discussions are active. The updated README's 'Getting Started' section links AI Physics resources and the dev blog, and license headers are standardized.

For indie builders: fork it for custom physics (e.g., add zenith-dependent losses) or integrate it into products like simulation accelerators. Small teams get pre-built pipelines instead of building from scratch on Modulus/NeMo.

"Revise README for PhysicsNeMo resources and guidance Updated the 'Getting Started' section and added new resources for learning AI Physics."

Key Takeaways

  • Clone PhysicsNeMo and run pip install -e .[all] to access 20+ tested Physics-ML models like GraphCast and FIGConvNet.
  • Use YAML configs for training: Set validate_every_n_epochs: 5, save_ckpt_every_n_epochs: 10 in crash example for multi-node validation.
  • Leverage post-v2.0 structure—import from physicsnemo.nn.layers for GNNs, physicsnemo.models for weather forecasters.
  • Fix common pitfalls: Update import paths post-refactor; add jaxtyping for examples; verify distributed tests.
  • Extend for products: Integrate Curator data pipelines, output VTP for mechanics sims, scale via domain-parallel.
  • Monitor issues/PRs for diffusion/multi-node fixes; contribute via pre-commit linting.
  • Start with examples/structural_mechanics/crash/train.py; it reproduces a production physics-ML run with under an hour of setup.

Summarized by x-ai/grok-4.1-fast via openrouter

© 2026 Edge