---
license: mit
datasets:
  - hrishivish23/MPM-Verse-MaterialSim-Small
  - hrishivish23/MPM-Verse-MaterialSim-Large
language:
  - en
metrics:
  - accuracy
pipeline_tag: graph-ml
tags:
  - physics
  - scientific-ml
  - lagrangian-dynamics
  - neural-operator
  - neural-operator-transformer
  - graph-neural-networks
  - graph-transformer
  - sequence-to-sequence
  - autoregressive
  - temporal-dynamics
---

# πŸ“Œ PhysicsEngine: Reduced-Order Neural Operators for Lagrangian Dynamics

By Hrishikesh Viswanath, Yue Chang, Julius Berner, Peter Yichen Chen, Aniket Bera



πŸ“ Model Overview

GIOROM is a Reduced-Order Neural Operator Transformer designed for Lagrangian dynamics simulations on highly sparse graphs. The model enables hybrid Eulerian-Lagrangian learning by:

- Projecting Lagrangian inputs onto uniform grids with a Graph-Interaction-Operator.
- Predicting acceleration from sparse velocity inputs over past time windows with a Neural Operator Transformer.
- Learning physics from sparse inputs (n β‰ͺ N) while allowing reconstruction at arbitrarily dense resolutions via an Integral Transform Model.

**Dataset compatibility:** This model is compatible with `MPM-Verse-MaterialSim-Small/Sand3DNCLAWSmall`.

> ⚠ **Note:** While the model can run inference through an integral transform, this repository only provides weights for the time-stepper model that predicts acceleration.


## πŸ“Š Available Model Variants

Each variant corresponds to a specific dataset, showcasing the reduction in particle count (n: reduced-order, N: full-order).

| Model Name | n (Reduced) | N (Full) |
|---|---|---|
| giorom-3d-t-sand3d-long | 3.0K | 32K |
| giorom-3d-t-water3d | 1.7K | 55K |
| giorom-3d-t-elasticity | 2.6K | 78K |
| giorom-3d-t-plasticine | 1.1K | 5K |
| giorom-2d-t-water | 0.12K | 1K |
| giorom-2d-t-sand | 0.3K | 2K |
| giorom-2d-t-jelly | 0.2K | 1.9K |
| giorom-2d-t-multimaterial | 0.25K | 2K |

## πŸ’‘ How It Works

### πŸ”Ή Input Representation

The model predicts acceleration from past velocity inputs:

- **Input shape:** `[n, D, W]`
  - `n`: number of particles (reduced-order, n β‰ͺ N)
  - `D`: spatial dimension (2 or 3)
  - `W`: time window (number of past velocity states)
- Inputs are projected to a uniform latent space of size `[c^D, D]`, where:
  - c ∈ {8, 16, 32}
  - n βˆ’ Ξ΄n ≀ c^D ≀ n + Ξ΄n

This allows the model to generalize physics across different resolutions and discretizations.
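
As a quick shape check, the sketch below picks the per-axis grid size `c` from {8, 16, 32} so that `c^D` lands closest to the particle count `n`. All tensor names here are illustrative; the actual projection is performed by the Graph-Interaction-Operator in the GIOROM repo.

```python
import torch

# Hypothetical reduced-order input: n particles, D dimensions, W past velocity states
n, D, W = 3000, 3, 5                  # e.g. giorom-3d-t-sand3d-long
velocities = torch.randn(n, D, W)     # input of shape [n, D, W]

# Choose c in {8, 16, 32} so the latent grid size c^D is closest to n
c = min((8, 16, 32), key=lambda s: abs(s**D - n))
latent = torch.zeros(c**D, D)         # uniform latent grid of shape [c^D, D]
print(c, latent.shape)                # 16 torch.Size([4096, 3])
```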

### πŸ”Ή Prediction & Reconstruction

- The model learns the physical dynamics on the sparse input representation.
- The integral transform model reconstructs dense outputs at arbitrary resolutions (not included in this repo); see the sketch after this list.
- This enables highly efficient, scalable simulations without requiring full-resolution training.
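
For intuition, here is a minimal sketch of a kernel-weighted reconstruction in the spirit of an integral transform: values at dense query points are a normalized kernel sum over the sparse points. The Gaussian kernel, the bandwidth, and all names are assumptions for illustration, not the released model (whose reconstruction weights are not in this repo).

```python
import torch

def kernel_reconstruct(x_sparse, v_sparse, x_dense, bandwidth=0.1):
    """Illustrative kernel reconstruction: v(y) β‰ˆ Ξ£_i k(y, x_i) v_i / Ξ£_i k(y, x_i)."""
    d2 = torch.cdist(x_dense, x_sparse).pow(2)   # [N, n] squared pairwise distances
    k = torch.exp(-d2 / (2 * bandwidth ** 2))    # Gaussian kernel weights
    return (k @ v_sparse) / k.sum(dim=1, keepdim=True)

x_sparse = torch.rand(1700, 3)    # n sparse particle positions
v_sparse = torch.randn(1700, 3)   # values (e.g. velocities) at the sparse points
x_dense = torch.rand(5000, 3)     # dense query positions (subset of N for the demo)
v_dense = kernel_reconstruct(x_sparse, v_sparse, x_dense)
print(v_dense.shape)              # torch.Size([5000, 3])
```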

## πŸš€ Usage Guide

### 1️⃣ Install Dependencies

```bash
pip install transformers huggingface_hub torch
git clone https://github.com/HrishikeshVish/GIOROM/
cd GIOROM
```

### 2️⃣ Load a Model

```python
from models.giorom3d_T import PhysicsEngine
from models.config import TimeStepperConfig

# Pull the pretrained config and time-stepper weights from the Hugging Face Hub
repo_id = "hrishivish23/giorom-3d-t-sand3d"
time_stepper_config = TimeStepperConfig.from_pretrained(repo_id)
simulator = PhysicsEngine.from_pretrained(repo_id, config=time_stepper_config)
```

### 3️⃣ Run Inference
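The repository's scripts handle data loading and rollout end to end; as a stand-in, the sketch below shows an autoregressive rollout that Euler-integrates predicted accelerations. The shapes follow the input spec above, but the exact `simulator(...)` call signature, the `dt` value, and the window handling are assumptions to adapt to the GIOROM codebase.

```python
import torch

# Hypothetical rollout: predict acceleration from a window of past velocities,
# then Euler-integrate to advance velocities and positions.
n, D, W, dt = 3000, 3, 5, 1e-3
positions = torch.rand(n, D)
velocity_window = torch.randn(n, D, W)

simulator.eval()
with torch.no_grad():
    for step in range(10):
        accel = simulator(velocity_window)                    # assumed: [n, D, W] -> [n, D]
        new_velocity = velocity_window[..., -1] + dt * accel  # Euler update
        positions = positions + dt * new_velocity
        # Slide the window: drop the oldest state, append the newest
        velocity_window = torch.cat(
            [velocity_window[..., 1:], new_velocity.unsqueeze(-1)], dim=-1
        )
```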

## πŸ“‚ Model Weights and Checkpoints

| Model Name | Model ID |
|---|---|
| giorom-3d-t-sand3d-long | `hrishivish23/giorom-3d-t-sand3d-long` |
| giorom-3d-t-water3d | `hrishivish23/giorom-3d-t-water3d` |

## πŸ“š Training Details

### πŸ”§ Hyperparameters

- Graph Interaction Operator layers: 4
- Transformer heads: 4
- Embedding dimension: 128
- Latent grid sizes: c^D with c ∈ {8, 16, 32} (e.g. 8Γ—8, 16Γ—16, 32Γ—32 in 2D)
- Learning rate: 1e-4
- Optimizer: Adamax
- Loss function: MSE + physics regularization, computed on Euler-integrated outputs (see the sketch after this list)
- Training steps: 1M+
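
To make the loss concrete, here is a hedged sketch of an MSE loss on Euler-integrated outputs: the predicted acceleration is integrated to velocity and position before comparison with ground truth. The integration scheme, the regularization weight `lam`, and the function name are illustrative assumptions, not the released training code.

```python
import torch
import torch.nn.functional as F

def euler_integrated_loss(pred_accel, prev_velocity, prev_position,
                          target_velocity, target_position, dt=1e-3, lam=0.1):
    """Illustrative loss: Euler-integrate predicted acceleration, then MSE."""
    pred_velocity = prev_velocity + dt * pred_accel     # integrate acceleration
    pred_position = prev_position + dt * pred_velocity  # integrate velocity
    loss = F.mse_loss(pred_velocity, target_velocity)
    # Position term acts as a physics-style regularizer on the integrated state
    return loss + lam * F.mse_loss(pred_position, target_position)
```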

### πŸ–₯️ Hardware

- Trained on: NVIDIA RTX 3050
- Batch size: 2

## πŸ“œ Citation

If you use this model, please cite:

```bibtex
@article{viswanath2024reduced,
  title={Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs},
  author={Viswanath, Hrishikesh and Chang, Yue and Berner, Julius and Chen, Peter Yichen and Bera, Aniket},
  journal={arXiv preprint arXiv:2407.03925},
  year={2024}
}
```

## πŸ’¬ Contact

For questions or collaborations:


## πŸ”— Related Work

- **Neural Operators for PDEs:** Fourier Neural Operators, Graph Neural Operators
- **Lagrangian methods:** Material Point Method, SPH, NCLAW, CROM, LiCROM
- **Physics-based ML:** PINNs, GNS, MeshGraphNet

## πŸ”Ή Summary

This model is ideal for fast and scalable physics simulations where full-resolution computation is infeasible. The reduced-order approach allows efficient learning on sparse inputs, with the ability to reconstruct dense outputs using an integral transform model (not included in this repo).