---
license: mit
datasets:
- hrishivish23/MPM-Verse-MaterialSim-Small
- hrishivish23/MPM-Verse-MaterialSim-Large
language:
- en
metrics:
- accuracy
pipeline_tag: graph-ml
tags:
- physics
- scientific-ml
- lagrangian-dynamics
- neural-operator
- neural-operator-transformer
- graph-neural-networks
- graph-transformer
- sequence-to-sequence
- autoregressive
- temporal-dynamics
---

# 📌 PhysicsEngine: Reduced-Order Neural Operators for Lagrangian Dynamics

**By [Hrishikesh Viswanath](https://huggingface.co/hrishivish23), Yue Chang, Julius Berner, Peter Yichen Chen, Aniket Bera**

![Physics Simulation](https://hrishikeshvish.github.io/projects/giorom_data/giorom_pipeline_plasticine.png)

---

## 📝 Model Overview

**GIOROM** is a **Reduced-Order Neural Operator Transformer** for **Lagrangian dynamics simulations on highly sparse graphs**. The model enables hybrid **Eulerian-Lagrangian learning** by:

- **Projecting Lagrangian inputs onto uniform grids** with a **Graph-Interaction-Operator**.
- **Predicting acceleration from sparse velocity inputs** over a window of past time steps with a **Neural Operator Transformer**.
- **Learning physics from sparse inputs (n ≪ N)** while allowing reconstruction at arbitrarily dense resolutions via an **Integral Transform Model**.
- **Dataset compatibility:** This model is compatible with [`MPM-Verse-MaterialSim-Small/Sand3DNCLAWSmall`](https://huggingface.co/datasets/hrishivish23/MPM-Verse-MaterialSim-Small/tree/main/Sand3DNCLAWSmall).

⚠ **Note:** While the full method can reconstruct dense outputs with an integral transform, **this repository only provides weights for the time-stepper model that predicts acceleration.**

---

## 📊 Available Model Variants

Each variant corresponds to a specific dataset; the table shows the reduction in particle count (n: reduced-order, N: full-order).

| Model Name | n (Reduced) | N (Full) |
|-----------------------------|-------------|----------|
| `giorom-3d-t-sand3d-long` | 3.0K | 32K |
| `giorom-3d-t-water3d` | 1.7K | 55K |
| `giorom-3d-t-elasticity` | 2.6K | 78K |
| `giorom-3d-t-plasticine` | 1.1K | 5K |
| `giorom-2d-t-water` | 0.12K | 1K |
| `giorom-2d-t-sand` | 0.3K | 2K |
| `giorom-2d-t-jelly` | 0.2K | 1.9K |
| `giorom-2d-t-multimaterial` | 0.25K | 2K |

---

## 💡 How It Works

### 🔹 Input Representation

The model predicts **acceleration** from past velocity inputs:

- **Input shape:** `[n, D, W]`
  - `n`: number of particles (reduced-order, n ≪ N)
  - `D`: spatial dimension (2D or 3D)
  - `W`: time window (number of past velocity states)
- **Projected to a uniform latent space** of size `[c^D, D]`, where:
  - `c ∈ {8, 16, 32}`
  - `n - δn ≤ c^D ≤ n + δn`

This allows the model to generalize physics across different resolutions and discretizations; a sketch of the grid-size selection rule follows at the end of this section.

### 🔹 Prediction & Reconstruction

- The model **learns physical dynamics** on the sparse input representation.
- The **integral transform model** reconstructs dense outputs at arbitrary resolutions (not included in this repo).
- Enables **highly efficient, scalable simulations** without requiring full-resolution training.
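### 🔹 Example: Choosing the Latent Grid Resolution

The latent grid is sized so that the number of grid nodes `c^D` roughly matches the reduced particle count `n`. Below is a minimal sketch of this selection rule; the closest-match criterion is an illustrative choice, and the exact tolerance `δn` used by GIOROM may differ:

```python
# Sketch: pick c from {8, 16, 32} so that c^D lands near n (n - δn <= c^D <= n + δn).
# Here we simply take the c whose node count is closest to n; the tolerance δn
# and the exact selection logic in the GIOROM codebase may differ.
def select_grid_size(n: int, D: int) -> int:
    """Return the c in {8, 16, 32} whose node count c**D is closest to n."""
    return min((8, 16, 32), key=lambda c: abs(c**D - n))

print(select_grid_size(3000, D=3))  # 16 -> 16^3 = 4096 latent nodes
print(select_grid_size(120, D=2))   # 8  -> 8^2  = 64 latent nodes
```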
---

## 🚀 Usage Guide

### 1️⃣ Install Dependencies

```bash
pip install transformers huggingface_hub torch
```

```bash
git clone https://github.com/HrishikeshVish/GIOROM/
cd GIOROM
```

### 2️⃣ Load a Model

```python
from models.giorom3d_T import PhysicsEngine
from models.config import TimeStepperConfig

repo_id = "hrishivish23/giorom-3d-t-sand3d-long"

time_stepper_config = TimeStepperConfig.from_pretrained(repo_id)
simulator = PhysicsEngine.from_pretrained(repo_id, config=time_stepper_config)
```

### 3️⃣ Run Inference

```python
import torch

# Minimal sketch of a single prediction step. The tensor shapes follow the
# input description above; the exact forward signature is an assumption —
# check models/giorom3d_T.py for the actual interface.
n, D, W = 3000, 3, 5                    # example: particles, dims, time window
past_velocities = torch.randn(n, D, W)  # [n, D, W] past velocity states

simulator.eval()
with torch.no_grad():
    acceleration = simulator(past_velocities)  # expected shape: [n, D]
```

---

## 📂 Model Weights and Checkpoints

| Model Name | Model ID |
|---------------------------|----------|
| `giorom-3d-t-sand3d-long` | [`hrishivish23/giorom-3d-t-sand3d-long`](https://huggingface.co/hrishivish23/giorom-3d-t-sand3d-long) |
| `giorom-3d-t-water3d` | [`hrishivish23/giorom-3d-t-water3d`](https://huggingface.co/hrishivish23/giorom-3d-t-water3d) |

---

## 📚 Training Details

### 🔧 Hyperparameters

- **Graph Interaction Operator layers:** 4
- **Transformer heads:** 4
- **Embedding dimension:** 128
- **Latent grid sizes:** `{8×8, 16×16, 32×32}`
- **Learning rate:** `1e-4`
- **Optimizer:** `Adamax`
- **Loss function:** MSE + physics regularization (loss computed on Euler-integrated outputs; see the rollout sketch at the end of this card)
- **Training steps:** 1M+

### 🖥️ Hardware

- **Trained on:** NVIDIA RTX 3050
- **Batch size:** 2

---

## 📜 Citation

If you use this model, please cite:

```bibtex
@article{viswanath2024reduced,
  title={Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs},
  author={Viswanath, Hrishikesh and Chang, Yue and Berner, Julius and Chen, Peter Yichen and Bera, Aniket},
  journal={arXiv preprint arXiv:2407.03925},
  year={2024}
}
```

---

## 💬 Contact

For questions or collaborations:

- 🧑‍💻 Author: [Hrishikesh Viswanath](https://hrishikeshvish.github.io)
- 📧 Email: hviswan@purdue.edu
- 💬 Hugging Face discussion: [Model Page](https://huggingface.co/hrishivish23/giorom-3d-t-sand3d-long/discussions)

---

## 🔗 Related Work

- **Neural operators for PDEs:** Fourier Neural Operators, Graph Neural Operators
- **Lagrangian methods:** Material Point Method, SPH, NCLAW, CROM, LiCROM
- **Physics-based ML:** PINNs, GNS, MeshGraphNets

---

### 🔹 Summary

This model is ideal for **fast and scalable physics simulations** where full-resolution computation is infeasible. The reduced-order approach allows **efficient learning on sparse inputs**, with the ability to **reconstruct dense outputs using an integral transform model (not included in this repo).**
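---

### 🔹 Example: Autoregressive Rollout (Sketch)

Because the time-stepper predicts acceleration, producing a trajectory means integrating those predictions forward in time, mirroring the Euler-integrated outputs used in the training loss. A minimal sketch, assuming illustrative shapes, an illustrative `dt`, and that `simulator` maps a `[n, D, W]` velocity window to `[n, D]` accelerations (check models/giorom3d_T.py for the actual interface):

```python
import torch

# Hypothetical autoregressive rollout using Euler integration.
# dt and all shapes are illustrative assumptions, not values from the repo.
def rollout(simulator, positions, velocities, steps, dt=2.5e-3):
    """positions: [n, D]; velocities: [n, D, W], oldest-to-newest along W."""
    trajectory = [positions]
    for _ in range(steps):
        with torch.no_grad():
            acceleration = simulator(velocities)                 # [n, D]
        new_velocity = velocities[..., -1] + dt * acceleration   # velocity update
        positions = positions + dt * new_velocity                # position update
        # Slide the time window: drop the oldest state, append the newest.
        velocities = torch.cat([velocities[..., 1:], new_velocity.unsqueeze(-1)], dim=-1)
        trajectory.append(positions)
    return torch.stack(trajectory)  # [steps + 1, n, D]
```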