Update README.md
README.md
CHANGED

---
license: mit
language:
- en
tags:
- robotics
- motion planning
---

# Neural MP

Neural MP is a machine learning-based motion planning system for robotic manipulation tasks. It combines neural networks trained on large-scale simulated data with lightweight optimization techniques to generate efficient, collision-free trajectories. Neural MP is designed to generalize across diverse environments and obstacle configurations, making it suitable for both simulated and real-world robotic applications. This repository contains the model weights for Neural MP.

All Neural MP checkpoints, as well as our [training codebase](https://github.com/mihdalal/neuralmotionplanner), are released under the MIT License.

For full details, please read our [paper](https://mihdalal.github.io/neuralmotionplanner/resources/paper.pdf) and see [our project page](https://mihdalal.github.io/neuralmotionplanner/).

## Model Summary

- **Developed by:** The Neural MP team, consisting of researchers from Carnegie Mellon University.
- **Language(s) (NLP):** en
- **License:** MIT
- **Pretraining Dataset:** Coming soon
- **Repository:** [https://github.com/mihdalal/neuralmotionplanner](https://github.com/mihdalal/neuralmotionplanner)
- **Paper:** [https://mihdalal.github.io/neuralmotionplanner/resources/paper.pdf](https://mihdalal.github.io/neuralmotionplanner/resources/paper.pdf)
- **Project Page & Videos:** [https://mihdalal.github.io/neuralmotionplanner/](https://mihdalal.github.io/neuralmotionplanner/)

## Installation

Please see [here](https://github.com/mihdalal/neural_mp?tab=readme-ov-file#installation-instructions) for detailed installation instructions.
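
This repository hosts the model weights themselves. If you only need the checkpoint files locally, one option is to fetch them with the `huggingface_hub` client; the snippet below is a minimal sketch under that assumption, not part of the Neural MP codebase (the `NeuralMP` wrapper in the example further down can also load the model directly from the `mihdalal/NeuralMP` model id):

```python
# Minimal sketch (not part of the Neural MP codebase): download the weight
# files from this Hugging Face repository to a local directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="mihdalal/NeuralMP")
print(f"Neural MP files downloaded to: {local_dir}")
```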

## Usage

The Neural MP model takes a 3D point cloud of the scene and the start and goal joint angles of the Franka robot as input, and predicts 7-DoF delta joint actions. We provide a wrapper class, [NeuralMP](https://github.com/mihdalal/neural_mp/blob/master/neural_mp/real_utils/neural_motion_planner.py), for running inference and deploying the model in the real world.
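
For intuition about the action space: the predicted trajectory is a sequence of per-step joint deltas, so an absolute joint-space path is obtained by accumulating them from the start configuration. The snippet below only illustrates that convention and is not code from the Neural MP library; the `NeuralMP` planning methods shown later already return a complete trajectory:

```python
import numpy as np


def rollout_delta_actions(start_config: np.ndarray, delta_actions: np.ndarray) -> np.ndarray:
    """Accumulate (T, 7) delta joint actions from a (7,) start configuration.

    Illustrative only: returns the resulting (T + 1, 7) joint-space trajectory.
    """
    configs = [start_config]
    for delta in delta_actions:
        configs.append(configs[-1] + delta)  # each step is relative to the previous configuration
    return np.stack(configs)
```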

Here's a deployment example that uses the Manimo Franka control library:

Note: using Manimo is not required; you may use other Franka control libraries by writing a wrapper class that inherits from FrankaRealEnv (see [franka_real_env.py](https://github.com/mihdalal/neural_mp/blob/master/neural_mp/envs/franka_real_env.py)). A sketch of such a wrapper is given after the example below.

```python
import argparse

import numpy as np

from neural_mp.envs.franka_real_env import FrankaRealEnvManimo
from neural_mp.real_utils.neural_motion_planner import NeuralMP

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--mdl_url",
        type=str,
        default="mihdalal/NeuralMP",
        help="hugging face url to load the neural_mp model",
    )
    parser.add_argument(
        "--cache-name",
        type=str,
        default="scene1_single_blcok",
        help="Specify the scene cache file with pcd and rgb data",
    )
    parser.add_argument(
        "--use-cache",
        action="store_true",
        help=("If set, will use pre-stored point clouds"),
    )
    parser.add_argument(
        "--debug-combined-pcd",
        action="store_true",
        help=("If set, will show visualization of the combined pcd"),
    )
    parser.add_argument(
        "--denoise-pcd",
        action="store_true",
        help=("If set, will apply denoising to the pcds"),
    )
    parser.add_argument(
        "--train-mode", action="store_true", help=("If set, will eval with policy in training mode")
    )
    parser.add_argument(
        "--tto", action="store_true", help=("If set, will apply test time optimization")
    )
    parser.add_argument(
        "--in-hand", action="store_true", help=("If set, will enable in hand mode for eval")
    )
    parser.add_argument(
        "--in-hand-params",
        nargs="+",
        type=float,
        default=[0.1, 0.1, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 1.0],
        help="Specify the bounding box of the in hand object. 10 params in total [size(xyz), pos(xyz), ori(xyzw)] 3+3+4.",
    )
    args = parser.parse_args()
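
    # Bring up the Franka through the Manimo wrapper and load the Neural MP
    # policy from the Hugging Face model id given by --mdl_url.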
    env = FrankaRealEnvManimo()
    neural_mp = NeuralMP(
        env=env,
        model_url=args.mdl_url,
        train_mode=args.train_mode,
        in_hand=args.in_hand,
        in_hand_params=args.in_hand_params,
        visualize=True,
    )
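
    # Capture the scene point cloud (or load the pre-stored cache when
    # --use-cache is set), with optional visualization and denoising.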
    points, colors = neural_mp.get_scene_pcd(
        use_cache=args.use_cache,
        cache_name=args.cache_name,
        debug_combined_pcd=args.debug_combined_pcd,
        denoise=args.denoise_pcd,
    )

    # specify start and goal configurations
    start_config = np.array([-0.538, 0.628, -0.061, -1.750, 0.126, 2.418, 1.610])
    goal_config = np.array([1.067, 0.847, -0.591, -1.627, 0.623, 2.295, 2.580])
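
    # Plan a collision-free joint-space trajectory; with --tto the base
    # policy output is additionally refined by test time optimization.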
    if args.tto:
        trajectory = neural_mp.motion_plan_with_tto(
            start_config=start_config,
            goal_config=goal_config,
            points=points,
            colors=colors,
        )
    else:
        trajectory = neural_mp.motion_plan(
            start_config=start_config,
            goal_config=goal_config,
            points=points,
            colors=colors,
        )
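
    # Execute the planned trajectory on the robot; a success flag and the
    # final joint error are returned for inspection.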
    success, joint_error = neural_mp.execute_motion_plan(trajectory, speed=0.2)
```
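
If your robot is driven by a control stack other than Manimo, the note above suggests writing a wrapper class that inherits from FrankaRealEnv. The skeleton below is only a hedged sketch of what such a wrapper could look like: the actual abstract interface is defined in [franka_real_env.py](https://github.com/mihdalal/neural_mp/blob/master/neural_mp/envs/franka_real_env.py), and the method names used here are illustrative placeholders, not the real API.

```python
# Hypothetical skeleton: consult franka_real_env.py for the methods that
# FrankaRealEnv actually requires; the names below are placeholders.
import numpy as np

from neural_mp.envs.franka_real_env import FrankaRealEnv


class FrankaRealEnvCustom(FrankaRealEnv):
    """Adapter from your own Franka control library to Neural MP (sketch only)."""

    def get_joint_angles(self) -> np.ndarray:
        # Placeholder: return the current 7-DoF joint configuration from your driver.
        raise NotImplementedError

    def move_to_joint_config(self, config: np.ndarray) -> None:
        # Placeholder: command the robot toward the given joint configuration.
        raise NotImplementedError
```

Once such a wrapper exists, an instance of it can be passed to `NeuralMP` via the `env` argument, in place of `FrankaRealEnvManimo` in the example above.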