
Multi-GraspLLM: A Multimodal LLM for Multi-Hand Semantic-Guided Grasp Generation

Project Page | arXiv


Updates

  • 2025.3: Added the Jaco hand data; this is the final version of the dataset.
  • 2025.1: Added the Barrett hand data.
  • 2024.12: Released the Multi-GraspSet dataset with object meshes.
  • 2024.12: Released the Multi-GraspSet dataset with contact annotations.

Overview

We introduce Multi-GraspSet, the first large-scale multi-hand grasp dataset with automatically generated contact annotations.

(Figure: Structure of Multi-GraspLLM)

(Figure: The construction process of Multi-GraspSet)

(Figure: Visualization of Multi-GraspSet with contact annotations)


Installation

Follow these steps to set up the evaluation environment; a sanity-check snippet follows the steps:

  1. Create the Environment

    conda create --name eval python=3.9
    conda activate eval
    
  2. Install PyTorch and Dependencies

    pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
    

    ⚠️ Make sure the CUDA version of your PyTorch build matches the CUDA toolkit installed on your system.

  3. Install PyTorch Kinematics

    cd ./pytorch_kinematics
    pip install -e .
    
  4. Install Remaining Requirements

    pip install -r requirements_eval.txt
    
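To verify the setup, you can run the short sanity check below. It uses only the packages installed in the steps above and makes no assumptions about the repository layout:

    # Quick environment sanity check.
    import torch
    import pytorch_kinematics as pk  # installed in editable mode in step 3

    print("torch:", torch.__version__)            # expected: 2.0.1
    print("CUDA available:", torch.cuda.is_available())
    print("pytorch_kinematics loaded from:", pk.__file__)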

Visualize the Dataset

  1. Run the Visualization Code

    Open and run the vis_mid_dataset.ipynb notebook to visualize the dataset.
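If you prefer a standalone script to the notebook, the following is a minimal sketch of such a visualization, assuming object meshes are standard triangle meshes and contact annotations are per-vertex values in [0, 1]. The file paths and the contact_map field below are hypothetical placeholders, not the dataset's actual schema; vis_mid_dataset.ipynb remains the authoritative loading code. The sketch also assumes trimesh is available.

    # Minimal visualization sketch. Paths and field names are hypothetical
    # placeholders; see vis_mid_dataset.ipynb for the real dataset schema.
    import json
    import numpy as np
    import trimesh  # assumed available; `pip install trimesh` if needed

    # Load an object mesh (placeholder path).
    mesh = trimesh.load("data/objects/example_object.obj", force="mesh")

    # Load a grasp record with contact annotations (schema assumed for illustration).
    with open("data/grasps/example_grasp.json") as f:
        grasp = json.load(f)

    # Assume per-vertex contact intensities in [0, 1], one value per mesh vertex.
    contact = np.asarray(grasp["contact_map"], dtype=np.float64)

    # Color vertices by contact intensity: white (no contact) to red (high contact).
    colors = np.zeros((len(mesh.vertices), 4), dtype=np.uint8)
    colors[:, 0] = 255
    colors[:, 1] = colors[:, 2] = (255 * (1.0 - contact)).astype(np.uint8)
    colors[:, 3] = 255
    mesh.visual = trimesh.visual.ColorVisuals(mesh, vertex_colors=colors)

    mesh.show()  # opens an interactive viewer window

Coloring the mesh by contact intensity makes the annotated contact regions immediately visible on the object surface.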