---
license: apache-2.0
---

# LMDrive Model Card

## Model details

**Model type:**
LMDrive is an end-to-end, closed-loop, language-based autonomous driving framework that interacts with the dynamic environment via multi-modal multi-view sensor data and natural language instructions.
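
To make that input/output contract concrete, here is a minimal sketch of what one closed-loop step consumes and produces. The class names and fields are hypothetical illustrations for this card, not the repository's actual interfaces; see the GitHub code for those.

```python
from dataclasses import dataclass
from typing import List

import numpy as np

# Hypothetical container types, for illustration only; the real data
# structures live in the LMDrive codebase (github.com/opendilab/LMDrive).
@dataclass
class DrivingInput:
    instruction: str                 # navigation instruction, e.g. "Turn left at the next intersection."
    notices: List[str]               # zero or more notice instructions, e.g. "Watch out for the pedestrian."
    camera_frames: List[np.ndarray]  # multi-view RGB frames for the current frame
    lidar_points: np.ndarray         # LiDAR point cloud for the current frame

@dataclass
class ControlSignal:
    # Value ranges follow CARLA's vehicle control convention.
    throttle: float  # [0, 1]
    steer: float     # [-1, 1]
    brake: float     # [0, 1]
```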

**Model date:**
LMDrive-1.0 (based on LLaVA-v1.5-7B) was trained in November 2023.

**Paper or resources for more information:**
GitHub: https://github.com/opendilab/LMDrive
Paper: https://arxiv.org/abs/2312.07488

**Related weights for the vision encoder:**
https://huggingface.co/deepcs233/LMDrive-vision-encoder-r50-v1.0
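
Weights hosted on the Hub can be pulled with the `huggingface_hub` client. A minimal sketch, where the `repo_id` comes from the URL above and `local_dir` is an arbitrary choice:

```python
from huggingface_hub import snapshot_download

# Fetch the vision-encoder weights from the Hugging Face Hub.
weights_dir = snapshot_download(
    repo_id="deepcs233/LMDrive-vision-encoder-r50-v1.0",
    local_dir="checkpoints/vision-encoder-r50",  # arbitrary local path
)
print(f"Vision-encoder weights saved under {weights_dir}")
```

Loading the checkpoint into the driving pipeline itself follows the setup instructions in the GitHub README.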

**Where to send questions or comments about the model:**
https://github.com/opendilab/LMDrive/issues

## Intended use

**Primary intended uses:**
The primary use of LMDrive is research on large multimodal models for autonomous driving.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, large multimodal models, autonomous driving, and artificial intelligence.

## Training dataset

- 64K instruction-sensor-control data clips collected in the CARLA simulator ([dataset webpage](https://huggingface.co/datasets/deepcs233/LMDrive)); a download sketch follows this list.
- Each clip includes one navigation instruction, several notice instructions, a sequence of multi-modal multi-view sensor data, and control signals; clip durations range from 2 to 20 seconds.
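
A minimal sketch for exploring the dataset with `huggingface_hub`, assuming only the repo id from the link above. The full snapshot is large (64K clips), so the sketch lists files first and narrows the download with `allow_patterns`:

```python
from huggingface_hub import list_repo_files, snapshot_download

# Inspect how the clips are organized before downloading anything.
files = list_repo_files(repo_id="deepcs233/LMDrive", repo_type="dataset")
print(files[:20])

# Download a subset once the layout is known; the pattern below is a
# placeholder, not the repo's documented file structure.
data_dir = snapshot_download(
    repo_id="deepcs233/LMDrive",
    repo_type="dataset",
    allow_patterns=["*.md"],  # placeholder pattern; widen as needed
)
```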

## Evaluation benchmark

LangAuto, LangAuto-short, LangAuto-tiny, and LangAuto-notice.