Update README.md
README.md

base_model: Qwen/Qwen2-VL-7B-Instruct
pipeline_tag: image-text-to-text
---

# OS-Atlas: A Foundation Action Model For Generalist GUI Agents

<div align="center">

[\[🏠Homepage\]](https://osatlas.github.io) [\[💻Code\]](https://github.com/OS-Copilot/OS-Atlas) [\[🚀Quick Start\]](#quick-start) [\[📝Paper\]](https://arxiv.org/abs/2410.23218) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-atlas-67246e44003a1dfcc5d0d045) [\[🤗Data\]](https://huggingface.co/datasets/OS-Copilot/OS-Atlas-data) [\[🤗ScreenSpot-v2\]](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2)

</div>

## Overview



OS-Atlas provides a series of models specifically designed for GUI agents.

For GUI grounding tasks, you can use:
- [OS-Atlas-Base-7B](https://huggingface.co/OS-Copilot/OS-Atlas-Base-7B)
- [OS-Atlas-Base-4B](https://huggingface.co/OS-Copilot/OS-Atlas-Base-4B)

For generating single-step actions in GUI agent tasks, you can use:
- [OS-Atlas-Pro-7B](https://huggingface.co/OS-Copilot/OS-Atlas-Pro-7B)
- [OS-Atlas-Pro-4B](https://huggingface.co/OS-Copilot/OS-Atlas-Pro-4B)
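
These checkpoints are ordinary Hugging Face Hub repositories. As a convenience sketch that is not part of the original card, any of them can be pre-fetched with the `huggingface_hub` client (assumed to be installed) so that later loads hit the local cache:

```python
# Sketch (assumption, not from the original card): pre-download a checkpoint
# from the Hugging Face Hub so later loads run from the local cache.
from huggingface_hub import snapshot_download

# Pick whichever checkpoint fits the task, e.g. the 7B grounding model.
local_dir = snapshot_download(repo_id="OS-Copilot/OS-Atlas-Base-7B")
print(f"Model files cached at: {local_dir}")
```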

## Quick Start
OS-Atlas-Base-7B is a GUI grounding model finetuned from [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).

First, ensure that the necessary dependencies are installed:
```
pip install transformers
pip install qwen-vl-utils
```

Then download the [example image](https://github.com/OS-Copilot/OS-Atlas/blob/main/examples/images/web_6f93090a-81f6-489e-bb35-1a2838b18c01.png) and save it to the current directory.
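
If you prefer to script this step, here is a minimal sketch that is not part of the original card; it assumes the file can be fetched through GitHub's raw-content mirror of the blob link above and saves it under the filename the inference example expects:

```python
# Sketch (assumption, not from the original card): fetch the example screenshot
# through GitHub's raw-content mirror of the blob link above.
from urllib.request import urlretrieve

RAW_URL = (
    "https://raw.githubusercontent.com/OS-Copilot/OS-Atlas/main/"
    "examples/images/web_6f93090a-81f6-489e-bb35-1a2838b18c01.png"
)

# Save it in the current directory under the name used in the inference example.
urlretrieve(RAW_URL, "./web_6f93090a-81f6-489e-bb35-1a2838b18c01.png")
```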

Inference code example:
```python
# ... (imports and model/processor setup are not shown in this excerpt;
#      see the full sketch below)
messages = [
    {
        # ... (message fields other than "content" are not shown in this excerpt)
        "content": [
            {
                "type": "image",
                "image": "./web_6f93090a-81f6-489e-bb35-1a2838b18c01.png",
            },
            {"type": "text", "text": "In this UI screenshot, what is the position of the element corresponding to the command \"switch language of current page\" (with bbox)?"},
        ],
    },
]
# ... (generation and decoding follow; see the full sketch below)
```
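
Only the `messages` payload above comes from this diff. As a hedged, self-contained sketch, the surrounding pipeline would follow the standard Qwen2-VL inference recipe with `Qwen2VLForConditionalGeneration`, `AutoProcessor`, and `qwen_vl_utils.process_vision_info`; the `"role"` field, device handling, and `max_new_tokens` value here are assumptions rather than text from this card:

```python
# Hedged sketch of the full inference pipeline, assuming the standard Qwen2-VL
# recipe; only the messages payload is taken from the excerpt above.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "OS-Copilot/OS-Atlas-Base-7B", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("OS-Copilot/OS-Atlas-Base-7B")

messages = [
    {
        "role": "user",  # assumed: the excerpt does not show this field
        "content": [
            {
                "type": "image",
                "image": "./web_6f93090a-81f6-489e-bb35-1a2838b18c01.png",
            },
            {
                "type": "text",
                "text": 'In this UI screenshot, what is the position of the element '
                        'corresponding to the command "switch language of current page" (with bbox)?',
            },
        ],
    }
]

# Build the chat-formatted prompt and gather the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

Since OS-Atlas-Base-7B is a grounding model and the prompt asks for the element "(with bbox)", the decoded text is expected to contain the bounding box of the referenced element.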