README.md
CHANGED
@@ -3,7 +3,7 @@
 <img src="assets/proxy-lite.png" alt="Proxy Lite logo" width="600" height="auto" style="margin-bottom: 20px;" />
 
 <h2>
-A mini, open-weights, version of
+A mini, open-weights, version of <a href="https://proxy.convergence.ai">Proxy</a>.
 </h2>
 
 
@@ -112,10 +112,10 @@ or by setting the environment variable:
 export PROXY_LITE_API_BASE=http://localhost:8008/v1
 ```
 
-
+## Scaffolding Proxy Lite in Python
 
-
-The library is designed to be modular and extendable,
+The `RunnerConfig` is how you configure the system setup, including the model used.
+The library is designed to be modular and extendable, making it easy to swap out the environment, solver, or agent.
 
 Example:
 ```python
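The new section above introduces `RunnerConfig` and the modular environment/solver/agent split, but the example itself is cut off in this hunk. As a rough sketch of how such a config-driven `Runner` might be wired up (the import path, the `from_dict` constructor, and every config key below are illustrative assumptions, not the library's confirmed API):

```python
# Hypothetical sketch only: field names and the from_dict constructor are
# assumptions; check the package's own example for the real schema.
from proxy_lite import Runner, RunnerConfig  # assumed top-level exports

config = RunnerConfig.from_dict(
    {
        # Environment the agent acts in (e.g. a headless web browser).
        "environment": {"name": "webbrowser"},
        # Solver wraps the agent and decides how the next action is chosen.
        "solver": {
            "name": "simple",
            "agent": {
                "name": "proxy_lite",
                "client": {
                    "name": "convergence",
                    "model_id": "convergence-ai/proxy-lite-3b",  # assumed repo id
                    "api_base": "http://localhost:8008/v1",  # matches PROXY_LITE_API_BASE above
                },
            },
        },
        "max_steps": 50,  # hypothetical cap on the solve/observe loop
    }
)

runner = Runner(config=config)
result = runner.run("Find the opening hours of the local library.")
```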
@@ -161,7 +161,7 @@ The `Runner` sets the solver and environment off in a loop, like in a traditional
 </div>
 
 
-
+Proxy Lite expects the following message format:
 
 ```python
 message_history = [
@@ -182,9 +182,11 @@ message_history = [
 },
 ]
 ```
-This would then build up the message history, alternating between the assistant (action) and the user (
+This would then build up the message history, alternating between the assistant (who takes the *action*) and the user (who provides the *observation*).
 
-
+> *Context-window Management:* When making calls to the model, all observations other than the current one are discarded in order to reduce the large number of image tokens required. Since the model responses include reflection on the observations and are all included in the message history, the model is still aware of the entire history when planning new actions.
+
+The chat template will format this automatically. You should also pass the `Tools` that the model has access to; these will define the action space available to the model. You can do this with `transformers`:
 
 ```python
 from qwen_vl_utils import process_vision_info
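The prose above describes the expected format, but the `message_history` literal itself is truncated in this diff. A minimal sketch of the alternating structure, assuming the multimodal content-list schema used by Qwen-style VLMs; the system prompt, text, and image fields are illustrative, not the library's exact format:

```python
# Illustrative only: the exact keys and the system prompt are assumptions.
message_history = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Proxy Lite, a web-browsing agent."}],
    },
    {
        # Observation: the task plus the current screenshot.
        "role": "user",
        "content": [
            {"type": "text", "text": "Find the opening hours of the local library."},
            {"type": "image", "image": "screenshot_step_0.png"},
        ],
    },
    {
        # Action: the model's reflection and chosen tool call.
        "role": "assistant",
        "content": [{"type": "text", "text": "I will click the 'Opening hours' link."}],
    },
    {
        # Next observation, produced after the action was executed.
        "role": "user",
        "content": [
            {"type": "text", "text": "Updated page after the click."},
            {"type": "image", "image": "screenshot_step_1.png"},
        ],
    },
]
```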
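The context-window note added above says only the latest observation keeps its screenshot when the model is called. A hypothetical illustration of that pruning step, not the library's actual implementation:

```python
# Sketch of the idea in the context-window note: keep every turn's text, but
# drop the image payload from all user (observation) messages except the most
# recent one. Not the library's actual code.
def prune_old_screenshots(messages: list[dict]) -> list[dict]:
    user_indices = [i for i, m in enumerate(messages) if m["role"] == "user"]
    last_user_idx = user_indices[-1] if user_indices else None
    pruned = []
    for i, message in enumerate(messages):
        if message["role"] == "user" and i != last_user_idx:
            # Strip image parts; keep any text parts of the old observation.
            content = [part for part in message["content"] if part.get("type") != "image"]
            pruned.append({**message, "content": content})
        else:
            pruned.append(message)
    return pruned

# `message_history` as in the sketch above.
model_input_messages = prune_old_screenshots(message_history)
```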
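The last added lines hand the formatted history and the `Tools` over to `transformers`, but that code block is also truncated in the diff. A sketch following the standard Qwen2.5-VL processing pattern; the model repo id and the example tool schema are assumptions, and the real action space comes from the library's own `Tools`:

```python
# Sketch assuming the standard Qwen2.5-VL pattern; the repo id and the tool
# definition below are illustrative, not the library's own Tools.
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("convergence-ai/proxy-lite-3b")  # assumed repo id

# Tool definitions let the chat template expose the action space to the model.
tools = [
    {
        "type": "function",
        "function": {
            "name": "click",  # hypothetical browser action
            "description": "Click an element on the page by its id.",
            "parameters": {
                "type": "object",
                "properties": {"element_id": {"type": "integer"}},
                "required": ["element_id"],
            },
        },
    }
]

# `message_history` as in the sketch above.
# Render the prompt text from the message history plus the tool definitions.
text = processor.apply_chat_template(
    message_history, tools=tools, tokenize=False, add_generation_prompt=True
)

# Collect the screenshot images referenced in the messages.
image_inputs, video_inputs = process_vision_info(message_history)

# Build the final model inputs.
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
)
```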