Commit ad2c5c0 committed by Alex J. Chan
Parents: 71cd099 710f609

Merge pull request #7 from convergence-ai/alex/updates
README.md CHANGED
@@ -3,7 +3,7 @@
  <img src="assets/proxy-lite.png" alt="Proxy Lite logo" width="600" height="auto" style="margin-bottom: 20px;" />

  <h2>
- A mini, open-weights, version of our Proxy assistant.
+ A mini, open-weights, version of <a href="https://proxy.convergence.ai">Proxy</a>.
  </h2>


@@ -73,7 +73,7 @@ proxy --help
 You can directly run Proxy Lite on a task with:

 ```bash
-proxy "Book a table for 2 at an Italian restaurant in Kings Cross tonight at 7pm."
+proxy "Find some markets near Kings Cross and tell me their ratings."
 ```

 Alternatively you can run the local web ui with:
@@ -90,7 +90,7 @@ By default, Proxy Lite will point to an endpoint set up on HuggingFace spaces.
 We recommend hosting your own endpoint with vLLM, you can use the following command:

 ```bash
-vllm serve --model convergence-ai/proxy-lite \
+vllm serve --model convergence-ai/proxy-lite-3b \
     --trust-remote-code \
     --enable-auto-tool-choice \
     --tool-call-parser hermes \
@@ -112,10 +112,12 @@ or by setting the environment variable:
 export PROXY_LITE_API_BASE=http://localhost:8008/v1
 ```

-### Scaffolding Proxy Lite in Python
+## Scaffolding Proxy Lite in Python

-We use the `RunnerConfig` to control the setup of the task.
-The library is designed to be modular and extendable, you can easily swap the environment, solver, or agent.
+If using the model outside the CLI or streamlit app, you can use the `Runner` class to launch the model in a web-browsing environment.
+
+The `RunnerConfig` is how you configure the system setup, including the model used.
+The library is designed to be modular and extendable, making it easy to swap out the environment, solver, or agent.

 Example:
 ```python
@@ -135,7 +137,7 @@ config = RunnerConfig.from_dict(
         "name": "proxy_lite",
         "client": {
             "name": "convergence",
-            "model_id": "convergence-ai/proxy-lite",
+            "model_id": "convergence-ai/proxy-lite-3b",
             "api_base": "https://convergence-ai-demo-api.hf.space/v1",
         },
     },
@@ -161,7 +163,7 @@ The `Runner` sets the solver and environment off in a loop, like in a traditional
 </div>

-When it comes to prompting Proxy Lite, the model expects a message history of the form:
+Proxy Lite expects the following message format:

 ```python
 message_history = [
@@ -171,7 +173,7 @@ message_history = [
     }, # System prompt
     {
         "role": "user",
-        "content": "Book a table for 2 at an Italian restaurant in Kings Cross tonight at 7pm.",
+        "content": "Find some markets near Kings Cross and tell me their ratings.",
     }, # Set the task
     {
         "role": "user",
@@ -182,9 +184,11 @@ message_history = [
     },
 ]
 ```
-This would then build up the message history, alternating between the assistant (action) and the user (observation), although for new calls, all the last observations other than the current one are discarded.
+This would then build up the message history, alternating between the assistant (who takes the *action*) and the user (who provides the *observation*).
+
+> **Context-Window Management:** When making calls to the model, all the last observations other than the current one are discarded in order to reduce the large number of image tokens required. Since the model responses include reflection on the observations and are all included in the message history, the model is still aware of the entire history when planning new actions.

-The chat template will format this automatically, but also expects the appropriate `Tools` to be passed in so that the model is aware of the available actions. You can do this with `transformers`:
+The chat template will format this automatically. You should also pass the `Tools` that the model has access to; these will define the action space available to the model. You can do this with `transformers`:

 ```python
 from qwen_vl_utils import process_vision_info
@@ -193,7 +197,7 @@ from transformers import AutoProcessor
 from proxy_lite.tools import ReturnValueTool, BrowserTool
 from proxy_lite.serializer import OpenAICompatableSerializer

-processor = AutoProcessor.from_pretrained("convergence-ai/proxy-lite")
+processor = AutoProcessor.from_pretrained("convergence-ai/proxy-lite-3b")
 tools = OpenAICompatableSerializer().serialize_tools([ReturnValueTool(), BrowserTool(session=None)])

 templated_messages = processor.apply_chat_template(
@@ -219,7 +223,7 @@ from openai import OpenAI
 client = OpenAI(base_url="http://convergence-ai-demo-api.hf.space/v1")

 response = client.chat.completions.create(
-    model="convergence-ai/proxy-lite",
+    model="convergence-ai/proxy-lite-3b",
     messages=message_history,
     tools=tools,
     tool_choice="auto",
@@ -256,10 +260,13 @@ Actions in an environment are defined through available tool calls, which in the
 This model has not been designed to act as a full assistant able to interact with a user, instead it acts as a tool that goes out and *autonomously* completes a task.
 As such, it will struggle with tasks that require credentials or user interaction such as actually purchasing items if you don't give all the required details in the prompt.

-## Citation
+## Try Proxy

+Want to try out the full version of Proxy? Visit [proxy.convergence.ai](https://proxy.convergence.ai) to experience the complete, production-ready autonomous assistant with enhanced capabilities, improved reliability, and support for a wider range of tasks.
+
+## Citation
+
 ```bibtex
 @article{proxy-lite,
 title={Proxy Lite - A Mini, Open-weights, Autonomous Assistant},
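The context-window management described in the README hunks above (keep every assistant reflection, but discard all screenshot observations other than the latest one) can be sketched in plain Python. `prune_observations` and the exact message shapes below are illustrative assumptions for this sketch, not part of the proxy-lite API:

```python
# Illustrative sketch of the context-window management described above:
# keep every assistant message (reflections/actions), but drop the image
# content of all user observations except the most recent one.
# `prune_observations` is a hypothetical helper, not proxy-lite code.

def prune_observations(message_history):
    pruned = []
    # Index of the most recent user message (the current observation).
    last_user_idx = max(
        (i for i, m in enumerate(message_history) if m["role"] == "user"),
        default=None,
    )
    for i, m in enumerate(message_history):
        if m["role"] == "user" and i != last_user_idx and isinstance(m.get("content"), list):
            # Stale multi-part (image) observations are replaced with a stub,
            # saving the image tokens they would otherwise consume.
            m = {"role": "user", "content": "[observation omitted]"}
        pruned.append(m)
    return pruned


history = [
    {"role": "system", "content": "You are Proxy Lite."},
    {"role": "user", "content": "Find some markets near Kings Cross and tell me their ratings."},
    {"role": "user", "content": [{"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}]},
    {"role": "assistant", "content": "I see a search page; I'll type the query."},
    {"role": "user", "content": [{"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}]},
]

pruned = prune_observations(history)
```

Because the assistant messages (which contain the model's reflections on each observation) are never dropped, the model retains a textual trace of the whole trajectory even though only one screenshot survives.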
src/proxy_lite/app.py CHANGED
@@ -28,7 +28,7 @@ def get_user_config(config_expander):
28
  "name": "proxy_lite",
29
  "client": {
30
  "name": "convergence",
31
- "model_id": "convergence-ai/proxy-lite",
32
  "api_base": "https://convergence-ai-demo-api.hf.space/v1",
33
  },
34
  },
 
28
  "name": "proxy_lite",
29
  "client": {
30
  "name": "convergence",
31
+ "model_id": "convergence-ai/proxy-lite-3b",
32
  "api_base": "https://convergence-ai-demo-api.hf.space/v1",
33
  },
34
  },
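The serialized `tools` that both the README and `app.py` ultimately pass to `client.chat.completions.create` follow the standard OpenAI function-calling schema. As a point of reference, a browser-action tool plausibly serializes to something like the following; the `click` action and its `mark_id` parameter are invented for illustration and are not the actual proxy-lite tool definitions:

```python
import json

# Hand-written example of the OpenAI function-calling tool schema that a
# serializer along the lines of OpenAICompatableSerializer would plausibly
# produce. The "click" action and its parameters are illustrative only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "click",
            "description": "Click an element on the page, identified by its mark id.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mark_id": {
                        "type": "integer",
                        "description": "Numeric id of the element to click.",
                    },
                },
                "required": ["mark_id"],
            },
        },
    },
]

payload = json.dumps(tools)  # what ends up in the `tools=` field of the request
```

Keeping the schema in this shape is what lets the vLLM endpoint (with `--enable-auto-tool-choice` and a tool-call parser) return structured tool calls instead of free text.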
src/proxy_lite/configs/default.yaml CHANGED
@@ -14,7 +14,7 @@ solver:
14
  name: proxy_lite
15
  client:
16
  name: convergence
17
- model_id: convergence-ai/proxy-lite
18
  api_base: https://convergence-ai-demo-api.hf.space/v1
19
  local_view: true
20
  task_timeout: 1800
 
14
  name: proxy_lite
15
  client:
16
  name: convergence
17
+ model_id: convergence-ai/proxy-lite-3b
18
  api_base: https://convergence-ai-demo-api.hf.space/v1
19
  local_view: true
20
  task_timeout: 1800
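The README documents pointing the client at a self-hosted endpoint via `PROXY_LITE_API_BASE`, while `default.yaml` above carries the hosted default. A minimal sketch of that resolution order (environment variable over the config default) might look like this; `resolve_api_base` is a hypothetical helper and the precedence shown is an assumption, not verified proxy-lite behaviour:

```python
import os

# Default from src/proxy_lite/configs/default.yaml.
DEFAULT_API_BASE = "https://convergence-ai-demo-api.hf.space/v1"


def resolve_api_base(env=None):
    # Hypothetical helper: the PROXY_LITE_API_BASE environment variable,
    # if set, is assumed to take precedence over the config default.
    if env is None:
        env = os.environ
    return env.get("PROXY_LITE_API_BASE", DEFAULT_API_BASE)


hosted = resolve_api_base({})
local = resolve_api_base({"PROXY_LITE_API_BASE": "http://localhost:8008/v1"})
```

With this precedence, switching between the HuggingFace space and a local vLLM server is a one-line environment change, with no edits to `default.yaml`.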