XanderJC committed
Commit c0a5fcf · 1 Parent(s): 2945a8b
Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -114,6 +114,8 @@ export PROXY_LITE_API_BASE=http://localhost:8008/v1
 
 ## Scaffolding Proxy Lite in Python
 
+If using the model outside the CLI or streamlit app, you can use the `Runner` class to launch the model in a web-browsing environment.
+
 The `RunnerConfig` is how you configure the system setup, including the model used.
 The library is designed to be modular and extendable, making it easy to swap out the environment, solver, or agent.
 
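For context, a minimal sketch of what this scaffolding might look like is below. Only `Runner` and `RunnerConfig` are named in this diff; the `RunnerConfig.from_dict` constructor, the configuration keys, and the `proxy.run(...)` call are assumptions used for illustration, not taken verbatim from the README.

```python
# Sketch only: assumes a RunnerConfig.from_dict constructor and an async
# Runner.run(task) entry point; the configuration keys below are illustrative.
import asyncio

from proxy_lite import Runner, RunnerConfig  # classes named in the README diff

config = RunnerConfig.from_dict(
    {
        # Modular pieces mentioned in the README: environment, solver, agent.
        "environment": {"name": "webbrowser", "homepage": "https://www.google.com"},
        "solver": {
            "name": "simple",
            "agent": {
                "name": "proxy_lite",
                "client": {
                    "name": "openai",
                    "api_base": "http://localhost:8008/v1",  # matches PROXY_LITE_API_BASE above
                },
            },
        },
        "max_steps": 50,
    }
)

proxy = Runner(config=config)
result = asyncio.run(proxy.run("Find the latest Proxy Lite release notes."))
```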
@@ -184,7 +186,7 @@ message_history = [
 ```
 This would then build up the message history, alternating between the assistant (who takes the *action*) and the user (who provides the *observation*).
 
-> *Context-window Management:* When making calls to the model, all the last observations other than the current one are discarded in order to reduce the large number of image tokens required. Since the model responses include reflection on the observations and are all included in the message history, the model is still aware of the entire history when planning new actions.
+> **Context-Window Management:** When making calls to the model, all the last observations other than the current one are discarded in order to reduce the large number of image tokens required. Since the model responses include reflection on the observations and are all included in the message history, the model is still aware of the entire history when planning new actions.
 
 The chat template will format this automatically. You should also pass the `Tools` that the model has access to, these will define the action space available to the model. You can do this with `transformers`:
 
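As a rough illustration of that last step, the sketch below applies a chat template with tool definitions via `transformers`. The message contents, the tool schema, and the model id `convergence-ai/proxy-lite-3b` are assumptions for illustration; only the alternating assistant-action / user-observation pattern and the use of `transformers` with `Tools` come from the README text.

```python
# Sketch only: message contents, tool schema, and model id are assumptions.
# Real observations would include screenshots; text stands in for them here.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("convergence-ai/proxy-lite-3b")

# Alternating history: assistant takes an *action*, user returns an *observation*.
message_history = [
    {"role": "user", "content": "Observation: the search results page is loaded."},
    {"role": "assistant", "content": "The first result looks relevant; I will click it."},
    {"role": "user", "content": "Observation: the article page is loaded."},
]

# Tool definitions (JSON schema) describe the action space available to the model.
tools = [
    {
        "type": "function",
        "function": {
            "name": "click",
            "description": "Click an element on the page.",
            "parameters": {
                "type": "object",
                "properties": {"element_id": {"type": "integer"}},
                "required": ["element_id"],
            },
        },
    }
]

prompt = tokenizer.apply_chat_template(
    message_history,
    tools=tools,
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```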
 