autogen/.github/ISSUE_TEMPLATE.md
### Description <!-- A clear and concise description of the issue or feature request. --> ### Environment - AutoGen version: <!-- Specify the AutoGen version (e.g., v0.2.0) --> - Python version: <!-- Specify the Python version (e.g., 3.8) --> - Operating System: <!-- Specify the OS (e.g., Windows 10, Ubuntu 20.04) --> ### Steps to Reproduce (for bugs) <!-- Provide detailed steps to reproduce the issue. Include code snippets, configuration files, or any other relevant information. --> 1. Step 1 2. Step 2 3. ... ### Expected Behavior <!-- Describe what you expected to happen. --> ### Actual Behavior <!-- Describe what actually happened. Include any error messages, stack traces, or unexpected behavior. --> ### Screenshots / Logs (if applicable) <!-- If relevant, include screenshots or logs that help illustrate the issue. --> ### Additional Information <!-- Include any additional information that might be helpful, such as specific configurations, data samples, or context about the environment. --> ### Possible Solution (if you have one) <!-- If you have suggestions on how to address the issue, provide them here. --> ### Is this a Bug or Feature Request? <!-- Choose one: Bug | Feature Request --> ### Priority <!-- Choose one: High | Medium | Low --> ### Difficulty <!-- Choose one: Easy | Moderate | Hard --> ### Any related issues? <!-- If this is related to another issue, reference it here. --> ### Any relevant discussions? <!-- If there are any discussions or forum threads related to this issue, provide links. --> ### Checklist <!-- Please check the items that you have completed --> - [ ] I have searched for similar issues and didn't find any duplicates. - [ ] I have provided a clear and concise description of the issue. - [ ] I have included the necessary environment details. - [ ] I have outlined the steps to reproduce the issue. - [ ] I have included any relevant logs or screenshots. - [ ] I have indicated whether this is a bug or a feature request. - [ ] I have set the priority and difficulty levels. ### Additional Comments <!-- Any additional comments or context that you think would be helpful. -->
autogen/website/README.md
# Website

This website is built using [Docusaurus 3](https://docusaurus.io/), a modern static website generator.
## Prerequisites

To build and test the documentation locally, first download and install [Node.js](https://nodejs.org/en/download/), and then install [Yarn](https://classic.yarnpkg.com/en/). On Windows, you can install Yarn via the npm package manager (npm), which comes bundled with Node.js:

```console
npm install --global yarn
```
## Installation

```console
pip install pydoc-markdown pyyaml colored
cd website
yarn install
```

### Install Quarto

`quarto` is used to render notebooks. Install it [here](https://github.com/quarto-dev/quarto-cli/releases).

> Note: Ensure that your `quarto` version is `1.5.23` or higher.
## Local Development

Navigate to the `website` folder and run:

```console
pydoc-markdown
python ./process_notebooks.py render
yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
autogen/website/docs/Migration-Guide.md
# Migration Guide
## Migrating to 0.2

`openai` v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of using a global class method. Therefore, some changes are required for users of `pyautogen<0.2`.

- `api_base` -> `base_url`, `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated. `max_retries` can be set for each client.
- MathChat is unsupported until it is tested in a future release.
- `autogen.Completion` and `autogen.ChatCompletion` are deprecated. The essential functionalities are moved to `autogen.OpenAIWrapper`:

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_completion_object(response))
```

- Inference parameter tuning and inference logging features are updated:

```python
import autogen.runtime_logging

# Start logging
autogen.runtime_logging.start()

# Stop logging
autogen.runtime_logging.stop()
```

Check out the [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and the [Logging example notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) to learn more. Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).

- `seed` in AutoGen is renamed to `cache_seed` to accommodate the newly added `seed` param in the OpenAI chat completion API. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()` because it is now automatically determined by `cache_seed` (int | None). The difference between AutoGen's `cache_seed` and OpenAI's `seed` is that:
  - AutoGen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no OpenAI API call is made.
  - OpenAI's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using OpenAI's `seed` with `cache_seed` set to None, an OpenAI API call will be made even for the same input, and there is no guarantee of getting exactly the same output.
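For reference, here is a minimal sketch of how a pre-0.2 configuration maps onto the 0.2 key names; the endpoint URL and key values below are placeholders:

```python
# pyautogen < 0.2 (old key names)
config_list_old = [
    {
        "model": "gpt-4",
        "api_key": "<your-api-key>",
        "api_base": "https://example-endpoint/v1",  # renamed to base_url in 0.2
        "request_timeout": 120,                     # renamed to timeout in 0.2
    }
]

# pyautogen >= 0.2 (new key names)
config_list = [
    {
        "model": "gpt-4",
        "api_key": "<your-api-key>",
        "base_url": "https://example-endpoint/v1",
        "timeout": 120,
    }
]
```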
autogen/website/docs/Examples.md
# Examples
## Automated Multi Agent Chat

AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation via multi-agent conversation. Please find documentation about this feature [here](/docs/Use-Cases/agent_chat).

Links to notebook examples:

### Code Generation, Execution, and Debugging

- Automated Task Solving with Code Generation, Execution & Debugging - [View Notebook](/docs/notebooks/agentchat_auto_feedback_from_code_execution)
- Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat)
- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat_qdrant)

### Multi-Agent Collaboration (>3 Agents)

- Automated Task Solving by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat)
- Automated Data Visualization by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_vis)
- Automated Complex Task Solving by Group Chat (with 6 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_research)
- Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_planning.ipynb)
- Automated Task Solving with transition paths specified in a graph - [View Notebook](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine)
- Running a group chat as an inner-monologue via the SocietyOfMindAgent - [View Notebook](/docs/notebooks/agentchat_society_of_mind)
- Running a group chat with a custom speaker selection function - [View Notebook](/docs/notebooks/agentchat_groupchat_customized)

### Sequential Multi-Agent Chats

- Solving Multiple Tasks in a Sequence of Chats Initiated by a Single Agent - [View Notebook](/docs/notebooks/agentchat_multi_task_chats)
- Async-solving Multiple Tasks in a Sequence of Chats Initiated by a Single Agent - [View Notebook](/docs/notebooks/agentchat_multi_task_async_chats)
- Solving Multiple Tasks in a Sequence of Chats Initiated by Different Agents - [View Notebook](/docs/notebooks/agentchats_sequential_chats)

### Nested Chats

- Solving Complex Tasks with Nested Chats - [View Notebook](/docs/notebooks/agentchat_nestedchat)
- Solving Complex Tasks with a Sequence of Nested Chats - [View Notebook](/docs/notebooks/agentchat_nested_sequential_chats)
- OptiGuide for Solving a Supply Chain Optimization Problem with Nested Chats with a Coding Agent and a Safeguard Agent - [View Notebook](/docs/notebooks/agentchat_nestedchat_optiguide)
- Conversational Chess with Nested Chats and Tool Use - [View Notebook](/docs/notebooks/agentchat_nested_chats_chess)

### Applications

- Automated Continual Learning from New Data - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_stream.ipynb)
- [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization
- [AutoAnny](https://github.com/microsoft/autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen

### Tool Use

- **Web Search**: Solve Tasks Requiring Web Info - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_web_info.ipynb)
- Use Provided Tools as Functions - [View Notebook](/docs/notebooks/agentchat_function_call_currency_calculator)
- Use Tools via Sync and Async Function Calling - [View Notebook](/docs/notebooks/agentchat_function_call_async)
- Task Solving with Langchain Provided Tools as Functions - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_langchain.ipynb)
- **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG)
- Function Inception: Enable AutoGen agents to update/remove functions during conversations. - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_inception_function.ipynb)
- Agent Chat with Whisper - [View Notebook](/docs/notebooks/agentchat_video_transcript_translate_with_whisper)
- Constrained Responses via Guidance - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb)
- Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_surfer.ipynb)
- **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_sql_spider.ipynb)
- **Web Scraping**: Web Scraping with Apify - [View Notebook](/docs/notebooks/agentchat_webscraping_with_apify)
- **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_code_writing.ipynb)

### Human Involvement

- Simple example in ChatGPT style - [View example](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py)
- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_human_feedback.ipynb)
- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb)
- Agent Chat with **Async Human Inputs** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/Async_human_input.ipynb)

### Agent Teaching and Learning

- Teach Agents New Skills & Reuse via Automated Chat - [View Notebook](/docs/notebooks/agentchat_teaching)
- Teach Agents New Facts, User Preferences and Skills Beyond Coding - [View Notebook](/docs/notebooks/agentchat_teachability)
- Teach OpenAI Assistants Through GPTAssistantAgent - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachable_oai_assistants.ipynb)
- Agent Optimizer: Train Agents in an Agentic Way - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb)

### Multi-Agent Chat with OpenAI Assistants in the loop

- Hello-World Chat with OpenAI Assistant in AutoGen - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb)
- Chat with OpenAI Assistant using Function Call - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb)
- Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb)
- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb)
- OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_groupchat.ipynb)
- GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/gpt_assistant_agent_function_call.ipynb)

### Non-OpenAI Models

- Conversational Chess using non-OpenAI Models - [View Notebook](/docs/notebooks/agentchat_nested_chats_chess_altmodels)

### Multimodal Agent

- Multimodal Agent Chat with DALLE and GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_dalle_and_gpt4v.ipynb)
- Multimodal Agent Chat with Llava - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)
- Multimodal Agent Chat with GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb)

### Long Context Handling

<!-- - Conversations with Chat History Compression Enabled - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_compression.ipynb) -->
- Long Context Handling as a Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages)

### Evaluation and Assessment

- AgentEval: A Multi-Agent System for Assessing the Utility of LLM-powered Applications - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb)

### Automatic Agent Building

- Automatically Build Multi-agent System with AgentBuilder - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb)
- Automatically Build Multi-agent System from Agent Library - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb)

### Observability

- Track LLM calls, tool usage, actions and errors using AgentOps - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentops.ipynb)
## Enhanced Inferences

### Utilities

- API Unification - [View Documentation with Code Example](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification)
- Utility Functions to Help Manage API Configurations Effectively - [View Notebook](/docs/topics/llm_configuration)
- Cost Calculation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_cost_token_tracking.ipynb)

### Inference Hyperparameters Tuning

AutoGen offers a cost-effective hyperparameter optimization technique, [EcoOptiGen](https://arxiv.org/abs/2303.04673), for tuning Large Language Models. The research study finds that tuning hyperparameters can significantly improve their utility. Please find documentation about this feature [here](/docs/Use-Cases/enhanced_inference).

Links to notebook examples:

* [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_completion.ipynb)
* [Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
autogen/website/docs/Research.md
# Research For technical details, please check our technical report and research publications. * [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155). Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang and Chi Wang. ArXiv 2023. ```bibtex @inproceedings{wu2023autogen, title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework}, author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang}, year={2023}, eprint={2308.08155}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` * [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah. AutoML'23. ```bibtex @inproceedings{wang2023EcoOptiGen, title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference}, author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah}, year={2023}, booktitle={AutoML'23}, } ``` * [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2306.01337 (2023). ```bibtex @inproceedings{wu2023empirical, title={An Empirical Study on Challenging Math Problem Solving with GPT-4}, author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang}, year={2023}, booktitle={ArXiv preprint arXiv:2306.01337}, } ``` * [EcoAssistant: Using LLM Assistant More Affordably and Accurately](https://arxiv.org/abs/2310.03046). Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang. ArXiv preprint arXiv:2310.03046 (2023). ```bibtex @inproceedings{zhang2023ecoassistant, title={EcoAssistant: Using LLM Assistant More Affordably and Accurately}, author={Zhang, Jieyu and Krishna, Ranjay and Awadallah, Ahmed H and Wang, Chi}, year={2023}, booktitle={ArXiv preprint arXiv:2310.03046}, } ``` * [Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications](https://arxiv.org/abs/2402.09015). Negar Arabzadeh, Julia Kiseleva, Qingyun Wu, Chi Wang, Ahmed Awadallah, Victor Dibia, Adam Fourney, Charles Clarke. ArXiv preprint arXiv:2402.09015 (2024). ```bibtex @misc{Kiseleva2024agenteval, title={Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications}, author={Negar Arabzadeh and Julia Kiseleva and Qingyun Wu and Chi Wang and Ahmed Awadallah and Victor Dibia and Adam Fourney and Charles Clarke}, year={2024}, eprint={2402.09015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` * [Training Language Model Agents without Modifying Language Models](https://arxiv.org/abs/2402.11359). Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, Qingyun Wu. ICML'24. ```bibtex @misc{zhang2024agentoptimizer, title={Training Language Model Agents without Modifying Language Models}, author={Shaokun Zhang and Jieyu Zhang and Jiale Liu and Linxin Song and Chi Wang and Ranjay Krishna and Qingyun Wu}, year={2024}, booktitle={ICML'24}, } ``` * [AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks](https://arxiv.org/abs/2403.04783). Yifan Zeng, Yiran Wu, Xiao Zhang, Huazheng Wang, Qingyun Wu. 
ArXiv preprint arXiv:2403.04783 (2024). ```bibtex @misc{zeng2024autodefense, title={AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks}, author={Yifan Zeng and Yiran Wu and Xiao Zhang and Huazheng Wang and Qingyun Wu}, year={2024}, eprint={2403.04783}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` * [StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows](https://arxiv.org/abs/2403.11322). Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, Qingyun Wu. ArXiv preprint arXiv:2403.11322 (2024). ```bibtex @misc{wu2024stateflow, title={StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows}, author={Yiran Wu and Tianwei Yue and Shaokun Zhang and Chi Wang and Qingyun Wu}, year={2024}, eprint={2403.11322}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
autogen/website/docs/ecosystem/agentops.md
# Agent Monitoring and Debugging with AgentOps <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/logo/banner-badge.png?raw=true" style="width: 40%;" alt="AgentOps logo"/> [AgentOps](https://agentops.ai/?=autogen) provides session replays, metrics, and monitoring for AI agents. At a high level, AgentOps gives you the ability to monitor LLM calls, costs, latency, agent failures, multi-agent interactions, tool usage, session-wide statistics, and more. For more info, check out the [AgentOps Repo](https://github.com/AgentOps-AI/agentops). | | | | ------------------------------------- | ------------------------------------------------------------- | | 📊 **Replay Analytics and Debugging** | Step-by-step agent execution graphs | | 💸 **LLM Cost Management** | Track spend with LLM foundation model providers | | 🧪 **Agent Benchmarking** | Test your agents against 1,000+ evals | | 🔐 **Compliance and Security** | Detect common prompt injection and data exfiltration exploits | | 🤝 **Framework Integrations** | Native Integrations with CrewAI, AutoGen, & LangChain | <details open> <summary><b><u>Agent Dashboard</u></b></summary> <a href="https://app.agentops.ai?ref=gh"> <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/app_screenshots/overview.png?raw=true" style="width: 70%;" alt="Agent Dashboard"/> </a> </details> <details> <summary><b><u>Session Analytics</u></b></summary> <a href="https://app.agentops.ai?ref=gh"> <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/app_screenshots/session-overview.png?raw=true" style="width: 70%;" alt="Session Analytics"/> </a> </details> <details> <summary><b><u>Session Replays</u></b></summary> <a href="https://app.agentops.ai?ref=gh"> <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/app_screenshots/session-replay.png?raw=true" style="width: 70%;" alt="Session Replays"/> </a> </details>
Installation AgentOps works seamlessly with applications built using Autogen. 1. **Install AgentOps** ```bash pip install agentops ``` 2. **Create an API Key:** Create a user API key here: [Create API Key](https://app.agentops.ai/settings/projects) 3. **Configure Your Environment:** Add your API key to your environment variables ``` AGENTOPS_API_KEY=<YOUR_AGENTOPS_API_KEY> ``` 4. **Initialize AgentOps** To start tracking all available data on Autogen runs, simply add two lines of code before implementing Autogen. ```python import agentops agentops.init() # Or: agentops.init(api_key="your-api-key-here") ``` After initializing AgentOps, Autogen will now start automatically tracking your agent runs.
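Putting the steps together, a minimal sketch of an instrumented run might look like the following. The agent and model settings are illustrative, and the `end_session` call follows the pattern used in the linked AgentOps notebook:

```python
import os

import agentops
from autogen import ConversableAgent

# Initialize AgentOps before any AutoGen agents are created
agentops.init(api_key=os.environ.get("AGENTOPS_API_KEY"))

agent = ConversableAgent(
    "chatbot",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]},
    human_input_mode="NEVER",
)
reply = agent.generate_reply(messages=[{"role": "user", "content": "Say hello in one sentence."}])
print(reply)

# Mark the session as finished so it appears as completed in the dashboard
agentops.end_session("Success")
```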
Features - **LLM Costs**: Track spend with foundation model providers - **Replay Analytics**: Watch step-by-step agent execution graphs - **Recursive Thought Detection**: Identify when agents fall into infinite loops - **Custom Reporting:** Create custom analytics on agent performance - **Analytics Dashboard:** Monitor high level statistics about agents in development and production - **Public Model Testing**: Test your agents against benchmarks and leaderboards - **Custom Tests:** Run your agents against domain specific tests - **Time Travel Debugging**: Save snapshots of session states to rewind and replay agent runs from chosen checkpoints. - **Compliance and Security**: Create audit logs and detect potential threats such as profanity and PII leaks - **Prompt Injection Detection**: Identify potential code injection and secret leaks
Autogen + AgentOps examples * [AgentChat with AgentOps Notebook](/docs/notebooks/agentchat_agentops) * [More AgentOps Examples](https://docs.agentops.ai/v1/quickstart)
Extra links - [🐦 Twitter](https://twitter.com/agentopsai/) - [📢 Discord](https://discord.gg/JHPt4C7r) - [🖇️ AgentOps Dashboard](https://app.agentops.ai/ref?=autogen) - [📙 Documentation](https://docs.agentops.ai/introduction)
autogen/website/docs/ecosystem/ollama.md
# Ollama

![Ollama Example](img/ecosystem-ollama.png)

[Ollama](https://ollama.com/) allows users to run open-source large language models, such as Llama 2, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.

- [Ollama + AutoGen instructions](https://ollama.ai/blog/openai-compatibility)
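As a minimal sketch (assuming Ollama is serving its OpenAI-compatible endpoint on the default port and the model has been pulled locally), an AutoGen configuration can simply point `base_url` at the local server:

```python
from autogen import ConversableAgent

local_llm_config = {
    "config_list": [
        {
            "model": "llama2",                        # a model you have pulled, e.g. `ollama pull llama2`
            "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
            "api_key": "ollama",                      # placeholder; Ollama does not validate the key
        }
    ],
    "cache_seed": None,
}

assistant = ConversableAgent("assistant", llm_config=local_llm_config)
```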
autogen/website/docs/ecosystem/microsoft-fabric.md
# Microsoft Fabric

![Fabric Example](img/ecosystem-fabric.png)

[Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. In this notebook, we give a simple example of using AutoGen in Microsoft Fabric.

- [Microsoft Fabric + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_microsoft_fabric.ipynb)
autogen/website/docs/ecosystem/pgvector.md
# PGVector [PGVector](https://github.com/pgvector/pgvector) is an open-source vector similarity search for Postgres. - [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb)
autogen/website/docs/ecosystem/promptflow.md
# Promptflow

Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM-based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development.

Refer to [Promptflow docs](https://microsoft.github.io/promptflow/) for more information.

Quick links:

- Why use Promptflow - [Link](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow)
- Quick start guide - [Link](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)
- Sample application for Promptflow + AutoGen integration - [Link](https://github.com/microsoft/autogen/tree/main/samples/apps/promptflow-autogen)
Sample Flow ![Sample Promptflow](./img/ecosystem-promptflow.png)
autogen/website/docs/ecosystem/composio.md
# Composio

![Composio Example](img/ecosystem-composio.png)

Composio empowers AI agents to seamlessly connect with external tools, apps, and APIs to perform actions and receive triggers. With built-in support for AutoGen, Composio enables the creation of highly capable and adaptable AI agents that can autonomously execute complex tasks and deliver personalized experiences.

- [Composio + AutoGen Documentation with Code Examples](https://docs.composio.dev/framework/autogen)
autogen/website/docs/ecosystem/databricks.md
# Databricks

![Databricks Data Intelligence Platform](img/ecosystem-databricks.png)

The [Databricks Data Intelligence Platform](https://www.databricks.com/product/data-intelligence-platform) allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data.

This example demonstrates how to use AutoGen alongside Databricks Foundation Model APIs and the open-source LLM DBRX.

- [Databricks + AutoGen Code Examples](/docs/notebooks/agentchat_databricks_dbrx)
autogen/website/docs/ecosystem/memgpt.md
# MemGPT ![MemGPT Example](img/ecosystem-memgpt.png) MemGPT enables LLMs to manage their own memory and overcome limited context windows. You can use MemGPT to create perpetual chatbots that learn about you and modify their own personalities over time. You can connect MemGPT to your own local filesystems and databases, as well as connect MemGPT to your own tools and APIs. The MemGPT + AutoGen integration allows you to equip any AutoGen agent with MemGPT capabilities. - [MemGPT + AutoGen Documentation with Code Examples](https://memgpt.readme.io/docs/autogen)
autogen/website/docs/ecosystem/llamaindex.md
# LlamaIndex

![LlamaIndex Example](img/ecosystem-llamaindex.png)

[LlamaIndex](https://www.llamaindex.ai/) allows users to create LlamaIndex agents and integrate them into AutoGen conversation patterns.

- [LlamaIndex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb)
autogen/website/docs/ecosystem/azure_cosmos_db.md
# Azure Cosmos DB

> "OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service – one of the fastest-growing consumer apps ever – enabling high reliability and low maintenance."
> – Satya Nadella, Microsoft chairman and chief executive officer

Azure Cosmos DB is a fully managed [NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-nosql), [relational](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-relational), and [vector database](https://learn.microsoft.com/azure/cosmos-db/vector-database). It offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Your business continuity is assured with up to 99.999% availability backed by an SLA.

You can simplify your application development by using this single database service for all your AI agent memory system needs, from [geo-replicated distributed cache](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to tracing/logging to [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database).

Learn more about how Azure Cosmos DB enhances the performance of your [AI agent](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents).

- [Try Azure Cosmos DB free](https://learn.microsoft.com/en-us/azure/cosmos-db/try-free)
- [Use Azure Cosmos DB lifetime free tier](https://learn.microsoft.com/en-us/azure/cosmos-db/free-tier)
autogen/website/docs/topics/llm-observability.md
# Agent Observability AutoGen supports advanced LLM agent observability and monitoring through built-in logging and partner providers.
AutoGen Observability Integrations ### Built-In Logging AutoGen's SQLite and File Logger - [Tutorial Notebook](/docs/notebooks/agentchat_logging) ### Full-Service Partner Integrations AutoGen partners with [AgentOps](https://agentops.ai) to provide multi-agent tracking, metrics, and monitoring - [Tutorial Notebook](/docs/notebooks/agentchat_agentops)
## What is Observability?

Observability provides developers with the necessary insights to understand and improve the internal workings of their agents. Observability is necessary for maintaining reliability, tracking costs, and ensuring AI safety.

**Without observability tools, developers face significant hurdles:**

- Tracking agent activities across sessions becomes a complex, error-prone task.
- Manually sifting through verbose terminal outputs to understand LLM interactions is inefficient.
- Pinpointing the exact moments of tool invocations is often like finding a needle in a haystack.

**Key Features of Observability Dashboards:**

- Human-readable overview analytics and replays of agent activities.
- LLM cost, prompt, completion, timestamp, and metadata tracking for performance monitoring.
- Tool invocation, events, and agent-to-agent interactions for workflow monitoring.
- Error flagging and notifications for faster debugging.
- Access to a wealth of data for developers using supported agent frameworks, such as environments, SDK versions, and more.

### Compliance

Observability is not just a development convenience; it is a compliance necessity, especially in regulated industries:

- It offers insights into AI decision-making processes, fostering trust and transparency.
- Anomalies and unintended behaviors are detected promptly, reducing various risks.
- It ensures adherence to data privacy regulations, safeguarding sensitive information.
- Compliance violations are quickly identified and addressed, enhancing incident management.
autogen/website/docs/topics/retrieval_augmentation.md
# Retrieval Augmentation Retrieval Augmented Generation (RAG) is a powerful technique that combines language models with external knowledge retrieval to improve the quality and relevance of generated responses. One way to realize RAG in AutoGen is to construct agent chats with `RetrieveAssistantAgent` and `RetrieveUserProxyAgent` classes.
Example Setup: RAG with Retrieval Augmented Agents The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen: ### Step 1. Create an instance of `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. Here `RetrieveUserProxyAgent` instance acts as a proxy agent that retrieves relevant information based on the user's input. ```python assistant = RetrieveAssistantAgent( name="assistant", system_message="You are a helpful assistant.", llm_config={ "timeout": 600, "cache_seed": 42, "config_list": config_list, }, ) ragproxyagent = RetrieveUserProxyAgent( name="ragproxyagent", human_input_mode="NEVER", max_consecutive_auto_reply=3, retrieve_config={ "task": "code", "docs_path": [ "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md", "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md", os.path.join(os.path.abspath(""), "..", "website", "docs"), ], "custom_text_types": ["mdx"], "chunk_token_size": 2000, "model": config_list[0]["model"], "client": chromadb.PersistentClient(path="/tmp/chromadb"), "embedding_model": "all-mpnet-base-v2", "get_or_create": True, # set to False if you don't want to reuse an existing collection, but you'll need to remove the collection manually }, code_execution_config=False, # set to False if you don't want to execute the code ) ``` ### Step 2. Initiating Agent Chat with Retrieval Augmentation Once the retrieval augmented agents are set up, you can initiate a chat with retrieval augmentation using the following code: ```python code_problem = "How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached." ragproxyagent.initiate_chat( assistant, message=ragproxyagent.message_generator, problem=code_problem, search_string="spark" ) # search_string is used as an extra filter for the embeddings search, in this case, we only want to search documents that contain "spark". ```
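The snippets above assume the following imports and a `config_list` loaded from your model configuration; a minimal sketch (the `OAI_CONFIG_LIST` file name is an assumption):

```python
import os

import chromadb

import autogen
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

# Load model configurations, e.g. from an OAI_CONFIG_LIST file or environment variable.
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
```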
Example Setup: RAG with Retrieval Augmented Agents with PGVector The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen: ### Step 1. Create an instance of `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. Here `RetrieveUserProxyAgent` instance acts as a proxy agent that retrieves relevant information based on the user's input. Specify the connection_string, or the host, port, database, username, and password in the db_config. ```python assistant = RetrieveAssistantAgent( name="assistant", system_message="You are a helpful assistant.", llm_config={ "timeout": 600, "cache_seed": 42, "config_list": config_list, }, ) ragproxyagent = RetrieveUserProxyAgent( name="ragproxyagent", human_input_mode="NEVER", max_consecutive_auto_reply=3, retrieve_config={ "task": "code", "docs_path": [ "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md", "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md", os.path.join(os.path.abspath(""), "..", "website", "docs"), ], "vector_db": "pgvector", "collection_name": "autogen_docs", "db_config": { "connection_string": "postgresql://testuser:testpwd@localhost:5432/vectordb", # Optional - connect to an external vector database # "host": None, # Optional vector database host # "port": None, # Optional vector database port # "database": None, # Optional vector database name # "username": None, # Optional vector database username # "password": None, # Optional vector database password }, "custom_text_types": ["mdx"], "chunk_token_size": 2000, "model": config_list[0]["model"], "get_or_create": True, }, code_execution_config=False, ) ``` ### Step 2. Initiating Agent Chat with Retrieval Augmentation Once the retrieval augmented agents are set up, you can initiate a chat with retrieval augmentation using the following code: ```python code_problem = "How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached." ragproxyagent.initiate_chat( assistant, message=ragproxyagent.message_generator, problem=code_problem, search_string="spark" ) # search_string is used as an extra filter for the embeddings search, in this case, we only want to search documents that contain "spark". ```
## Online Demo

[Retrieval-Augmented Chat Demo on Hugging Face](https://huggingface.co/spaces/thinkall/autogen-demos)
More Examples and Notebooks For more detailed examples and notebooks showcasing the usage of retrieval augmented agents in AutoGen, refer to the following: - Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat) - Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) - Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) - Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG)
## Roadmap

Explore our detailed roadmap [here](https://github.com/microsoft/autogen/issues/1657) for planned advancements around RAG. Your contributions, feedback, and use cases are highly appreciated! We invite you to engage with us and play a pivotal role in the development of this impactful feature.
autogen/website/docs/topics/llm-caching.md
# LLM Caching AutoGen supports caching API requests so that they can be reused when the same request is issued. This is useful when repeating or continuing experiments for reproducibility and cost saving. Since version [`0.2.8`](https://github.com/microsoft/autogen/releases/tag/v0.2.8), a configurable context manager allows you to easily configure LLM cache, using either [`DiskCache`](/docs/reference/cache/disk_cache#diskcache), [`RedisCache`](/docs/reference/cache/redis_cache#rediscache), or Cosmos DB Cache. All agents inside the context manager will use the same cache. ```python from autogen import Cache # Use Redis as cache with Cache.redis(redis_url="redis://localhost:6379/0") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) # Use DiskCache as cache with Cache.disk() as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) # Use Azure Cosmos DB as cache with Cache.cosmos_db(connection_string="your_connection_string", database_id="your_database_id", container_id="your_container_id") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ``` The cache can also be passed directly to the model client's create call. ```python client = OpenAIWrapper(...) with Cache.disk() as cache: client.create(..., cache=cache) ```
Controlling the seed You can vary the `cache_seed` parameter to get different LLM output while still using cache. ```python # Setting the cache_seed to 1 will use a different cache from the default one # and you will see different output. with Cache.disk(cache_seed=1) as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ```
Cache path By default [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) uses `.cache` for storage. To change the cache directory, set `cache_path_root`: ```python with Cache.disk(cache_path_root="/tmp/autogen_cache") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ```
Disabling cache For backward compatibility, [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) is on by default with `cache_seed` set to 41. To disable caching completely, set `cache_seed` to `None` in the `llm_config` of the agent. ```python assistant = AssistantAgent( "coding_agent", llm_config={ "cache_seed": None, "config_list": OAI_CONFIG_LIST, "max_tokens": 1024, }, ) ```
## Difference between `cache_seed` and OpenAI's `seed` parameter

OpenAI v1.1 introduced a new parameter, `seed`. The difference between AutoGen's `cache_seed` and OpenAI's `seed` is that AutoGen uses an explicit request cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no OpenAI API call is made. OpenAI's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using OpenAI's `seed` with `cache_seed` set to `None`, an OpenAI API call will be made even for the same input, and there is no guarantee of getting exactly the same output.
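As a concrete sketch, both knobs can appear on the same call: `seed` is forwarded to the OpenAI API, while `cache_seed` only controls AutoGen's local request cache (`config_list` is assumed to be defined):

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)

response = client.create(
    messages=[{"role": "user", "content": "2+2="}],
    seed=42,          # OpenAI's best-effort deterministic sampling; an API call is still made
    cache_seed=None,  # disable AutoGen's request cache so every call reaches the API
)
print(client.extract_text_or_completion_object(response))
```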
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
# Non-OpenAI Models AutoGen allows you to use non-OpenAI models through proxy servers that provide an OpenAI-compatible API or a [custom model client](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models) class. Benefits of this flexibility include access to hundreds of models, assigning specialized models to agents (e.g., fine-tuned coding models), the ability to run AutoGen entirely within your environment, utilising both OpenAI and non-OpenAI models in one system, and cost reductions in inference.
## OpenAI-compatible API proxy server

Any proxy server that provides an API compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) will work with AutoGen. These proxy servers can be cloud-based or run locally within your environment.

![Cloud or Local Proxy Servers](images/cloudlocalproxy.png)

### Cloud-based proxy servers

By using cloud-based proxy servers, you are able to use models without requiring the hardware and software to run them. These providers can host open source/weight models, like [Hugging Face](https://huggingface.co/) and [Mistral AI](https://mistral.ai/), or their own closed models.

When cloud-based proxy servers provide an OpenAI-compatible API, using them in AutoGen is straightforward. With [LLM Configuration](/docs/topics/llm_configuration) done in the same way as when using OpenAI's models, the primary difference is typically authentication, which is usually handled through an API key.

Examples of using cloud-based proxy server providers that have an OpenAI-compatible API are provided below:

- [Together AI example](/docs/topics/non-openai-models/cloud-togetherai)
- [Mistral AI example](/docs/topics/non-openai-models/cloud-mistralai)
- [Anthropic Claude example](/docs/topics/non-openai-models/cloud-anthropic)

### Locally run proxy servers

An increasing number of LLM proxy servers are available for use locally. These can be open-source (e.g., LiteLLM, Ollama, vLLM) or closed-source (e.g., LM Studio), and are typically used for running the full stack within your environment.

Similar to cloud-based proxy servers, as long as these proxy servers provide an OpenAI-compatible API, running them in AutoGen is straightforward.

Examples of using locally run proxy servers that have an OpenAI-compatible API are provided below:

- [LiteLLM with Ollama example](/docs/topics/non-openai-models/local-litellm-ollama)
- [LM Studio](/docs/topics/non-openai-models/local-lm-studio)
- [vLLM example](/docs/topics/non-openai-models/local-vllm)

````mdx-code-block
:::tip
If you are planning to use Function Calling, not all cloud-based and local proxy servers support Function Calling with their OpenAI-compatible API, so check their documentation.
:::
````

### Configuration for Non-OpenAI models

Whether you choose a cloud-based or locally run proxy server, the configuration is done in the same way as when using OpenAI's models; see [LLM Configuration](/docs/topics/llm_configuration) for further information. You can use [model configuration filtering](/docs/topics/llm_configuration#config-list-filtering) to assign specific models to agents.
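As a sketch of what such a configuration can look like (the model names, endpoint, and environment variables below are illustrative; check your provider's documentation for the actual values):

```python
import os

import autogen

# Entries for different providers can live in one list; each just needs an
# OpenAI-compatible endpoint and whatever authentication that provider expects.
config_list = [
    {
        "model": "gpt-4",
        "api_key": os.environ["OPENAI_API_KEY"],
    },
    {
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "base_url": "https://api.together.xyz/v1",  # cloud proxy with an OpenAI-compatible API
        "api_key": os.environ["TOGETHER_API_KEY"],
    },
]

# Filter by model name to hand a specific model to a specific agent.
coder_config_list = autogen.filter_config(config_list, {"model": ["mistralai/Mixtral-8x7B-Instruct-v0.1"]})
```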
Custom Model Client class For more advanced users, you can create your own custom model client class, enabling you to define and load your own models. See the [AutoGen with Custom Models: Empowering Users to Use Their Own Inference Mechanism](/blog/2024/01/26/Custom-Models) blog post and [this notebook](/docs/notebooks/agentchat_custom_model/) for a guide to creating custom model client classes.
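At a high level, a custom client follows the `ModelClient` protocol described in that blog post; the skeleton below is a sketch of its shape (the class name and stubbed response handling are placeholders, not a complete implementation):

```python
from types import SimpleNamespace

from autogen import ConversableAgent


class CustomModelClient:
    """Skeleton of a client following AutoGen's ModelClient protocol (sketch only)."""

    def __init__(self, config, **kwargs):
        self.model = config["model"]  # load or connect to your model here

    def create(self, params):
        # Call your own inference stack and wrap the result in an OpenAI-style response.
        response = SimpleNamespace()
        response.choices = [SimpleNamespace(message=SimpleNamespace(content="Hello from my model"))]
        response.model = self.model
        return response

    def message_retrieval(self, response):
        return [choice.message.content for choice in response.choices]

    def cost(self, response):
        return 0  # report cost here if you track it

    @staticmethod
    def get_usage(response):
        return {}  # prompt_tokens, completion_tokens, etc., if available


# The config entry names the client class; the agent then registers the implementation.
config_list = [{"model": "my-local-model", "model_client_cls": "CustomModelClient"}]
assistant = ConversableAgent("assistant", llm_config={"config_list": config_list})
assistant.register_model_client(model_client_cls=CustomModelClient)
```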
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
# Tips for Non-OpenAI Models Here are some tips for using non-OpenAI Models with AutoGen.
## Finding the right model

Every model will perform differently across the operations within your AutoGen setup, such as speaker selection, coding, function calling, content creation, etc. On the whole, larger models (13B+) perform better at following directions and providing more cohesive responses.

Content creation can be performed by most models. Fine-tuned models can be great for very specific tasks, such as function calling and coding.

Tasks that require very accurate outputs, such as speaker selection in a Group Chat scenario, can be a challenge with most open source/weight models. The use of chain-of-thought and/or few-shot prompting can help guide the LLM to provide the output in the format you want.
Validating your program Testing your AutoGen setup against a very large LLM, such as OpenAI's ChatGPT or Anthropic's Claude 3, can help validate your agent setup and configuration. Once a setup is performing as you want, you can replace the models for your agents with non-OpenAI models and iteratively tweak system messages, prompts, and model selection.
Chat template AutoGen utilises a set of chat messages for the conversation between AutoGen/user and LLMs. Each chat message has a role attribute that is typically `user`, `assistant`, or `system`. A chat template is applied during inference and some chat templates implement rules about what roles can be used in specific sequences of messages. For example, when using Mistral AI's API the last chat message must have a role of `user`. In a Group Chat scenario the message used to select the next speaker will have a role of `system` by default and the API will throw an exception for this step. To overcome this the GroupChat's constructor has a parameter called `role_for_select_speaker_messages` that can be used to change the role name to `user`. ```python groupchat = autogen.GroupChat( agents=[user_proxy, coder, pm], messages=[], max_round=12, # Role for select speaker message will be set to 'user' instead of 'system' role_for_select_speaker_messages='user', ) ``` If the chat template associated with a model you want to use doesn't support the role sequence and names used in AutoGen you can modify the chat template. See an example of this on our [vLLM page](/docs/topics/non-openai-models/local-vllm#chat-template).
Discord Join AutoGen's [#alt-models](https://discord.com/channels/1153072414184452236/1201369716057440287) channel on their Discord and discuss non-OpenAI models and configurations.
autogen/website/docs/topics/non-openai-models/local-vllm.md
# vLLM [vLLM](https://github.com/vllm-project/vllm) is a locally run proxy and inference server, providing an OpenAI-compatible API. As it performs both the proxy and the inferencing, you don't need to install an additional inference server. Note: vLLM does not support OpenAI's [Function Calling](https://platform.openai.com/docs/guides/function-calling) (usable with AutoGen). However, it is in development and may be available by the time you read this. Running this stack requires the installation of: 1. AutoGen ([installation instructions](/docs/installation)) 2. vLLM Note: We recommend using a virtual environment for your stack, see [this article](https://microsoft.github.io/autogen/docs/installation/#create-a-virtual-environment-optional) for guidance.
Installing vLLM In your terminal: ```bash pip install vllm ```
Choosing models vLLM will download new models when you run the server. The models are sourced from [Hugging Face](https://huggingface.co), a filtered list of Text Generation models is [here](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending) and vLLM has a list of [commonly used models](https://docs.vllm.ai/en/latest/models/supported_models.html). Use the full model name, e.g. `mistralai/Mistral-7B-Instruct-v0.2`.
## Chat Template

vLLM uses a pre-defined chat template, unless the model has a chat template defined in its config file on Hugging Face. This can cause an issue if the chat template doesn't allow `'role' : 'system'` messages, as used in AutoGen.

Therefore, we will create a chat template for the Mistral AI Mistral 7B model we are using that allows roles of 'user', 'assistant', and 'system'.

Create a file named `autogenmistraltemplate.jinja` with the following content:

````text
{{ bos_token }}
{% for message in messages %}
    {% if ((message['role'] == 'user' or message['role'] == 'system') != (loop.index0 % 2 == 0)) %}
        {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
    {% endif %}
    {% if (message['role'] == 'user' or message['role'] == 'system') %}
        {{ '[INST] ' + message['content'] + ' [/INST]' }}
    {% elif message['role'] == 'assistant' %}
        {{ message['content'] + eos_token}}
    {% else %}
        {{ raise_exception('Only system, user and assistant roles are supported!') }}
    {% endif %}
{% endfor %}
````

````mdx-code-block
:::warning
Chat Templates are specific to the model/model family. The example shown here is for Mistral-based models like Mistral 7B and Mixtral 8x7B.

vLLM has a number of [example templates](https://github.com/vllm-project/vllm/tree/main/examples) for models that can be a starting point for your chat template. Just remember, the template may need to be adjusted to support 'system' role messages.
:::
````
Running vLLM proxy server To run vLLM with the chosen model and our chat template, in your terminal: ```bash python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --chat-template autogenmistraltemplate.jinja ``` By default, vLLM will run on 'http://0.0.0.0:8000'.
Using vLLM with AutoGen Now that we have the URL for the vLLM proxy server, you can use it within AutoGen in the same way as OpenAI or cloud-based proxy servers. As you are running this proxy server locally, no API key is required. As ```api_key``` is a mandatory field for configurations within AutoGen we put a dummy value in it, as per the example below. Although we are specifying the model when running the vLLM command, we must still put it into the ```model``` value for vLLM. ```python from autogen import UserProxyAgent, ConversableAgent local_llm_config={ "config_list": [ { "model": "mistralai/Mistral-7B-Instruct-v0.2", # Same as in vLLM command "api_key": "NotRequired", # Not needed "base_url": "http://0.0.0.0:8000/v1" # Your vLLM URL, with '/v1' added } ], "cache_seed": None # Turns off caching, useful for testing different models } # Create the agent that uses the LLM. assistant = ConversableAgent("agent", llm_config=local_llm_config,system_message="") # Create the agent that represents the user in the conversation. user_proxy = UserProxyAgent("user", code_execution_config=False,system_message="") # Let the assistant start the conversation. It will end when the user types exit. assistant.initiate_chat(user_proxy, message="How can I help you today?") ``` Output: ```` text agent (to user): How can I help you today? -------------------------------------------------------------------------------- Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: Why is the sky blue? user (to agent): Why is the sky blue? -------------------------------------------------------------------------------- >>>>>>>> USING AUTO REPLY... agent (to user): The sky appears blue due to a phenomenon called Rayleigh scattering. As sunlight reaches Earth's atmosphere, it interacts with molecules and particles in the air, causing the scattering of light. Blue light has a shorter wavelength and gets scattered more easily than other colors, which is why the sky appears blue during a clear day. However, during sunrise and sunset, the sky can appear red, orange, or purple due to a different type of scattering called scattering by dust, pollutants, and water droplets, which scatter longer wavelengths of light more effectively. -------------------------------------------------------------------------------- Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: and why does it turn red? user (to agent): and why does it turn red? -------------------------------------------------------------------------------- >>>>>>>> USING AUTO REPLY... agent (to user): During sunrise and sunset, the angle of the sun's rays in the sky is lower, and they have to pass through more of the Earth's atmosphere before reaching an observer. This additional distance results in more scattering of sunlight, which preferentially scatters the longer wavelengths (red, orange, and yellow) more than the shorter wavelengths (blue and green). The scattering of sunlight by the Earth's atmosphere causes the red, orange, and yellow colors to be more prevalent in the sky during sunrise and sunset, resulting in the beautiful display of colors often referred to as a sunrise or sunset. As the sun continues to set, the sky can transition to various shades of purple, pink, and eventually dark blue or black, as the available sunlight continues to decrease and the longer wavelengths are progressively scattered less effectively. 
-------------------------------------------------------------------------------- Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: exit ````
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md
autogen
# Introduction to Transform Messages Why do we need to handle long contexts? The problem arises from several constraints and requirements: 1. Token limits: LLMs have token limits that restrict the amount of textual data they can process. If we exceed these limits, we may encounter errors or incur additional costs. By preprocessing the chat history, we can ensure that we stay within the acceptable token range. 2. Context relevance: As conversations progress, retaining the entire chat history may become less relevant or even counterproductive. Keeping only the most recent and pertinent messages can help the LLMs focus on the most crucial context, leading to more accurate and relevant responses. 3. Efficiency: Processing long contexts can consume more computational resources, leading to slower response times.
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md
autogen
Transform Messages Capability The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more. :::info Requirements Install `pyautogen`: ```bash pip install pyautogen ``` For more information, please refer to the [installation guide](/docs/installation/). ::: ### Exploring and Understanding Transformations Let's start by exploring the available transformations and understanding how they work. We will start off by importing the required modules. ```python import copy import pprint from autogen.agentchat.contrib.capabilities import transforms ``` #### Example 1: Limiting the Total Number of Messages Consider a scenario where you want to limit the context history to only the most recent messages to maintain efficiency and relevance. You can achieve this with the MessageHistoryLimiter transformation: ```python # Limit the message history to the 3 most recent messages max_msg_transfrom = transforms.MessageHistoryLimiter(max_messages=3) messages = [ {"role": "user", "content": "hello"}, {"role": "assistant", "content": [{"type": "text", "text": "there"}]}, {"role": "user", "content": "how"}, {"role": "assistant", "content": [{"type": "text", "text": "are you doing?"}]}, {"role": "user", "content": "very very very very very very long string"}, ] processed_messages = max_msg_transfrom.apply_transform(copy.deepcopy(messages)) pprint.pprint(processed_messages) ``` ```console [{'content': 'how', 'role': 'user'}, {'content': [{'text': 'are you doing?', 'type': 'text'}], 'role': 'assistant'}, {'content': 'very very very very very very long string', 'role': 'user'}] ``` By applying the `MessageHistoryLimiter`, we can see that we were able to limit the context history to the 3 most recent messages. #### Example 2: Limiting the Number of Tokens To adhere to token limitations, use the `MessageTokenLimiter` transformation. This limits tokens per message and the total token count across all messages. Additionally, a `min_tokens` threshold can be applied: ```python # Limit the token limit per message to 3 tokens token_limit_transform = transforms.MessageTokenLimiter(max_tokens_per_message=3, min_tokens=10) processed_messages = token_limit_transform.apply_transform(copy.deepcopy(messages)) pprint.pprint(processed_messages) ``` ```console [{'content': 'hello', 'role': 'user'}, {'content': [{'text': 'there', 'type': 'text'}], 'role': 'assistant'}, {'content': 'how', 'role': 'user'}, {'content': [{'text': 'are you doing', 'type': 'text'}], 'role': 'assistant'}, {'content': 'very very very', 'role': 'user'}] ``` We can see that we were able to limit the number of tokens to 3, which is equivalent to 3 words for this instance. In the following example we will explore the effect of the `min_tokens` threshold. ```python short_messages = [ {"role": "user", "content": "hello there, how are you?"}, {"role": "assistant", "content": [{"type": "text", "text": "hello"}]}, ] processed_short_messages = token_limit_transform.apply_transform(copy.deepcopy(short_messages)) pprint.pprint(processed_short_messages) ``` ```console [{'content': 'hello there, how are you?', 'role': 'user'}, {'content': [{'text': 'hello', 'type': 'text'}], 'role': 'assistant'}] ``` We can see that no transformation was applied, because the threshold of 10 total tokens was not reached. 
### Apply Transformations Using Agents So far, we have only tested the `MessageHistoryLimiter` and `MessageTokenLimiter` transformations individually, let's test these transformations with AutoGen's agents. #### Setting Up the Stage ```python import os import copy import autogen from autogen.agentchat.contrib.capabilities import transform_messages, transforms from typing import Dict, List config_list = [{"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")}] # Define your agent; the user proxy and an assistant assistant = autogen.AssistantAgent( "assistant", llm_config={"config_list": config_list}, ) user_proxy = autogen.UserProxyAgent( "user_proxy", human_input_mode="NEVER", is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""), max_consecutive_auto_reply=10, ) ``` :::tip Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration). ::: We first need to write the `test` function that creates a very long chat history by exchanging messages between an assistant and a user proxy agent, and then attempts to initiate a new chat without clearing the history, potentially triggering an error due to token limits. ```python # Create a very long chat history that is bound to cause a crash for gpt 3.5 def test(assistant: autogen.ConversableAgent, user_proxy: autogen.UserProxyAgent): for _ in range(1000): # define a fake, very long messages assitant_msg = {"role": "assistant", "content": "test " * 1000} user_msg = {"role": "user", "content": ""} assistant.send(assitant_msg, user_proxy, request_reply=False, silent=True) user_proxy.send(user_msg, assistant, request_reply=False, silent=True) try: user_proxy.initiate_chat(assistant, message="plot and save a graph of x^2 from -10 to 10", clear_history=False) except Exception as e: print(f"Encountered an error with the base assistant: \n{e}") ``` The first run will be the default implementation, where the agent does not have the `TransformMessages` capability. ```python test(assistant, user_proxy) ``` Running this test will result in an error due to the large number of tokens sent to OpenAI's gpt 3.5. ```console user_proxy (to assistant): plot and save a graph of x^2 from -10 to 10 -------------------------------------------------------------------------------- Encountered an error with the base assistant Error code: 429 - {'error': {'message': 'Request too large for gpt-3.5-turbo in organization org-U58JZBsXUVAJPlx2MtPYmdx1 on tokens per min (TPM): Limit 60000, Requested 1252546. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}} ``` Now let's add the `TransformMessages` capability to the assistant and run the same test. ```python context_handling = transform_messages.TransformMessages( transforms=[ transforms.MessageHistoryLimiter(max_messages=10), transforms.MessageTokenLimiter(max_tokens=1000, max_tokens_per_message=50, min_tokens=500), ] ) context_handling.add_to_agent(assistant) test(assistant, user_proxy) ``` The following console output shows that the agent is now able to handle the large number of tokens sent to OpenAI's gpt 3.5. `````console user_proxy (to assistant): plot and save a graph of x^2 from -10 to 10 -------------------------------------------------------------------------------- Truncated 3804 tokens. 
Tokens reduced from 4019 to 215 assistant (to user_proxy): To plot and save a graph of \( x^2 \) from -10 to 10, we can use Python with the matplotlib library. Here's the code to generate the plot and save it to a file named "plot.png": ```python # filename: plot_quadratic.py import matplotlib.pyplot as plt import numpy as np # Create an array of x values from -10 to 10 x = np.linspace(-10, 10, 100) y = x**2 # Plot the graph plt.plot(x, y) plt.xlabel('x') plt.ylabel('x^2') plt.title('Plot of x^2') plt.grid(True) # Save the plot as an image file plt.savefig('plot.png') # Display the plot plt.show() ```` You can run this script in a Python environment. It will generate a plot of \( x^2 \) from -10 to 10 and save it as "plot.png" in the same directory where the script is executed. Execute the Python script to create and save the graph. After executing the code, you should see a file named "plot.png" in the current directory, containing the graph of \( x^2 \) from -10 to 10. You can view this file to see the plotted graph. Is there anything else you would like to do or need help with? If not, you can type "TERMINATE" to end our conversation. --- ````` ### Create Custom Transformations to Handle Sensitive Content You can create custom transformations by implementing the `MessageTransform` protocol, which provides flexibility to handle various use cases. One practical application is to create a custom transformation that redacts sensitive information, such as API keys, passwords, or personal data, from the chat history or logs. This ensures that confidential data is not inadvertently exposed, enhancing the security and privacy of your conversational AI system. We will demonstrate this by implementing a custom transformation called `MessageRedact` that detects and redacts OpenAI API keys from the conversation history. This transformation is particularly useful when you want to prevent accidental leaks of API keys, which could compromise the security of your system. ```python import os import pprint import copy import re import autogen from autogen.agentchat.contrib.capabilities import transform_messages, transforms from typing import Dict, List # The transform must adhere to transform_messages.MessageTransform protocol. 
from typing import Tuple class MessageRedact: def __init__(self): self._openai_key_pattern = r"sk-([a-zA-Z0-9]{48})" self._replacement_string = "REDACTED" def apply_transform(self, messages: List[Dict]) -> List[Dict]: temp_messages = copy.deepcopy(messages) for message in temp_messages: if isinstance(message["content"], str): message["content"] = re.sub(self._openai_key_pattern, self._replacement_string, message["content"]) elif isinstance(message["content"], list): for item in message["content"]: if item["type"] == "text": item["text"] = re.sub(self._openai_key_pattern, self._replacement_string, item["text"]) return temp_messages def get_logs(self, pre_transform_messages: List[Dict], post_transform_messages: List[Dict]) -> Tuple[str, bool]: keys_redacted = self._count_redacted(post_transform_messages) - self._count_redacted(pre_transform_messages) if keys_redacted > 0: return f"Redacted {keys_redacted} OpenAI API keys.", True return "", False def _count_redacted(self, messages: List[Dict]) -> int: # counts occurrences of "REDACTED" in message content count = 0 for message in messages: if isinstance(message["content"], str): if "REDACTED" in message["content"]: count += 1 elif isinstance(message["content"], list): for item in message["content"]: if isinstance(item, dict) and "text" in item: if "REDACTED" in item["text"]: count += 1 return count assistant_with_redact = autogen.AssistantAgent( "assistant", llm_config={"config_list": config_list}, max_consecutive_auto_reply=1, ) redact_handling = transform_messages.TransformMessages(transforms=[MessageRedact()]) redact_handling.add_to_agent(assistant_with_redact) user_proxy = autogen.UserProxyAgent( "user_proxy", human_input_mode="NEVER", max_consecutive_auto_reply=1, ) messages = [ {"content": "api key 1 = sk-7nwt00xv6fuegfu3gnwmhrgxvuc1cyrhxcq1quur9zvf05fy"}, # Don't worry, the key is randomly generated {"content": [{"type": "text", "text": "API key 2 = sk-9wi0gf1j2rz6utaqd3ww3o6c1h1n28wviypk7bd81wlj95an"}]}, ] for message in messages: user_proxy.send(message, assistant_with_redact, request_reply=False, silent=True) result = user_proxy.initiate_chat( assistant_with_redact, message="What are the two API keys that I just provided", clear_history=False ) ``` ```console user_proxy (to assistant): What are the two API keys that I just provided -------------------------------------------------------------------------------- Redacted 2 OpenAI API keys. assistant (to user_proxy): As an AI, I must inform you that it is not safe to share API keys publicly as they can be used to access your private data or services that can incur costs. Given that you've typed "REDACTED" instead of the actual keys, it seems you are aware of the privacy concerns and are likely testing my response or simulating an exchange without exposing real credentials, which is a good practice for privacy and security reasons. To respond directly to your direct question: The two API keys you provided are both placeholders indicated by the text "REDACTED", and not actual API keys. If these were real keys, I would have reiterated the importance of keeping them secure and would not display them here. Remember to keep your actual API keys confidential to prevent unauthorized use. If you've accidentally exposed real API keys, you should revoke or regenerate them as soon as possible through the corresponding service's API management console.
-------------------------------------------------------------------------------- user_proxy (to assistant): -------------------------------------------------------------------------------- Redacted 2 OpenAI API keys. ```
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
# Compressing Text with LLMLingua Text compression is crucial for optimizing interactions with LLMs, especially when dealing with long prompts that can lead to higher costs and slower response times. LLMLingua is a tool designed to compress prompts effectively, enhancing the efficiency and cost-effectiveness of LLM operations. This guide introduces LLMLingua's integration with AutoGen, demonstrating how to use this tool to compress text, thereby optimizing the usage of LLMs for various applications. :::info Requirements Install `pyautogen[long-context]` and `PyMuPDF`: ```bash pip install "pyautogen[long-context]" PyMuPDF ``` For more information, please refer to the [installation guide](/docs/installation/). :::
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 1: Compressing AutoGen Research Paper using LLMLingua We will look at how we can use `TextMessageCompressor` to compress an AutoGen research paper using `LLMLingua`. Here's how you can initialize `TextMessageCompressor` with LLMLingua, a text compressor that adheres to the `TextCompressor` protocol. ```python import tempfile import fitz # PyMuPDF import requests from autogen.agentchat.contrib.capabilities.text_compressors import LLMLingua from autogen.agentchat.contrib.capabilities.transforms import TextMessageCompressor AUTOGEN_PAPER = "https://arxiv.org/pdf/2308.08155" def extract_text_from_pdf(): # Download the PDF response = requests.get(AUTOGEN_PAPER) response.raise_for_status() # Ensure the download was successful text = "" # Save the PDF to a temporary file with tempfile.TemporaryDirectory() as temp_dir: with open(temp_dir + "temp.pdf", "wb") as f: f.write(response.content) # Open the PDF with fitz.open(temp_dir + "temp.pdf") as doc: # Read and extract text from each page for page in doc: text += page.get_text() return text # Example usage pdf_text = extract_text_from_pdf() llm_lingua = LLMLingua() text_compressor = TextMessageCompressor(text_compressor=llm_lingua) compressed_text = text_compressor.apply_transform([{"content": pdf_text}]) print(text_compressor.get_logs([], [])) ``` ```console ('19765 tokens saved with text compression.', True) ```
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 2: Integrating LLMLingua with `ConversableAgent` Now, let's integrate `LLMLingua` into a conversational agent within AutoGen. This allows dynamic compression of prompts before they are sent to the LLM. ```python import os import autogen from autogen.agentchat.contrib.capabilities import transform_messages system_message = "You are a world class researcher." config_list = [{"model": "gpt-4-turbo", "api_key": os.getenv("OPENAI_API_KEY")}] # Define your agent; the user proxy and an assistant researcher = autogen.ConversableAgent( "assistant", llm_config={"config_list": config_list}, max_consecutive_auto_reply=1, system_message=system_message, human_input_mode="NEVER", ) user_proxy = autogen.UserProxyAgent( "user_proxy", human_input_mode="NEVER", is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""), max_consecutive_auto_reply=1, ) ``` :::tip Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration). ::: ```python context_handling = transform_messages.TransformMessages(transforms=[text_compressor]) context_handling.add_to_agent(researcher) message = "Summarize this research paper for me, include the important information" + pdf_text result = user_proxy.initiate_chat(recipient=researcher, clear_history=True, message=message, silent=True) print(result.chat_history[1]["content"]) ``` ```console 19953 tokens saved with text compression. The paper describes AutoGen, a framework designed to facilitate the development of diverse large language model (LLM) applications through conversational multi-agent systems. The framework emphasizes customization and flexibility, enabling developers to define agent interaction behaviors in natural language or computer code. Key components of AutoGen include: 1. **Conversable Agents**: These are customizable agents designed to operate autonomously or through human interaction. They are capable of initiating, maintaining, and responding within conversations, contributing effectively to multi-agent dialogues. 2. **Conversation Programming**: AutoGen introduces a programming paradigm centered around conversational interactions among agents. This approach simplifies the development of complex applications by streamlining how agents communicate and interact, focusing on conversational logic rather than traditional coding for mats. 3. **Agent Customization and Flexibility**: Developers have the freedom to define the capabilities and behaviors of agents within the system, allowing for a wide range of applications across different domains. 4. **Application Versatility**: The paper outlines various use cases from mathematics and coding to decision-making and entertainment, demonstrating AutoGen's ability to cope with a broad spectrum of complexities and requirements. 5. **Hierarchical and Joint Chat Capabilities**: The system supports complex conversation patterns including hierarchical and multi-agent interactions, facilitating robust dialogues that can dynamically adjust based on the conversation context and the agents' roles. 6. **Open-source and Community Engagement**: AutoGen is presented as an open-source framework, inviting contributions and adaptations from the global development community to expand its capabilities and applications. The framework's architecture is designed so that it can be seamlessly integrated into existing systems, providing a robust foundation for developing sophisticated multi-agent applications that leverage the capabilities of modern LLMs. 
The paper also discusses potential ethical considerations and future improvements, highlighting the importance of continual development in response to evolving tech landscapes and user needs. ```
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 3: Modifying LLMLingua's Compression Parameters LLMLingua's flexibility allows for various configurations, such as customizing instructions for the LLM or setting specific token counts for compression. This example demonstrates how to set a target token count, enabling the use of models with smaller context sizes like gpt-3.5. ```python config_list = [{"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")}] researcher = autogen.ConversableAgent( "assistant", llm_config={"config_list": config_list}, max_consecutive_auto_reply=1, system_message=system_message, human_input_mode="NEVER", ) text_compressor = TextMessageCompressor( text_compressor=llm_lingua, compression_params={"target_token": 13000}, cache=None, ) context_handling = transform_messages.TransformMessages(transforms=[text_compressor]) context_handling.add_to_agent(researcher) compressed_text = text_compressor.apply_transform([{"content": message}]) result = user_proxy.initiate_chat(recipient=researcher, clear_history=True, message=message, silent=True) print(result.chat_history[1]["content"]) ``` ```console 25308 tokens saved with text compression. Based on the extensive research paper information provided, it seems that the focus is on developing a framework called AutoGen for creating multi-agent conversations based on Large Language Models (LLMs) for a variety of applications such as math problem solving, coding, decision-making, and more. The paper discusses the importance of incorporating diverse roles of LLMs, human inputs, and tools to enhance the capabilities of the conversable agents within the AutoGen framework. It also delves into the effectiveness of different systems in various scenarios, showcases the implementation of AutoGen in pilot studies, and compares its performance with other systems in tasks like math problem-solving, coding, and decision-making. The paper also highlights the different features and components of AutoGen such as the AssistantAgent, UserProxyAgent, ExecutorAgent, and GroupChatManager, emphasizing its flexibility, ease of use, and modularity in managing multi-agent interactions. It presents case analyses to demonstrate the effectiveness of AutoGen in various applications and scenarios. Furthermore, the paper includes manual evaluations, scenario testing, code examples, and detailed comparisons with other systems like ChatGPT, OptiGuide, MetaGPT, and more, to showcase the performance and capabilities of the AutoGen framework. Overall, the research paper showcases the potential of AutoGen in facilitating dynamic multi-agent conversations, enhancing decision-making processes, and improving problem-solving tasks with the integration of LLMs, human inputs, and tools in a collaborative framework. ```
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
# Agent Backed by OpenAI Assistant API The GPTAssistantAgent is a powerful component of the AutoGen framework, utilizing OpenAI's Assistant API to enhance agents with advanced capabilities. This agent enables the integration of multiple tools such as the Code Interpreter, File Search, and Function Calling, allowing for a highly customizable and dynamic interaction model. Version Requirements: - AutoGen: Version 0.2.27 or higher. - OpenAI: Version 1.21 or higher. Key Features of the GPTAssistantAgent: - Multi-Tool Mastery: Agents can leverage a combination of OpenAI's built-in tools, like [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and [File Search](https://platform.openai.com/docs/assistants/tools/file-search), alongside custom tools you create or integrate via [Function Calling](https://platform.openai.com/docs/assistants/tools/function-calling). - Streamlined Conversation Management: Benefit from persistent threads that automatically store message history and adjust based on the model's context length. This simplifies development by allowing you to focus on adding new messages rather than managing conversation flow. - File Access and Integration: Enable agents to access and utilize files in various formats. Files can be incorporated during agent creation or throughout conversations via threads. Additionally, agents can generate files (e.g., images, spreadsheets) and cite referenced files within their responses. For a practical illustration, here are some examples: - [Chat with OpenAI Assistant using function call](/docs/notebooks/agentchat_oai_assistant_function_call) demonstrates how to leverage function calling to enable intelligent function selection. - [GPTAssistant with Code Interpreter](/docs/notebooks/agentchat_oai_code_interpreter) showcases the integration of the Code Interpreter tool which executes Python code dynamically within applications. - [Group Chat with GPTAssistantAgent](/docs/notebooks/agentchat_oai_assistant_groupchat) demonstrates how to use the GPTAssistantAgent in AutoGen's group chat mode, enabling collaborative task performance through automated chat with agents powered by LLMs, tools, or humans.
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
Create an OpenAI Assistant in AutoGen ```python import os from autogen import config_list_from_json from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent assistant_id = os.environ.get("ASSISTANT_ID", None) config_list = config_list_from_json("OAI_CONFIG_LIST") llm_config = { "config_list": config_list, } assistant_config = { # define the OpenAI Assistant behavior as needed } oai_agent = GPTAssistantAgent( name="oai_agent", instructions="I'm an openai assistant running in autogen", llm_config=llm_config, assistant_config=assistant_config, ) ```
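Once created, the `GPTAssistantAgent` behaves like any other conversable agent. As a minimal sketch (not part of the original snippet, so treat the agent name and settings as illustrative), you can pair it with a `UserProxyAgent` and start a chat:

```python
from autogen import UserProxyAgent

# Pair the OpenAI Assistant-backed agent with a user proxy and start a short chat.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",     # fully automated for this sketch
    code_execution_config=False,  # no local code execution needed here
    max_consecutive_auto_reply=1,
)

user_proxy.initiate_chat(oai_agent, message="Briefly introduce yourself.")
```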
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
Use OpenAI Assistant Built-in Tools and Function Calling ### Code Interpreter The [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) empowers your agents to write and execute Python code in a secure environment provided by OpenAI. This unlocks several capabilities, including but not limited to: - Process data: Handle various data formats and manipulate data on the fly. - Generate outputs: Create new data files or even visualizations like graphs. - ... Use the Code Interpreter with the following configuration: ```python assistant_config = { "tools": [ {"type": "code_interpreter"}, ], "tool_resources": { "code_interpreter": { "file_ids": ["$file.id"] # optional. Files that are passed at the Assistant level are accessible by all Runs with this Assistant. } } } ``` To get the `file.id`, you can employ two methods: 1. OpenAI Playground: Leverage the OpenAI Playground, an interactive platform accessible at https://platform.openai.com/playground, to upload your files and obtain the corresponding file IDs. 2. Code-Based Uploading: Alternatively, you can upload files and retrieve their file IDs programmatically using the following code snippet: ```python from openai import OpenAI client = OpenAI( # Defaults to os.environ.get("OPENAI_API_KEY") ) # Upload a file with an "assistants" purpose file = client.files.create( file=open("mydata.csv", "rb"), purpose='assistants' ) ``` ### File Search The [File Search](https://platform.openai.com/docs/assistants/tools/file-search) tool empowers your agents to tap into knowledge beyond their pre-trained models. This allows you to incorporate your own documents and data, such as product information or code files, into your agent's capabilities. Use File Search with the following configuration: ```python assistant_config = { "tools": [ {"type": "file_search"}, ], "tool_resources": { "file_search": { "vector_store_ids": ["$vector_store.id"] } } } ``` Here's how to obtain the `vector_store.id` using two methods: 1. OpenAI Playground: Leverage the OpenAI Playground, an interactive platform accessible at https://platform.openai.com/playground, to create a vector store, upload your files, and add them to your vector store. Once complete, you'll be able to retrieve the associated `vector_store.id`. 2. Code-Based Uploading: Alternatively, you can create a vector store and upload files to it programmatically using the following code snippet: ```python from openai import OpenAI client = OpenAI( # Defaults to os.environ.get("OPENAI_API_KEY") ) # Step 1: Create a Vector Store vector_store = client.beta.vector_stores.create(name="Financial Statements") print("Vector Store created:", vector_store.id) # This is your vector_store.id # Step 2: Prepare Files for Upload file_paths = ["edgar/goog-10k.pdf", "edgar/brka-10k.txt"] file_streams = [open(path, "rb") for path in file_paths] # Step 3: Upload Files and Add to Vector Store (with status polling) file_batch = client.beta.vector_stores.file_batches.upload_and_poll( vector_store_id=vector_store.id, files=file_streams ) # Step 4: Verify Completion (Optional) print("File batch status:", file_batch.status) print("Uploaded file count:", file_batch.file_counts.processed) ``` ### Function calling Function Calling empowers you to extend the capabilities of your agents with your pre-defined functionalities: you describe custom functions to the Assistant, enabling intelligent function selection and argument generation. Use Function Calling with the following configuration:
```python # learn more from https://platform.openai.com/docs/guides/function-calling/function-calling from autogen.function_utils import get_function_schema def get_current_weather(location: str) -> dict: """ Retrieves the current weather for a specified location. Args: location (str): The location to get the weather for. Returns: dict: A dictionary with weather details. """ # Simulated response return { "location": location, "temperature": 22.5, "description": "Partly cloudy" } api_schema = get_function_schema( get_current_weather, name=get_current_weather.__name__, description="Returns the current weather data for a specified location." ) assistant_config = { "tools": [ { "type": "function", "function": api_schema, } ], } ```
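Describing the function in `assistant_config` tells the Assistant when to call it, but the agent still needs the actual callable in order to execute it. One way to wire this up, sketched here under the assumption that your setup follows the function-calling pattern in the linked notebook (the agent name and instructions below are illustrative), is to register the implementation with the agent:

```python
weather_agent = GPTAssistantAgent(
    name="weather_assistant",  # illustrative name
    instructions="Report the current weather for the location the user asks about.",
    llm_config=llm_config,
    assistant_config=assistant_config,
)

# Map the tool name the Assistant will call to the local Python implementation.
weather_agent.register_function(
    function_map={get_current_weather.__name__: get_current_weather},
)
```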
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
# Enhanced Inference `autogen.OpenAIWrapper` provides enhanced LLM inference for `openai>=1`. `autogen.Completion` is a drop-in replacement of `openai.Completion` and `openai.ChatCompletion` for enhanced LLM inference using `openai<1`. There are a number of benefits of using `autogen` to perform inference: performance tuning, API unification, caching, error handling, multi-config inference, result filtering, templating and so on.
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Tune Inference Parameters (for openai<1) Find a list of examples in this page: [Tune Inference Parameters Examples](../Examples.md#inference-hyperparameters-tuning) ### Choices to optimize The cost of using foundation models for text generation is typically measured in terms of the number of tokens in the input and output combined. From the perspective of an application builder using foundation models, the use case is to maximize the utility of the generated text under an inference budget constraint (e.g., measured by the average dollar cost needed to solve a coding problem). This can be achieved by optimizing the hyperparameters of the inference, which can significantly affect both the utility and the cost of the generated text. The tunable hyperparameters include: 1. model - this is a required input, specifying the model ID to use. 1. prompt/messages - the input prompt/messages to the model, which provides the context for the text generation task. 1. max_tokens - the maximum number of tokens (words or word pieces) to generate in the output. 1. temperature - a value between 0 and 1 that controls the randomness of the generated text. A higher temperature will result in more random and diverse text, while a lower temperature will result in more predictable text. 1. top_p - a value between 0 and 1 that controls the sampling probability mass for each token generation. A lower top_p value will make it more likely to generate text based on the most likely tokens, while a higher value will allow the model to explore a wider range of possible tokens. 1. n - the number of responses to generate for a given prompt. Generating multiple responses can provide more diverse and potentially more useful output, but it also increases the cost of the request. 1. stop - a list of strings that, when encountered in the generated text, will cause the generation to stop. This can be used to control the length or the validity of the output. 1. presence_penalty, frequency_penalty - values that control the relative importance of the presence and frequency of certain words or phrases in the generated text. 1. best_of - the number of responses to generate server-side when selecting the "best" (the one with the highest log probability per token) response for a given prompt. The cost and utility of text generation are intertwined with the joint effect of these hyperparameters. There are also complex interactions among subsets of the hyperparameters. For example, the temperature and top_p are not recommended to be altered from their default values together because they both control the randomness of the generated text, and changing both at the same time can result in conflicting effects; n and best_of are rarely tuned together because if the application can process multiple outputs, filtering on the server side causes unnecessary information loss; both n and max_tokens will affect the total number of tokens generated, which in turn will affect the cost of the request. These interactions and trade-offs make it difficult to manually determine the optimal hyperparameter settings for a given text generation task. *Do the choices matter? Check this [blogpost](/blog/2023/04/21/LLM-tuning-math) to find example tuning results about gpt-3.5-turbo and gpt-4.* With AutoGen, the tuning can be performed with the following information: 1. Validation data. 1. Evaluation function. 1. Metric to optimize. 1. Search space. 1. Budgets: inference and optimization respectively. ### Validation data Collect a diverse set of instances. 
They can be stored in an iterable of dicts. For example, each instance dict can contain "problem" as a key and the description str of a math problem as the value; and "solution" as a key and the solution str as the value. ### Evaluation function The evaluation function should take a list of responses, and other keyword arguments corresponding to the keys in each validation data instance as input, and output a dict of metrics. For example, ```python def eval_math_responses(responses: List[str], solution: str, **args) -> Dict: # select a response from the list of responses answer = voted_answer(responses) # check whether the answer is correct return {"success": is_equivalent(answer, solution)} ``` `autogen.code_utils` and `autogen.math_utils` offer some example evaluation functions for code generation and math problem solving. ### Metric to optimize The metric to optimize is usually an aggregated metric over all the tuning data instances. For example, users can specify "success" as the metric and "max" as the optimization mode. By default, the aggregation function is taking the average. Users can provide a customized aggregation function if needed. ### Search space Users can specify the (optional) search range for each hyperparameter. 1. model. Either a constant str, or multiple choices specified by `flaml.tune.choice`. 1. prompt/messages. Prompt is either a str or a list of strs, of the prompt templates. messages is a list of dicts or a list of lists, of the message templates. Each prompt/message template will be formatted with each data instance. For example, the prompt template can be: "{problem} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed{{}}." And `{problem}` will be replaced by the "problem" field of each data instance. 1. max_tokens, n, best_of. They can be constants, or specified by `flaml.tune.randint`, `flaml.tune.qrandint`, `flaml.tune.lograndint` or `flaml.qlograndint`. By default, max_tokens is searched in [50, 1000); n is searched in [1, 100); and best_of is fixed to 1. 1. stop. It can be a str or a list of strs, or a list of lists of strs or None. Default is None. 1. temperature or top_p. One of them can be specified as a constant or by `flaml.tune.uniform` or `flaml.tune.loguniform` etc. Please don't provide both. By default, each configuration will choose either a temperature or a top_p in [0, 1] uniformly. 1. presence_penalty, frequency_penalty. They can be constants or specified by `flaml.tune.uniform` etc. Not tuned by default. ### Budgets One can specify an inference budget and an optimization budget. The inference budget refers to the average inference cost per data instance. The optimization budget refers to the total budget allowed in the tuning process. Both are measured by dollars and follow the price per 1000 tokens. ### Perform tuning Now, you can use `autogen.Completion.tune` for tuning. For example, ```python import autogen config, analysis = autogen.Completion.tune( data=tune_data, metric="success", mode="max", eval_func=eval_func, inference_budget=0.05, optimization_budget=3, num_samples=-1, ) ``` `num_samples` is the number of configurations to sample. -1 means unlimited (until optimization budget is exhausted). The returned `config` contains the optimized configuration and `analysis` contains an ExperimentAnalysis object for all the tried configurations and results. The tuned config can be used to perform inference.
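As a short sketch of that last step (not part of the original text; it assumes the legacy `extract_text` helper on `autogen.Completion` and uses a placeholder `context` matching your data schema and prompt template):

```python
# Perform inference on a new instance with the tuned hyperparameters (openai<1 API).
response = autogen.Completion.create(
    context={"problem": "How many positive integers, not exceeding 100, are multiples of 2 or 3 but not 4?"},
    **config,  # the optimized configuration returned by autogen.Completion.tune
)
print(autogen.Completion.extract_text(response))
```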
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
API unification `autogen.OpenAIWrapper.create()` can be used to create completions for both chat and non-chat models, and both OpenAI API and Azure OpenAI API. ```python from autogen import OpenAIWrapper # OpenAI endpoint client = OpenAIWrapper() # ChatCompletion response = client.create(messages=[{"role": "user", "content": "2+2="}], model="gpt-3.5-turbo") # extract the response text print(client.extract_text_or_completion_object(response)) # get cost of this completion print(response.cost) # Azure OpenAI endpoint client = OpenAIWrapper(api_key=..., base_url=..., api_version=..., api_type="azure") # Completion response = client.create(prompt="2+2=", model="gpt-3.5-turbo-instruct") # extract the response text print(client.extract_text_or_completion_object(response)) ``` For local LLMs, one can spin up an endpoint using a package like [FastChat](https://github.com/lm-sys/FastChat), and then use the same API to send a request. See [here](/blog/2023/07/14/Local-LLMs) for examples on how to make inference with local LLMs. For custom model clients, one can register the client with `autogen.OpenAIWrapper.register_model_client` and then use the same API to send a request. See [here](/blog/2024/01/26/Custom-Models) for examples on how to make inference with custom model clients.
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Usage Summary The `OpenAIWrapper` from `autogen` tracks token counts and costs of your API calls. Use the `create()` method to initiate requests and `print_usage_summary()` to retrieve a detailed usage report, including total cost and token usage for both cached and actual requests. - `mode=["actual", "total"]` (default): print usage summary for all completions and non-caching completions. - `mode='actual'`: only print non-cached usage. - `mode='total'`: only print all usage (including cache). Reset your session's usage data with `clear_usage_summary()` when needed. [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_client_cost.ipynb) Example usage: ```python from autogen import OpenAIWrapper client = OpenAIWrapper() client.create(messages=[{"role": "user", "content": "Python learning tips."}], model="gpt-3.5-turbo") client.print_usage_summary() # Display usage client.clear_usage_summary() # Reset usage data ``` Sample output: ``` Usage summary excluding cached usage: Total cost: 0.00015 * Model 'gpt-3.5-turbo': cost: 0.00015, prompt_tokens: 25, completion_tokens: 58, total_tokens: 83 Usage summary including cached usage: Total cost: 0.00027 * Model 'gpt-3.5-turbo': cost: 0.00027, prompt_tokens: 50, completion_tokens: 100, total_tokens: 150 ``` Note: if using a custom model client (see [here](/blog/2024/01/26/Custom-Models) for details) and if usage summary is not implemented, then the usage summary will not be available.
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Caching Moved to [here](/docs/topics/llm-caching).
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Error handling ### Runtime error One can pass a list of configurations of different models/endpoints to mitigate the rate limits and other runtime error. For example, ```python client = OpenAIWrapper( config_list=[ { "model": "gpt-4", "api_key": os.environ.get("AZURE_OPENAI_API_KEY"), "api_type": "azure", "base_url": os.environ.get("AZURE_OPENAI_API_BASE"), "api_version": "2024-02-01", }, { "model": "gpt-3.5-turbo", "api_key": os.environ.get("OPENAI_API_KEY"), "base_url": "https://api.openai.com/v1", }, { "model": "llama2-chat-7B", "base_url": "http://127.0.0.1:8080", }, { "model": "microsoft/phi-2", "model_client_cls": "CustomModelClient" } ], ) ``` `client.create()` will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, a locally hosted llama2-chat-7B, and phi-2 using a custom model client class named `CustomModelClient`, one by one, until a valid result is returned. This can speed up the development process where the rate limit is a bottleneck. An error will be raised if the last choice fails. So make sure the last choice in the list has the best availability. For convenience, we provide a number of utility functions to load config lists. - `get_config_list`: Generates configurations for API calls, primarily from provided API keys. - `config_list_openai_aoai`: Constructs a list of configurations using both Azure OpenAI and OpenAI endpoints, sourcing API keys from environment variables or local files. - `config_list_from_json`: Loads configurations from a JSON structure, either from an environment variable or a local JSON file, with the flexibility of filtering configurations based on given criteria. - `config_list_from_models`: Creates configurations based on a provided list of models, useful when targeting specific models without manually specifying each configuration. - `config_list_from_dotenv`: Constructs a configuration list from a `.env` file, offering a consolidated way to manage multiple API configurations and keys from a single file. We suggest that you take a look at this [notebook](/docs/topics/llm_configuration) for full code examples of the different methods to configure your model endpoints. ### Logic error Another type of error is that the returned response does not satisfy a requirement. For example, if the response is required to be a valid json string, one would like to filter the responses that are not. This can be achieved by providing a list of configurations and a filter function. For example, ```python def valid_json_filter(response, **_): for text in OpenAIWrapper.extract_text_or_completion_object(response): try: json.loads(text) return True except ValueError: pass return False client = OpenAIWrapper( config_list=[{"model": "text-ada-001"}, {"model": "gpt-3.5-turbo-instruct"}, {"model": "text-davinci-003"}], ) response = client.create( prompt="How to construct a json request to Bing API to search for 'latest AI news'? Return the JSON request.", filter_func=valid_json_filter, ) ``` The example above will try to use text-ada-001, gpt-3.5-turbo-instruct, and text-davinci-003 iteratively, until a valid json string is returned or the last config is used. One can also repeat the same model in the list for multiple times (with different seeds) to try one model multiple times for increasing the robustness of the final response. *Advanced use case: Check this [blogpost](/blog/2023/05/18/GPT-adaptive-humaneval) to find how to improve GPT-4's coding performance from 68% to 90% while reducing the inference cost.*
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Templating If the provided prompt or message is a template, it will be automatically materialized with a given context. For example, ```python response = client.create( context={"problem": "How many positive integers, not exceeding 100, are multiples of 2 or 3 but not 4?"}, prompt="{problem} Solve the problem carefully.", allow_format_str_template=True, **config ) ``` A template is either a format str, like the example above, or a function which produces a str from several input fields, like the example below. ```python from functools import partial def content(turn, context): return "\n".join( [ context[f"user_message_{turn}"], context[f"external_info_{turn}"] ] ) messages = [ { "role": "system", "content": "You are a teaching assistant of math.", }, { "role": "user", "content": partial(content, turn=0), }, ] context = { "user_message_0": "Could you explain the solution to Problem 1?", "external_info_0": "Problem 1: ...", } response = client.create(context=context, messages=messages, **config) messages.append( { "role": "assistant", "content": client.extract_text_or_completion_object(response)[0] } ) messages.append( { "role": "user", "content": partial(content, turn=1), }, ) context.update( { "user_message_1": "Why can't we apply Theorem 1 to Equation (2)?", "external_info_1": "Theorem 1: ...", } ) response = client.create(context=context, messages=messages, **config) ```
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Logging When debugging or diagnosing an LLM-based system, it is often convenient to log the API calls and analyze them. ### For openai >= 1 Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) #### Start logging: ```python import autogen.runtime_logging autogen.runtime_logging.start(logger_type="sqlite", config={"dbname": "YOUR_DB_NAME"}) ``` `logger_type` and `config` are both optional. Default logger type is SQLite logger, that's the only one available in autogen at the moment. If you want to customize the database name, you can pass in through config, default is `logs.db`. #### Stop logging: ```python autogen.runtime_logging.stop() ``` #### LLM Runs AutoGen logging supports OpenAI's llm message schema. Each LLM run is saved in `chat_completions` table includes: - session_id: an unique identifier for the logging session - invocation_id: an unique identifier for the logging record - client_id: an unique identifier for the Azure OpenAI/OpenAI client - request: detailed llm request, see below for an example - response: detailed llm response, see below for an example - cost: total cost for the request and response - start_time - end_time ##### Sample Request ```json { "messages":[ { "content":"system_message_1", "role":"system" }, { "content":"user_message_1", "role":"user" } ], "model":"gpt-4", "temperature": 0.9 } ``` ##### Sample Response ```json { "id": "id_1", "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "assistant_message_1", "role": "assistant", "function_call": null, "tool_calls": null } } ], "created": "<timestamp>", "model": "gpt-4", "object": "chat.completion", "system_fingerprint": null, "usage": { "completion_tokens": 155, "prompt_tokens": 53, "total_tokens": 208 } } ``` Learn more about [request and response format](https://platform.openai.com/docs/api-reference/chat/create) ### For openai < 1 `autogen.Completion` and `autogen.ChatCompletion` offer an easy way to collect the API call histories. For example, to log the chat histories, simply run: ```python autogen.ChatCompletion.start_logging() ``` The API calls made after this will be automatically logged. They can be retrieved at any time by: ```python autogen.ChatCompletion.logged_history ``` There is a function that can be used to print usage summary (total cost, and token count usage from each model): ```python autogen.ChatCompletion.print_usage_summary() ``` To stop logging, use ```python autogen.ChatCompletion.stop_logging() ``` If one would like to append the history to an existing dict, pass the dict like: ```python autogen.ChatCompletion.start_logging(history_dict=existing_history_dict) ``` By default, the counter of API calls will be reset at `start_logging()`. If no reset is desired, set `reset_counter=False`. There are two types of logging formats: compact logging and individual API call logging. The default format is compact. Set `compact=False` in `start_logging()` to switch. * Example of a history dict with compact logging. ```python { """ [ { 'role': 'system', 'content': system_message, }, { 'role': 'user', 'content': user_message_1, }, { 'role': 'assistant', 'content': assistant_message_1, }, { 'role': 'user', 'content': user_message_2, }, { 'role': 'assistant', 'content': assistant_message_2, }, ]""": { "created_at": [0, 1], "cost": [0.1, 0.2], } } ``` * Example of a history dict with individual API call logging. 
```python { 0: { "request": { "messages": [ { "role": "system", "content": system_message, }, { "role": "user", "content": user_message_1, } ], ... # other parameters in the request }, "response": { "choices": [ { "message": { "role": "assistant", "content": assistant_message_1, }, }, ], ... # other fields in the response } }, 1: { "request": { "messages": [ { "role": "system", "content": system_message, }, { "role": "user", "content": user_message_1, }, { "role": "assistant", "content": assistant_message_1, }, { "role": "user", "content": user_message_2, }, ], ... # other parameters in the request }, "response": { "choices": [ { "message": { "role": "assistant", "content": assistant_message_2, }, }, ], ... # other fields in the response } }, } ``` * Example of the printed usage summary ``` Total cost: <cost> Token count summary for model <model>: prompt_tokens: <count 1>, completion_tokens: <count 2>, total_tokens: <count 3> ``` As can be seen, the individual API call history contains redundant information about the conversation; for a long conversation, the degree of redundancy is high. The compact history is more efficient, while the individual API call history contains more details.
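Returning to the SQLite logger used with `openai >= 1`: since every run is written to the `chat_completions` table described above, you can inspect the log with the standard library after calling `stop()`. A minimal sketch, assuming the default `logs.db` file name and that the `request` column stores the JSON payload shown in the sample above:

```python
import json
import sqlite3

con = sqlite3.connect("logs.db")  # default dbname unless overridden via the logging config
cur = con.execute(
    "SELECT session_id, cost, request FROM chat_completions ORDER BY start_time"
)
for session_id, cost, request in cur.fetchall():
    request = json.loads(request)  # assumes the request is stored as a JSON string
    print(session_id, cost, request["messages"][-1]["content"][:80])
con.close()
```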
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
# Multi-agent Conversation Framework AutoGen offers a unified multi-agent conversation framework as a high-level abstraction of using foundation models. It features capable, customizable and conversable agents which integrate LLMs, tools, and humans via automated agent chat. By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. This framework simplifies the orchestration, automation and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses. It enables building next-gen LLM applications based on multi-agent conversations with minimal effort. ### Agents AutoGen abstracts and implements conversable agents designed to solve tasks through inter-agent conversations. Specifically, the agents in AutoGen have the following notable features: - Conversable: Agents in AutoGen are conversable, which means that any agent can send and receive messages from other agents to initiate or continue a conversation - Customizable: Agents in AutoGen can be customized to integrate LLMs, humans, tools, or a combination of them. The figure below shows the built-in agents in AutoGen. ![Agent Chat Example](images/autogen_agents.png) We have designed a generic [`ConversableAgent`](../reference/agentchat/conversable_agent.md#conversableagent-objects) class for Agents that are capable of conversing with each other through the exchange of messages to jointly finish a task. An agent can communicate with other agents and perform actions. Different agents can differ in what actions they perform after receiving messages. Two representative subclasses are [`AssistantAgent`](../reference/agentchat/assistant_agent.md#assistantagent-objects) and [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) - The [`AssistantAgent`](../reference/agentchat/assistant_agent.md#assistantagent-objects) is designed to act as an AI assistant, using LLMs by default but not requiring human input or code execution. It could write Python code (in a Python coding block) for a user to execute when a message (typically a description of a task that needs to be solved) is received. Under the hood, the Python code is written by LLM (e.g., GPT-4). It can also receive the execution results and suggest corrections or bug fixes. Its behavior can be altered by passing a new system message. The LLM [inference](#enhanced-inference) configuration can be configured via [`llm_config`]. - The [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) is conceptually a proxy agent for humans, soliciting human input as the agent's reply at each interaction turn by default and also having the capability to execute code and call functions or tools. The [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) triggers code execution automatically when it detects an executable code block in the received message and no human user input is provided. Code execution can be disabled by setting the `code_execution_config` parameter to False. LLM-based response is disabled by default. It can be enabled by setting `llm_config` to a dict corresponding to the [inference](/docs/Use-Cases/enhanced_inference) configuration. 
When `llm_config` is set as a dictionary, [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) can generate replies using an LLM when code execution is not performed. The auto-reply capability of [`ConversableAgent`](../reference/agentchat/conversable_agent.md#conversableagent-objects) allows for more autonomous multi-agent communication while retaining the possibility of human intervention. One can also easily extend it by registering reply functions with the [`register_reply()`](../reference/agentchat/conversable_agent.md#register_reply) method. In the following code, we create an [`AssistantAgent`](../reference/agentchat/assistant_agent.md#assistantagent-objects) named "assistant" to serve as the assistant and a [`UserProxyAgent`](../reference/agentchat/user_proxy_agent.md#userproxyagent-objects) named "user_proxy" to serve as a proxy for the human user. We will later employ these two agents to solve a task. ```python import os from autogen import AssistantAgent, UserProxyAgent from autogen.coding import DockerCommandLineCodeExecutor config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}] # create an AssistantAgent instance named "assistant" with the LLM configuration. assistant = AssistantAgent(name="assistant", llm_config={"config_list": config_list}) # create a UserProxyAgent instance named "user_proxy" with code execution on docker. code_executor = DockerCommandLineCodeExecutor() user_proxy = UserProxyAgent(name="user_proxy", code_execution_config={"executor": code_executor}) ```
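The `register_reply()` mechanism mentioned above can be illustrated with a small sketch. The reply function below is hypothetical; it simply logs incoming traffic and then defers to the remaining registered reply functions:

```python
def print_messages(recipient, messages=None, sender=None, config=None):
    # Log the last message received by `recipient`, then let the other reply functions run.
    print(f"{sender.name} -> {recipient.name}: {messages[-1]['content']}")
    return False, None  # (final, reply): False means this is not the final reply

user_proxy.register_reply(
    trigger=AssistantAgent,      # fire when the sender is an AssistantAgent
    reply_func=print_messages,
    config=None,
)
```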
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
Multi-agent Conversations ### A Basic Two-Agent Conversation Example Once the participating agents are constructed properly, one can start a multi-agent conversation session by an initialization step as shown in the following code: ```python # the assistant receives a message from the user, which contains the task description user_proxy.initiate_chat( assistant, message="""What date is today? Which big tech stock has the largest year-to-date gain this year? How much is the gain?""", ) ``` After the initialization step, the conversation could proceed automatically. Find a visual illustration of how the user_proxy and assistant collaboratively solve the above task autonomously below: ![Agent Chat Example](images/agent_example.png) 1. The assistant receives a message from the user_proxy, which contains the task description. 2. The assistant then tries to write Python code to solve the task and sends the response to the user_proxy. 3. Once the user_proxy receives a response from the assistant, it tries to reply by either soliciting human input or preparing an automatically generated reply. If no human input is provided, the user_proxy executes the code and uses the result as the auto-reply. 4. The assistant then generates a further response for the user_proxy. The user_proxy can then decide whether to terminate the conversation. If not, steps 3 and 4 are repeated. ### Supporting Diverse Conversation Patterns #### Conversations with different levels of autonomy, and human-involvement patterns On the one hand, one can achieve fully autonomous conversations after an initialization step. On the other hand, AutoGen can be used to implement human-in-the-loop problem-solving by configuring human involvement levels and patterns (e.g., setting the `human_input_mode` to `ALWAYS`), as human involvement is expected and/or desired in many applications. #### Static and dynamic conversations AutoGen, by integrating conversation-driven control utilizing both programming and natural language, inherently supports dynamic conversations. This dynamic nature allows the agent topology to adapt based on the actual conversation flow under varying input problem scenarios. Conversely, static conversations adhere to a predefined topology. Dynamic conversations are particularly beneficial in complex settings where interaction patterns cannot be predetermined. 1. Registered auto-reply With the pluggable auto-reply function, one can choose to invoke conversations with other agents depending on the content of the current message and context. For example: - Hierarchical chat like in [OptiGuide](https://github.com/microsoft/optiguide). - [Dynamic Group Chat](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb) which is a special form of hierarchical chat. In the system, we register a reply function in the group chat manager, which broadcasts messages and decides who the next speaker will be in a group chat setting. - [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) which is a special form of dynamic group chat. In this approach, a directed transition matrix is fed into group chat. Users can specify legal transitions or specify disallowed transitions. - Nested chat like in [conversational chess](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_chess.ipynb). 2. 
2. LLM-Based Function Call

Another approach involves LLM-based function calls, where the LLM decides if a specific function should be invoked based on the conversation's status during each inference. This approach enables dynamic multi-agent conversations, as seen in the [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls.

### Diverse Applications Implemented with AutoGen

The figure below shows six examples of applications built using AutoGen.

![Applications](images/app.png)

Find a list of examples on this page: [Automated Agent Chat Examples](../Examples.md#automated-multi-agent-chat)
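To make the LLM-based function-call pattern described above concrete, here is a minimal sketch of wiring a tool between two agents with `autogen.register_function`, reusing the `assistant` and `user_proxy` agents constructed earlier. The tool name, its body, and the task message are illustrative and are not taken from the linked notebook.

```python
from autogen import register_function

def lookup_definition(term: str) -> str:
    """Return a short definition for a math term (illustrative stub)."""
    glossary = {"derivative": "The rate of change of a function with respect to its input."}
    return glossary.get(term.lower(), f"No definition found for '{term}'.")

# The assistant (caller) may decide to call the tool; the user proxy (executor) runs it.
register_function(
    lookup_definition,
    caller=assistant,
    executor=user_proxy,
    description="Look up the definition of a math term.",
)

user_proxy.initiate_chat(assistant, message="What is a derivative? Use the lookup tool if helpful.")
```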
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
For Further Reading

_Interested in the research that led to this package? Please check the following papers._

- [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155). Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang and Chi Wang. ArXiv 2023.
- [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2306.01337 (2023).
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
# AutoGen Studio FAQs
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: How do I specify the directory where files (e.g., the database) are stored? A: You can specify the directory where files are stored by setting the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database (by default) and other files in the specified directory, e.g., `/path/to/folder/database.sqlite`.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Where can I adjust the default skill, agent, and workflow configurations? A: You can modify agent configurations directly from the UI or by editing the `init_db_samples` function in the `autogenstudio/database/utils.py` file, which is used to initialize the database.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: If I want to reset the entire conversation with an agent, how do I go about it? A: To reset your conversation history, you can delete the `database.sqlite` file in the `--appdir` directory. This will reset the entire conversation history. To delete user files, you can delete the `files` directory in the `--appdir` directory.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Is it possible to view the output and messages generated by the agents during interactions? A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
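For example, if you have the `sqlite3` command-line tool installed, you can list the tables stored in the database; the path depends on your `--appdir`, and table names may differ across AutoGen Studio versions:

```bash
sqlite3 /path/to/appdir/database.sqlite ".tables"
```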
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I use other models with AutoGen Studio? A: Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details, including `model`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure". For other open-source models, we recommend using a server such as vLLM, LM Studio, or Ollama to instantiate an OpenAI-compliant endpoint.
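For reference, the same endpoint details map onto the `config_list` entries used by the AutoGen framework. The values below are placeholders for illustration only, not real endpoints, deployments, or keys:

```python
# Azure OpenAI: the model is the deployment id, and the api_type is "azure".
azure_openai_config = {
    "model": "my-gpt4-deployment",
    "api_key": "<azure-openai-api-key>",
    "base_url": "https://<your-resource>.openai.azure.com/",
    "api_type": "azure",
    "api_version": "2024-02-01",
}

# Local open-source model served through an OpenAI-compliant endpoint (e.g., vLLM, LM Studio, Ollama).
local_model_config = {
    "model": "llama-3-8b-instruct",
    "api_key": "not-needed",  # many local servers ignore the key but require a non-empty value
    "base_url": "http://localhost:8000/v1",
}
```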
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: The server starts but I can't access the UI A: If you are running the server on a remote machine (or a local machine that fails to resolve localhost correctly), you may need to specify the host address. By default, the host address is set to `localhost`. You can specify the host address using the `--host <host>` argument. For example, to start the server on port 8081 with a host address that makes it accessible from other machines on the network, you can run the following command:

```bash
autogenstudio ui --port 8081 --host 0.0.0.0
```
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I export my agent workflows for use in a python app? A: Yes. In the Build view, you can click the export button to save your agent workflow as a JSON file. This file can be imported in a python application using the `WorkflowManager` class. For example:

```python
from autogenstudio import WorkflowManager

# load workflow from exported json workflow file.
workflow_manager = WorkflowManager(workflow="path/to/your/workflow_.json")

# run the workflow on a task
task_query = "What is the height of the Eiffel Tower? Don't write code, just respond to the question."
workflow_manager.run(message=task_query)
```
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I deploy my agent workflows as APIs? A: Yes. You can launch a workflow as an API endpoint from the command line using the `autogenstudio` command-line tool. For example:

```bash
autogenstudio serve --workflow=workflow.json --port=5000
```

Similarly, the workflow launch command above can be wrapped into a Dockerfile that can be deployed on cloud services like Azure Container Apps or Azure Web Apps.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I run AutoGen Studio in a Docker container? A: Yes, you can run AutoGen Studio in a Docker container. You can build the Docker image using the provided [Dockerfile](https://github.com/microsoft/autogen/blob/autogenstudio/samples/apps/autogen-studio/Dockerfile), whose contents are shown below:

```dockerfile
FROM python:3.10

WORKDIR /code

RUN pip install -U gunicorn autogenstudio

RUN useradd -m -u 1000 user
USER user

ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH \
    AUTOGENSTUDIO_APPDIR=/home/user/app

WORKDIR $HOME/app

COPY --chown=user . $HOME/app

CMD gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind "0.0.0.0:8081"
```

Using Gunicorn as the application server is recommended for improved performance. To run AutoGen Studio with Gunicorn outside of Docker, you can use the following command:

```bash
gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind "0.0.0.0:8081"
```
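As a sketch of putting the Dockerfile above to use, you might build and run it as follows. The image tag is an illustrative choice, and the port mapping simply mirrors the `0.0.0.0:8081` bind address in the Dockerfile's `CMD`:

```bash
# build the image from the directory containing the Dockerfile
docker build -t autogenstudio-img .

# run the container; the app binds to 0.0.0.0:8081 inside the container
docker run -it -p 8081:8081 autogenstudio-img
```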
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
# AutoGen Studio - Getting Started

[![PyPI version](https://badge.fury.io/py/autogenstudio.svg)](https://badge.fury.io/py/autogenstudio) [![Downloads](https://static.pepy.tech/badge/autogenstudio/week)](https://pepy.tech/project/autogenstudio)

![ARA](./img/ara_stockprices.png)

AutoGen Studio is a low-code interface built to help you rapidly prototype AI agents, enhance them with skills, compose them into workflows, and interact with them to accomplish tasks. It is built on top of the [AutoGen](https://microsoft.github.io/autogen) framework, which is a toolkit for building AI agents.

Code for AutoGen Studio is on GitHub at [microsoft/autogen](https://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio)

> **Note**: AutoGen Studio is meant to help you rapidly prototype multi-agent workflows and demonstrate an example of end user interfaces built with AutoGen. It is not meant to be a production-ready app. Developers are encouraged to use the AutoGen framework to build their own applications, implementing authentication, security and other features required for deployed applications.

**Updates**

- April 17: The AutoGen Studio database layer is now rewritten to use [SQLModel](https://sqlmodel.tiangolo.com/) (Pydantic + SQLAlchemy). This provides entity linking (skills, models, agents and workflows are linked via association tables) and supports multiple [database backend dialects](https://docs.sqlalchemy.org/en/20/dialects/) supported in SQLAlchemy (SQLite, PostgreSQL, MySQL, Oracle, Microsoft SQL Server). The backend database can be specified with a `--database-uri` argument when running the application. For example, `autogenstudio ui --database-uri sqlite:///database.sqlite` for SQLite and `autogenstudio ui --database-uri postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL.
- March 12: The default directory for AutoGen Studio is now `/home/<user>/.autogenstudio`. You can also specify this directory using the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database and other files in the specified directory, e.g., `/path/to/folder/database.sqlite`. `.env` files in that directory will be used to set environment variables for the app.

### Installation

There are two ways to install AutoGen Studio - from PyPi or from source. We **recommend installing from PyPi** unless you plan to modify the source code.

1. **Install from PyPi**

   We recommend using a virtual environment (e.g., conda) to avoid conflicts with existing Python packages. With Python 3.10 or newer active in your virtual environment, use pip to install AutoGen Studio:

   ```bash
   pip install autogenstudio
   ```

2. **Install from Source**

   > Note: This approach requires some familiarity with building interfaces in React.

   If you prefer to install from source, ensure you have Python 3.10+ and Node.js (version above 14.15.0) installed. Here's how you get started:

   - Clone the AutoGen Studio repository and install its Python dependencies:

     ```bash
     pip install -e .
     ```

   - Navigate to the `samples/apps/autogen-studio/frontend` directory, install dependencies, and build the UI:

     ```bash
     npm install -g gatsby-cli
     npm install --global yarn
     cd frontend
     yarn install
     yarn build
     ```

   For Windows users, you may need alternative commands to build the frontend; for example:
   ```bash
   gatsby clean && rmdir /s /q ..\\autogenstudio\\web\\ui 2>nul & (set \"PREFIX_PATH_VALUE=\" || ver>nul) && gatsby build --prefix-paths && xcopy /E /I /Y public ..\\autogenstudio\\web\\ui
   ```

### Running the Application

Once installed, run the web UI by entering the following in your terminal:

```bash
autogenstudio ui --port 8081
```

This will start the application on the specified port. Open your web browser and go to `http://localhost:8081/` to begin using AutoGen Studio.

AutoGen Studio also takes several parameters to customize the application:

- `--host <host>` argument to specify the host address. By default, it is set to `localhost`.
- `--appdir <appdir>` argument to specify the directory where the app files (e.g., database and generated user files) are stored. By default, it is set to a `.autogenstudio` directory in the user's home directory.
- `--port <port>` argument to specify the port number. By default, it is set to `8080`.
- `--reload` argument to enable auto-reloading of the server when changes are made to the code. By default, it is set to `False`.
- `--database-uri` argument to specify the database URI. Example values include `sqlite:///database.sqlite` for SQLite and `postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. If this is not specified, the database URI defaults to a `database.sqlite` file in the `--appdir` directory.

Now that you have AutoGen Studio installed and running, you are ready to explore its capabilities, including defining and modifying agent workflows, interacting with agents and sessions, and expanding agent skills.

### Capabilities / Roadmap

Some of the capabilities supported by the app frontend include the following:

- [x] Build / Configure agents (currently supports two agent workflows based on `UserProxyAgent` and `AssistantAgent`), modify their configuration (e.g., skills, temperature, model, agent system message, etc.) and compose them into workflows.
- [x] Chat with agent workflows and specify tasks.
- [x] View agent messages and output files in the UI from agent runs.
- [x] Support for more complex agent workflows (e.g., `GroupChat` and `Sequential` workflows).
- [x] Improved user experience (e.g., streaming intermediate model output, better summarization of agent responses, etc.).

Review the project roadmap and issues [here](https://github.com/microsoft/autogen/issues/737).

Project Structure:

- _autogenstudio/_ code for the backend classes and web api (FastAPI)
- _frontend/_ code for the webui, built with Gatsby and TailwindCSS
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
Contribution Guide

We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project:

- Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing).
- Please review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/737) to get a sense of the current priorities for the project. Help is appreciated, especially with Studio issues tagged with `help-wanted`.
- Please initiate a discussion on the roadmap issue or a new issue to discuss your proposed contribution.
- Please review the AutoGen Studio [dev branch](https://github.com/microsoft/autogen/tree/autogenstudio) and use it as a base for your contribution. This way, your contribution will be aligned with the latest changes in the AutoGen Studio project.
- Submit a pull request with your contribution!
- If you are modifying AutoGen Studio, it has its own devcontainer. See the instructions in `.devcontainer/README.md` to use it.
- Please use the tag `studio` for any issues, questions, and PRs related to Studio.
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
A Note on Security

AutoGen Studio is a research prototype and is not meant to be used in a production environment. Some baseline practices are encouraged, e.g., using a Docker code execution environment for your agents. However, other security features, such as rigorous testing against jailbreaking and ensuring that LLMs only have access to the data the end user is permitted to see, are not implemented in AutoGen Studio. If you are building a production application, please use the AutoGen framework and implement the necessary security features.
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
Acknowledgements

AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang).
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
# Using AutoGen Studio

AutoGen Studio supports the declarative creation of agent workflows; tasks can then be specified and run in a chat interface for the agents to complete. The expected usage is that developers create skills and models, _attach_ them to agents, and compose agents into workflows that can be tested interactively in the chat interface.
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
Building an Agent Workflow

AutoGen Studio implements several entities that are ultimately composed into a workflow.

### Skills

A skill is a python function that implements the solution to a task. In general, a good skill has a descriptive name (e.g., `generate_images`), extensive docstrings and good defaults (e.g., writing out files to disk for persistence and reuse). Skills can be _associated with_ or _attached to_ agent specifications (see the example skill sketched at the end of this section).

![AutoGen Studio Skill Interface](./img/skill.png)

### Models

A model refers to the configuration of an LLM. Similar to skills, a model can be attached to an agent specification. The AutoGen Studio interface supports multiple model types including OpenAI models (and any other model endpoint provider that supports the OpenAI endpoint specification), Azure OpenAI models and Gemini Models.

![AutoGen Studio Create new model](./img/model_new.png)
![AutoGen Studio Create new model](./img/model_openai.png)

### Agents

An agent entity declaratively specifies properties for an AutoGen agent (it mirrors most, but not all, of the members of the base AutoGen `ConversableAgent` class). Currently the `UserProxyAgent`, `AssistantAgent` and `GroupChat` agent abstractions are supported.

![AutoGen Studio Create new agent](./img/agent_new.png)
![AutoGen Studio Create an assistant agent](./img/agent_groupchat.png)

Once agents have been created, existing models or skills can be _added_ to the agent.

![AutoGen Studio Add skills and models to agent](./img/agent_skillsmodel.png)

### Workflows

An agent workflow is a specification of a set of agents (a team of agents) that can work together to accomplish a task. AutoGen Studio supports two types of high-level workflow patterns:

#### Autonomous Chat

This workflow implements a paradigm where agents are defined and a chat is initiated between the agents to accomplish a task. AutoGen simplifies this into defining an `initiator` agent and a `receiver` agent, where the receiver agent is selected from a list of previously created agents. Note that when the receiver is a `GroupChat` agent (i.e., contains multiple agents), the communication pattern between those agents is determined by the `speaker_selection_method` parameter in the `GroupChat` agent configuration.

![AutoGen Studio Autonomous Chat Workflow](./img/workflow_chat.png)

#### Sequential Chat

This workflow allows users to specify a list of `AssistantAgent` agents that are executed in sequence to accomplish a task. The runtime behavior follows this pattern: at each step, each `AssistantAgent` is _paired_ with a `UserProxyAgent` and a chat is initiated between this pair to process the input task. The result of this exchange is summarized and provided to the next `AssistantAgent`, which is also paired with a `UserProxyAgent`, and their summarized result is passed to the next `AssistantAgent` in the sequence. This continues until the last `AssistantAgent` in the sequence is reached.

![AutoGen Studio Sequential Workflow](./img/workflow_sequential.png)
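To make the skill description above concrete, here is a minimal sketch of what a skill function might look like. The function name, behavior, and default values are illustrative and are not part of AutoGen Studio's built-in samples:

```python
def save_text_file(content: str, filename: str = "output.txt") -> str:
    """Save `content` to `filename` on disk and return the file path.

    A good skill has a descriptive name, a docstring the LLM can use to decide
    when and how to call it, and sensible defaults (here, a default filename)
    so generated files persist and can be reused.
    """
    with open(filename, "w", encoding="utf-8") as f:
        f.write(content)
    return filename
```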
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
Testing an Agent Workflow

AutoGen Studio allows users to interactively test workflows on tasks and review resulting artifacts (such as images, code, and documents).

![AutoGen Studio Test Workflow](./img/workflow_test.png)

Users can also review the “inner monologue” of agent workflows as they address tasks, and view profiling information such as costs associated with the run (e.g., number of turns, number of tokens) and agent actions (e.g., whether tools were called and the outcomes of code execution).

![AutoGen Studio Profile Workflow Results](./img/workflow_profile.png)
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
Exporting Agent Workflows

Users can download the skills, agents, and workflow configurations they create, as well as share and reuse these artifacts. AutoGen Studio also offers a seamless process to export workflows and deploy them as application programming interfaces (APIs) that can be consumed in other applications.

### Export Workflow

AutoGen Studio allows you to export a selected workflow as a JSON configuration file.

Build -> Workflows -> (On workflow card) -> Export

![AutoGen Studio Export Workflow](./img/workflow_export.png)

### Using AutoGen Studio Workflows in a Python Application

An exported workflow can be easily integrated into any Python application using the `WorkflowManager` class with just two lines of code. Underneath, the `WorkflowManager` rehydrates the workflow specification into AutoGen agents that are subsequently used to address tasks.

```python
from autogenstudio import WorkflowManager

# load workflow from exported json workflow file.
workflow_manager = WorkflowManager(workflow="path/to/your/workflow_.json")

# run the workflow on a task
task_query = "What is the height of the Eiffel Tower? Don't write code, just respond to the question."
workflow_manager.run(message=task_query)
```

### Deploying AutoGen Studio Workflows as APIs

The workflow can be launched as an API endpoint from the command line using the `autogenstudio` command-line tool.

```bash
autogenstudio serve --workflow=workflow.json --port=5000
```

Similarly, the workflow launch command above can be wrapped into a Dockerfile that can be deployed on cloud services like Azure Container Apps or Azure Web Apps.
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
# Docker

Docker, an indispensable tool in modern software development, offers a compelling solution for AutoGen's setup. Docker allows you to create consistent environments that are portable and isolated from the host OS. With Docker, everything AutoGen needs to run, from the operating system to specific libraries, is encapsulated in a container, ensuring uniform functionality across different systems. The Dockerfiles necessary for AutoGen are conveniently located in the project's GitHub repository at [https://github.com/microsoft/autogen/tree/main/.devcontainer](https://github.com/microsoft/autogen/tree/main/.devcontainer).

**Pre-configured Dockerfiles**: The AutoGen project offers pre-configured Dockerfiles for your use. These Dockerfiles will run as is; however, they can be modified to suit your development needs. Please see the README.md file in autogen/.devcontainer

- **autogen_base_img**: For a basic setup, you can use the `autogen_base_img` to run simple scripts or applications. This is ideal for general users or those new to AutoGen.
- **autogen_full_img**: Advanced users or those requiring more features can use `autogen_full_img`. Be aware that this version loads ALL THE THINGS and thus is very large. Take this into consideration if you build your application off of it.
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Step 1: Install Docker

- **General Installation**: Follow the [official Docker installation instructions](https://docs.docker.com/get-docker/). This is your first step towards a containerized environment, ensuring a consistent and isolated workspace for AutoGen.
- **For Mac Users**: If you encounter issues with the Docker daemon, consider using [colima](https://smallsharpsoftwaretools.com/tutorials/use-colima-to-run-docker-containers-on-macos/). Colima offers a lightweight alternative to manage Docker containers efficiently on macOS.
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Step 2: Build a Docker Image

AutoGen now provides updated Dockerfiles tailored for different needs. Building a Docker image is akin to setting the foundation for your project's environment:

- **Autogen Basic**: Ideal for general use, this setup includes common Python libraries and essential dependencies. Perfect for those just starting with AutoGen.

  ```bash
  docker build -f .devcontainer/Dockerfile -t autogen_base_img https://github.com/microsoft/autogen.git#main
  ```

- **Autogen Advanced**: For advanced users or those requiring all the features that AutoGen has to offer, use `autogen_full_img`:

  ```bash
  docker build -f .devcontainer/full/Dockerfile -t autogen_full_img https://github.com/microsoft/autogen.git#main
  ```
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Step 3: Run AutoGen Applications from Docker Image

Here's how you can run an application built with AutoGen, using the Docker image:

1. **Mount Your Directory**: Use the Docker `-v` flag to mount your local application directory to the Docker container. This allows you to develop on your local machine while running the code in a consistent Docker environment. For example:

   ```bash
   docker run -it -v $(pwd)/myapp:/home/autogen/autogen/myapp autogen_base_img:latest python /home/autogen/autogen/myapp/main.py
   ```

   Here, `$(pwd)/myapp` is your local directory, and `/home/autogen/autogen/myapp` is the path in the Docker container where your code will be located.

2. **Mount your code:** Now suppose you have your application built with AutoGen in a main script named `twoagent.py` ([example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py)) in a folder named `myapp`. With the command line below, you can mount your folder and run the application in Docker.

   ```bash
   # Mount the local folder `myapp` into the docker container and run the script named "twoagent.py".
   docker run -it -v `pwd`/myapp:/myapp autogen_base_img:latest python /myapp/twoagent.py
   ```

3. **Port Mapping**: If your application requires a specific port, use the `-p` flag to map the container's port to your host. For instance, if your app runs on port 3000 inside Docker and you want it accessible on port 8080 on your host machine:

   ```bash
   docker run -it -p 8080:3000 -v $(pwd)/myapp:/myapp autogen_base_img:latest python /myapp/main.py
   ```

   In this command, `-p 8080:3000` maps port 3000 from the container to port 8080 on your local machine.

4. **Examples of Running Different Applications**: Here is the basic format of the docker run command.

   ```bash
   docker run -it -p {WorkstationPortNum}:{DockerPortNum} -v {WorkStation_Dir}:{Docker_DIR} {name_of_the_image} {bash/python} {Docker_path_to_script_to_execute}
   ```

   - _Simple Script_: Run a Python script located in your local `myapp` directory.

     ```bash
     docker run -it -v `pwd`/myapp:/myapp autogen_base_img:latest python /myapp/my_script.py
     ```

   - _Web Application_: If your application includes a web server running on port 5000.

     ```bash
     docker run -it -p 8080:5000 -v $(pwd)/myapp:/myapp autogen_base_img:latest
     ```

   - _Data Processing_: For tasks that involve processing data stored in a local directory.

     ```bash
     docker run -it -v $(pwd)/data:/data autogen_base_img:latest python /myapp/process_data.py
     ```
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Additional Resources

- Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/blob/main/.devcontainer/README.md) README.
- For more information on Docker usage and best practices, refer to the [official Docker documentation](https://docs.docker.com).
- Details on how to use the Dockerfile dev version can be found in the [Contributor Guide](/docs/contributor-guide/docker).
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
# Optional Dependencies
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
LLM Caching

To use LLM caching with Redis, you need to install the Python package with the option `redis`:

```bash
pip install "pyautogen[redis]"
```

See [LLM Caching](Use-Cases/agent_chat.md#llm-caching) for details.
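As a minimal sketch of what this enables, the snippet below wraps a chat in a Redis-backed cache. The Redis URL and task message are illustrative, and the `assistant` and `user_proxy` agents are assumed to be constructed as in the agent examples elsewhere in the docs:

```python
from autogen.cache import Cache

# Requires a running Redis server reachable at the given URL.
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    user_proxy.initiate_chat(
        assistant,
        message="What is 123 * 456? Explain briefly.",
        cache=cache,  # LLM calls in this chat are cached in Redis
    )
```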