```
Tool(name='foo-11', description='a silly function that you can use to get more information about the number 11', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)]
```

Prompt template

The prompt template is pretty standard, because we're not actually changing much logic in the prompt template itself; rather, we are just changing how retrieval is done.

```python
# Set up the base template
template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: {input}
{agent_scratchpad}"""
```

The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use.

```python
from typing import Callable


# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    ############## NEW ######################
    # The list of tools available
    tools_getter: Callable

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
```
This notebook builds off of this notebook and assumes familiarity with how agents work.
```python
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        ############## NEW ######################
        tools = self.tools_getter(kwargs["input"])
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join(
            [f"{tool.name}: {tool.description}" for tool in tools]
        )
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in tools])
        return self.template.format(**kwargs)


prompt = CustomPromptTemplate(
    template=template,
    tools_getter=get_tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables
    # because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"],
)
```

Output parser

The output parser is unchanged from the previous notebook, since we are not changing anything about the output format.

```python
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
```
```python
        # Return the action and action input
        return AgentAction(
            tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output
        )


output_parser = CustomOutputParser()
```

Set up LLM, stop sequence, and the agent

Also the same as the previous notebook.

```python
llm = OpenAI(temperature=0)

# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)

tools = get_tools("whats the weather?")
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)
```

Use the Agent

Now we can use it!

```python
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)
agent_executor.run("What's the weather in SF?")
```

```
> Entering new AgentExecutor chain...
Thought: I need to find out what the weather is in SF
Action: Search
Action Input: Weather in SF

Observation: Mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shifting to W at 10 to 15 mph. Humidity 71%. UV Index 6 of 10.
I now know the final answer
Final Answer: 'Arg, 'tis mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shiftin' to W at 10 to 15 mph. Humidity 71%. UV Index 6 of 10.

> Finished chain.
"'Arg, 'tis mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shiftin' to W at 10 to 15 mph. Humidity 71%. UV Index 6 of 10."
```
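The Action/Action Input regex used by the output parser can be exercised in isolation. The sketch below reimplements that parsing with plain tuples standing in for `AgentAction`/`AgentFinish`; the tuples and sample completions are assumptions for illustration, not LangChain APIs.

```python
import re

# Same pattern the custom output parser uses to pull out the tool and its input
REGEX = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"

def parse(llm_output: str):
    # A final answer short-circuits the tool loop
    if "Final Answer:" in llm_output:
        return ("finish", llm_output.split("Final Answer:")[-1].strip())
    match = re.search(REGEX, llm_output, re.DOTALL)
    if not match:
        raise ValueError(f"Could not parse LLM output: `{llm_output}`")
    # group(1) is the tool name, group(2) the tool input
    return ("action", match.group(1).strip(), match.group(2).strip(" ").strip('"'))

print(parse("Thought: I need the weather\nAction: Search\nAction Input: Weather in SF"))
# → ('action', 'Search', 'Weather in SF')
print(parse("I now know the final answer\nFinal Answer: Arg, 'tis sunny"))
# → ('finish', "Arg, 'tis sunny")
```

Note that the `stop=["\nObservation:"]` sequence configured on the agent is what guarantees group(2) ends at the action input rather than running into a hallucinated observation.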
Agent Types | 🦜️🔗 Langchain
Agents use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning a response to the user. Here are the agents available in LangChain.

Zero-shot ReAct

This agent uses the ReAct framework to determine which tool to use based solely on the tool's description. Any number of tools can be provided. This agent requires that a description is provided for each tool.

Note: This is the most general-purpose action agent.

Structured input ReAct

The structured tool chat agent is capable of using multi-input tools. Older agents are configured to specify an action input as a single string, but this agent can use a tool's argument schema to create a structured action input. This is useful for more complex tool usage, like precisely navigating around a browser.

OpenAI Functions

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a function should be called and to respond with the inputs that should be passed to the function. The OpenAI Functions Agent is designed to work with these models.

Conversational

This agent is designed to be used in conversational settings. The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember previous conversation interactions.

Self-ask with search

This agent utilizes a single tool that must be named Intermediate Answer. That tool should be able to look up factual answers to questions. This agent is equivalent to the one in the original self-ask with search paper, where a Google search API was provided as the tool.

ReAct document store

This agent uses the ReAct framework to interact with a docstore. Two tools must be provided: a Search tool and a Lookup tool (they must be named exactly so). The Search tool should search for a document, while the Lookup tool should look up a term in the most recently found document. This agent is equivalent to the one in the original ReAct paper, specifically the Wikipedia example.
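The Thought/Action/Observation cycle shared by the ReAct-style agents above can be sketched as a plain control loop. Everything below is a hypothetical stand-in for illustration: a scripted "LLM" and a toy Search tool, not LangChain code.

```python
def fake_llm(prompt: str) -> str:
    # Scripted stand-in for a real model: first asks for a tool,
    # then finishes once an observation is present in the prompt.
    if "Observation:" not in prompt:
        return "Thought: I should look this up\nAction: Search\nAction Input: weather in SF"
    return "Thought: I now know the final answer\nFinal Answer: sunny"

# Tool name -> callable, mirroring the name/description pairs agents select from
tools = {"Search": lambda query: "sunny"}

def run_agent(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):
        output = fake_llm(scratchpad)
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        # Parse the chosen tool and its input, call the tool,
        # and append the observation for the next iteration
        action = output.split("Action:")[1].split("\n")[0].strip()
        action_input = output.split("Action Input:")[1].strip()
        observation = tools[action](action_input)
        scratchpad += f"\n{output}\nObservation: {observation}"
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What's the weather in SF?"))  # → sunny
```

The real agents differ mainly in how the prompt is built and how the action is encoded (free text, JSON, or an OpenAI function call), but the loop structure is the same.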
OpenAI functions | 🦜️🔗 Langchain
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and to respond with the inputs that should be passed to the function. In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions. The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.

The OpenAI Functions Agent is designed to work with these models.

Install the openai and google-search-results packages, which are required because the LangChain packages call them internally.

```
pip install openai google-search-results
```

Initialize tools

We will first create some tools we can use.

```python
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.chains import LLMMathChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper, SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
```
```python
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
    Tool(
        name="FooBar-DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context",
    ),
]
```

Using LCEL

We will first use LangChain Expression Language to create this agent.

```python
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

from langchain.tools.render import format_tool_to_openai_function

llm_with_tools = llm.bind(
    functions=[format_tool_to_openai_function(t) for t in tools]
)

from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser

agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_to_openai_functions(x["intermediate_steps"]),
} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
    {"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"}
)
```

```
> Entering new AgentExecutor chain...

Invoking: `Search` with `Leo DiCaprio's girlfriend`

['Blake Lively and DiCaprio are believed to have enjoyed a whirlwind five-month romance in 2011. The pair were seen on a yacht together in Cannes, ...']

Invoking: `Calculator` with `0.43`
```
````
> Entering new LLMMathChain chain...
0.43
```text
0.43
```
...numexpr.evaluate("0.43")...

Answer: 0.43
> Finished chain.

Answer: 0.43
I'm sorry, but I couldn't find any information about Leo DiCaprio's current girlfriend. As for raising her age to the power of 0.43, I'm not sure what her current age is, so I can't provide an answer for that.

> Finished chain.
{'input': "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
 'output': "I'm sorry, but I couldn't find any information about Leo DiCaprio's current girlfriend. As for raising her age to the power of 0.43, I'm not sure what her current age is, so I can't provide an answer for that."}
````

Using OpenAIFunctionsAgent

We can now use OpenAIFunctionsAgent, which creates this agent under the hood.

```python
agent_executor = initialize_agent(
    tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True
)
agent_executor.invoke(
    {"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"}
)
```
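For intuition, format_tool_to_openai_function converts each Tool into the JSON-schema description that the OpenAI functions API expects. The sketch below builds that shape by hand for a single-string-input tool; the parameter name "__arg1" and the exact layout are assumptions for illustration, and the real LangChain helper's output may differ in detail.

```python
import json

def tool_to_function_schema(name: str, description: str) -> dict:
    # Shape of one entry in the OpenAI "functions" list: a name, a description
    # the model uses to decide when to call it, and a JSON Schema for its
    # arguments. A single-string-input tool maps to one required string
    # parameter ("__arg1" is an assumed placeholder name).
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {"__arg1": {"type": "string"}},
            "required": ["__arg1"],
        },
    }

schema = tool_to_function_schema(
    "Calculator", "useful for when you need to answer questions about math"
)
print(json.dumps(schema, indent=2))
```

The model then replies not with free text but with the chosen function name plus a JSON arguments object, which is why this agent parses more reliably than regex-based ReAct parsing.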
Structured tool chat | 🦜️🔗 Langchain
The structured tool chat agent is capable of using multi-input tools. Older agents are configured to specify an action input as a single string, but this agent can use the provided tools' args_schema to populate the action input.

```python
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
```

Initialize Tools

We will test the agent using a web browser.

```python
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import (
    create_async_playwright_browser,
    create_sync_playwright_browser,  # A synchronous browser is available, though it isn't compatible with jupyter.
)

# This import is required only for jupyter notebooks, since they have their own eventloop
import nest_asyncio

nest_asyncio.apply()
```

```
pip install playwright
playwright install
```

```python
async_browser = create_async_playwright_browser()
browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = browser_toolkit.get_tools()
```

Use LCEL

We can first construct this agent using LangChain Expression Language.

```python
from langchain import hub

prompt = hub.pull("hwchase17/react-multi-input-json")

from langchain.tools.render import render_text_description_and_args

prompt = prompt.partial(
    tools=render_text_description_and_args(tools),
    tool_names=", ".join([t.name for t in tools]),
)
llm = ChatOpenAI(temperature=0)
llm_with_stop =
```
llm_with_stop = llm.bind(stop=["Observation"])

from langchain.agents.output_parsers import JSONAgentOutputParser
from langchain.agents.format_scratchpad import format_log_to_str

agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_str(x['intermediate_steps']),
} | prompt | llm_with_stop | JSONAgentOutputParser()

from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response = await agent_executor.ainvoke({"input": "Browse to blog.langchain.dev and summarize the text, please."})
print(response['output'])

> Entering new AgentExecutor chain...
Action:
```
{
  "action": "navigate_browser",
  "action_input": {
    "url": "https://blog.langchain.dev"
  }
}
```
Navigating to https://blog.langchain.dev returned status code 200
Action:
```
{
  "action": "extract_text",
  "action_input": {}
}
```
LangChain LangChain Home GitHub Docs By LangChain Release Notes Write with Us Sign in Subscribe The official LangChain blog. Subscribe now Login Featured Posts Announcing LangChain Hub Using LangSmith to Support Fine-tuning Announcing LangSmith, a unified platform for debugging, testing, evaluating, and monitoring your LLM applications Sep 20 Peering Into the Soul of AI Decision-Making with LangSmith 10 min read Sep 20 LangChain + Docugami Webinar: Lessons from Deploying LLMs with LangSmith 3 min read Sep 18 TED AI Hackathon Kickoff (and projects we’d love to see) 2 min read Sep 12 How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel 6 min read Sep 12 OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change 4 min read Load more LangChain © 2023 Sign up Powered by Ghost
Action:
```
{
  "action": "Final Answer",
  "action_input": "The LangChain blog features posts on topics such as using LangSmith for fine-tuning, AI
decision-making with LangSmith, deploying LLMs with LangSmith, and more. It also includes information on LangChain Hub and upcoming webinars. LangChain is a platform for debugging, testing, evaluating, and monitoring LLM applications."
}
```

> Finished chain.
The LangChain blog features posts on topics such as using LangSmith for fine-tuning, AI decision-making with LangSmith, deploying LLMs with LangSmith, and more. It also includes information on LangChain Hub and upcoming webinars. LangChain is a platform for debugging, testing, evaluating, and monitoring LLM applications.

Use off the shelf agent

llm = ChatOpenAI(temperature=0)  # Also works well with Anthropic models
agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
response = await agent_chain.ainvoke({"input": "Browse to blog.langchain.dev and summarize the text, please."})
print(response['output'])

> Entering new AgentExecutor chain...
Action:
```
{
  "action": "navigate_browser",
  "action_input": {
    "url": "https://blog.langchain.dev"
  }
}
```

Observation: Navigating to https://blog.langchain.dev returned status code 200
Thought: I have successfully navigated to the blog.langchain.dev website. Now I need to extract the text from the webpage to summarize it.
Action:
```
{
  "action": "extract_text",
  "action_input": {}
}
```

Observation: LangChain LangChain Home GitHub Docs By LangChain Release Notes Write with Us Sign in Subscribe The official LangChain blog. Subscribe now Login Featured Posts Announcing LangChain Hub Using LangSmith to Support Fine-tuning Announcing LangSmith, a unified platform for debugging, testing, evaluating, and monitoring your LLM applications Sep 20 Peering Into the Soul of AI Decision-Making with LangSmith 10 min read Sep 20 LangChain + Docugami Webinar: Lessons from Deploying LLMs with
LangSmith 3 min read Sep 18 TED AI Hackathon Kickoff (and projects we’d love to see) 2 min read Sep 12 How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel 6 min read Sep 12 OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change 4 min read Load more LangChain © 2023 Sign up Powered by Ghost
Thought: I have successfully navigated to the blog.langchain.dev website. The text on the webpage includes featured posts such as "Announcing LangChain Hub," "Using LangSmith to Support Fine-tuning," "Peering Into the Soul of AI Decision-Making with LangSmith," "LangChain + Docugami Webinar: Lessons from Deploying LLMs with LangSmith," "TED AI Hackathon Kickoff (and projects we’d love to see)," "How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel," and "OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change." There are also links to other pages on the website.

> Finished chain.
I have successfully navigated to the blog.langchain.dev website. The text on the webpage includes featured posts such as "Announcing LangChain Hub," "Using LangSmith to Support Fine-tuning," "Peering Into the Soul of AI Decision-Making with LangSmith," "LangChain + Docugami Webinar: Lessons from Deploying LLMs with LangSmith," "TED AI Hackathon Kickoff (and projects we’d love to see)," "How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel," and "OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change." There are also links to other pages on the website.

Copyright © 2023 LangChain, Inc.
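The transcripts above show the agent emitting its tool choice as a fenced JSON blob with "action" and "action_input" keys, which is what lets a structured chat agent pass multi-key arguments to a tool. As a rough illustration of the kind of work a JSON-action output parser has to do, here is a minimal, hypothetical sketch (`parse_action_blob` is made up for this example and is not a LangChain API):

```python
import json
import re

def parse_action_blob(text):
    # Hypothetical sketch: find the last fenced JSON blob in the model's
    # output and read out the tool name and its (possibly multi-key) input.
    blobs = re.findall(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if not blobs:
        raise ValueError("no fenced JSON action blob found in model output")
    blob = json.loads(blobs[-1])
    return blob["action"], blob.get("action_input", "")

# Build a sample model output like the transcript above
# (the fence markers are assembled to keep this example self-contained).
FENCE = "`" * 3
llm_output = (
    "Action:\n" + FENCE + "\n"
    '{\n  "action": "navigate_browser",\n'
    '  "action_input": {"url": "https://blog.langchain.dev"}\n'
    "}\n" + FENCE
)

action, action_input = parse_action_blob(llm_output)
```

Here `action` comes back as `"navigate_browser"` and `action_input` as a dict, not a single string, which is the key difference from older single-input agents.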
The structured tool chat agent is capable of using multi-input tools.
The structured tool chat agent is capable of using multi-input tools. ->: Webinar: Lessons from Deploying LLMs with LangSmith 3 min read Sep 18 TED AI Hackathon Kickoff (and projects we’d love to see) 2 min read Sep 12 How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel 6 min read Sep 12 OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change 4 min read Load more LangChain © 2023 Sign up Powered by Ghost Thought:I have successfully navigated to the blog.langchain.dev website. The text on the webpage includes featured posts such as "Announcing LangChain Hub," "Using LangSmith to Support Fine-tuning," "Peering Into the Soul of AI Decision-Making with LangSmith," "LangChain + Docugami Webinar: Lessons from Deploying LLMs with LangSmith," "TED AI Hackathon Kickoff (and projects we’d love to see)," "How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel," and "OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change." There are also links to other pages on the website. > Finished chain. I have successfully navigated to the blog.langchain.dev website. The text on the webpage includes featured posts such as "Announcing LangChain Hub," "Using LangSmith to Support Fine-tuning," "Peering Into the Soul of AI Decision-Making with LangSmith," "LangChain + Docugami Webinar: Lessons from Deploying LLMs with LangSmith," "TED AI Hackathon Kickoff (and projects we’d love to see)," "How to Safely Query Enterprise Data with LangChain Agents + SQL + OpenAI + Gretel," and "OpaquePrompts x LangChain: Enhance the privacy of your LangChain application with just one code change." There are also links to other pages on the website.PreviousSelf-ask with searchNextXML AgentInitialize ToolsUse LCELUse off the shelf agentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
2,415
Conversational | 🦜️🔗 Langchain
Conversational

This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well. If we compare it to the standard ReAct agent, the main difference is the prompt.
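The conversational setup that follows threads a `chat_history` variable into the prompt via `ConversationBufferMemory`. As a rough sketch of what that memory component does, here is a minimal, hypothetical stand-in (`SimpleBufferMemory` is invented for illustration and is not LangChain's implementation):

```python
class SimpleBufferMemory:
    # Hypothetical stand-in: accumulate Human/AI turns and render them as
    # one string an agent can substitute into the prompt's {chat_history} slot.

    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.turns = []  # list of (role, text) pairs

    def save_context(self, inputs, outputs):
        # Record one full exchange: the user's input and the agent's reply.
        self.turns.append(("Human", inputs["input"]))
        self.turns.append(("AI", outputs["output"]))

    def load_memory_variables(self, inputs):
        # Render the transcript as plain text for the next prompt.
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return {self.memory_key: history}

memory = SimpleBufferMemory()
memory.save_context({"input": "hi, i am bob"}, {"output": "Hi Bob!"})
history = memory.load_memory_variables({})[memory.memory_key]
```

After one exchange, `history` is the two-line transcript "Human: hi, i am bob" / "AI: Hi Bob!", which is why the agent can later answer "whats my name?" correctly.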
We want it to be much more conversational.

from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.memory import ConversationBufferMemory
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world"
    ),
]
llm = OpenAI(temperature=0)

Using LCEL

We will first show how to create this agent using LCEL.

from langchain.tools.render import render_text_description
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.agents.format_scratchpad import format_log_to_str
from langchain import hub

prompt = hub.pull("hwchase17/react-chat")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_str(x['intermediate_steps']),
    "chat_history": lambda x: x["chat_history"]
} | prompt | llm_with_stop | ReActSingleInputOutputParser()

from langchain.agents import AgentExecutor
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)

agent_executor.invoke({"input": "hi, i am bob"})['output']

> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
Final Answer: Hi Bob, nice to meet you! How can I help you today?

> Finished chain.
'Hi Bob, nice to meet you! How can I help you today?'

agent_executor.invoke({"input": "whats my name?"})['output']

> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
Final Answer: Your name is Bob.

> Finished chain.
'Your name is
Bob.'

agent_executor.invoke({"input": "what are some movies showing 9/21/2023?"})['output']

> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Movies showing 9/21/2023
['September 2023 Movies: The Creator • Dumb Money • Expend4bles • The Kill Room • The Inventor • The Equalizer 3 • PAW Patrol: The Mighty Movie, ...']
Do I need to use a tool? No
Final Answer: According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.

> Finished chain.
'According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.'

Use the off-the-shelf agent

We can also create this agent using the off-the-shelf agent class.

agent_executor = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)

Use a chat model

We can also use a chat model here. The main difference is in the prompts used.

from langchain.chat_models import ChatOpenAI
from langchain import hub

prompt = hub.pull("hwchase17/react-chat-json")
chat_model = ChatOpenAI(temperature=0, model='gpt-4')
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])

from langchain.agents.output_parsers import JSONAgentOutputParser
from langchain.agents.format_scratchpad import format_log_to_messages

# We need some extra steering, or the chat model forgets how to respond sometimes
TEMPLATE_TOOL_RESPONSE = """TOOL RESPONSE:
---------------------
{observation}

USER'S INPUT
--------------------

Okay, so what is the response to my last comment? If using information obtained from the tools you
must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else - even if you just want to respond to the user. Do NOT respond with anything except a JSON snippet no matter what!"""

agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_messages(x['intermediate_steps'], template_tool_response=TEMPLATE_TOOL_RESPONSE),
    "chat_history": lambda x: x["chat_history"],
} | prompt | chat_model_with_stop | JSONAgentOutputParser()

from langchain.agents import AgentExecutor
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)

agent_executor.invoke({"input": "hi, i am bob"})['output']

> Entering new AgentExecutor chain...
```json
{
    "action": "Final Answer",
    "action_input": "Hello Bob, how can I assist you today?"
}
```

> Finished chain.
'Hello Bob, how can I assist you today?'

agent_executor.invoke({"input": "whats my name?"})['output']

> Entering new AgentExecutor chain...
```json
{
    "action": "Final Answer",
    "action_input": "Your name is Bob."
}
```

> Finished chain.
'Your name is Bob.'

agent_executor.invoke({"input": "what are some movies showing 9/21/2023?"})['output']

> Entering new AgentExecutor chain...
```json
{
    "action": "Current Search",
    "action_input": "movies showing on 9/21/2023"
}
```
['September 2023 Movies: The Creator • Dumb Money • Expend4bles • The Kill Room • The Inventor • The Equalizer 3 • PAW Patrol: The Mighty Movie, ...']
```json
{
    "action": "Final Answer",
    "action_input": "Some movies that are showing on 9/21/2023 include 'The Creator', 'Dumb Money', 'Expend4bles', 'The
Kill Room', 'The Inventor', 'The Equalizer 3', and 'PAW Patrol: The Mighty Movie'."
}
```

> Finished chain.
"Some movies that are showing on 9/21/2023 include 'The Creator', 'Dumb Money', 'Expend4bles', 'The Kill Room', 'The Inventor', 'The Equalizer 3', and 'PAW Patrol: The Mighty Movie'."

We can also initialize the agent executor with a predefined agent type.

from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
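Both agent variants above pipe `intermediate_steps` through a scratchpad formatter (`format_log_to_str`) before the prompt sees them. Its core behavior, per the Thought/Action/Observation format these ReAct-style prompts use, can be sketched roughly as follows (a simplified stand-in for illustration, not LangChain's actual implementation):

```python
def format_log_to_scratchpad(intermediate_steps):
    # Simplified stand-in: each step is a (log, observation) pair, where the
    # log is the model's earlier Thought/Action/Action Input text and the
    # observation is the tool's result. Concatenate them into the transcript
    # expected in the prompt's {agent_scratchpad} slot.
    thoughts = ""
    for log, observation in intermediate_steps:
        thoughts += log
        thoughts += f"\nObservation: {observation}\nThought: "
    return thoughts

steps = [(
    "Thought: Do I need to use a tool? Yes\nAction: Current Search\nAction Input: Movies showing 9/21/2023",
    "September 2023 Movies: The Creator ...",
)]
scratchpad = format_log_to_scratchpad(steps)
```

Note the trailing "Thought: " suffix: it cues the model to continue reasoning from exactly where the previous turn left off.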
OpenAI Multi Functions Agent | 🦜️🔗 Langchain
This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model.
Install the openai and google-search-results packages, which are required because the LangChain packages call them internally:

pip install openai google-search-results

from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI

The agent is given the ability to perform search via the SerpAPIWrapper tool. This initializes the SerpAPIWrapper for search functionality (search).

import getpass
import os

os.environ["SERPAPI_API_KEY"] = getpass.getpass()
 ········

# Initialize the OpenAI language model
# Replace <your_api_key> in openai_api_key="<your_api_key>" with your actual OpenAI key.
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")

# Initialize the SerpAPIWrapper for search functionality
# Replace <your_api_key> in serpapi_api_key="<your_api_key>" with your actual SerpAPI key.
search = SerpAPIWrapper()

# Define a list of tools offered by the agent
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful when you need to answer questions about current events. You should ask targeted questions.",
    ),
]
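A Tool is essentially a name, a callable, and a description the model reads when deciding what to call. The agent's dispatch step boils down to looking the tool up by name and passing it the model-chosen input. A minimal sketch of that pattern, where fake_search is a hypothetical stand-in for search.run:

```python
# Sketch of tool dispatch: the agent selects a tool by name and passes it
# the model-chosen input string. fake_search is a hypothetical stand-in
# for an API-backed function like SerpAPIWrapper.run.
def fake_search(query: str) -> str:
    return f"results for: {query}"

tools = [
    {
        "name": "Search",
        "func": fake_search,
        "description": "Useful when you need to answer questions about current events.",
    },
]

def run_tool(name: str, tool_input: str) -> str:
    tool = next(t for t in tools if t["name"] == name)
    return tool["func"](tool_input)

print(run_tool("Search", "weather in Los Angeles"))
# -> results for: weather in Los Angeles
```

The description string does real work here: it is the only signal the model has for choosing between tools, which is why the docs recommend making it specific ("You should ask targeted questions").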
mrkl = initialize_agent(
    tools, llm, agent=AgentType.OPENAI_MULTI_FUNCTIONS, verbose=True
)

# Do this so we can see exactly what's going on under the hood
from langchain.globals import set_debug

set_debug(True)

mrkl.run("What is the weather in LA and SF?")

 [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "What is the weather in LA and SF?" } [llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: What is the weather in LA and SF?" ] } [llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [2.91s] Exiting LLM run with output: { "generations": [ [ { "text": "", "generation_info": null, "message": { "content": "", "additional_kwargs": { "function_call": { "name": "tool_selection", "arguments": "{\n \"actions\": [\n {\n \"action_name\": \"Search\",\n \"action\": {\n \"tool_input\": \"weather in Los Angeles\"\n }\n },\n {\n \"action_name\": \"Search\",\n \"action\": {\n \"tool_input\": \"weather in San Francisco\"\n }\n }\n ]\n}" } }, "example": false } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 81, "completion_tokens": 75, "total_tokens": 156 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [tool/start] [1:chain:AgentExecutor > 3:tool:Search] Entering Tool run with input: "{'tool_input': 'weather in Los Angeles'}" [tool/end] [1:chain:AgentExecutor > 3:tool:Search] [608.693ms] Exiting Tool run with output: "Mostly cloudy early, then sunshine for the afternoon. High 76F. Winds SW at 5 to 10 mph. Humidity59%."
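The debug trace shows what makes this agent type "multi": a single tool_selection function call carries a JSON list of actions, so one model response can dispatch several tool invocations. Extracting those actions from the arguments payload is plain JSON work; the sketch below parses the exact payload seen in the trace (the parsing code is illustrative, not LangChain's internal parser):

```python
import json

# The `arguments` string returned by the model in the trace above.
arguments = """{
  "actions": [
    {"action_name": "Search", "action": {"tool_input": "weather in Los Angeles"}},
    {"action_name": "Search", "action": {"tool_input": "weather in San Francisco"}}
  ]
}"""

payload = json.loads(arguments)
actions = payload["actions"]
for a in actions:
    # Each entry names the tool to call and the input to give it.
    print(a["action_name"], "->", a["action"]["tool_input"])
# Two Search calls are dispatched from one model response.
```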
[tool/start] [1:chain:AgentExecutor > 4:tool:Search] Entering Tool run with input: "{'tool_input': 'weather in San Francisco'}" [tool/end] [1:chain:AgentExecutor > 4:tool:Search] [517.475ms] Exiting Tool run with output: "Partly cloudy this evening, then becoming cloudy after midnight. Low 53F. Winds WSW at 10 to 20 mph. Humidity83%." [llm/start] [1:chain:AgentExecutor > 5:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: What is the weather in LA and SF?\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"Search\",\\n \"action\": {\\n \"tool_input\": \"weather in Los Angeles\"\\n }\\n },\\n {\\n \"action_name\": \"Search\",\\n \"action\": {\\n \"tool_input\": \"weather in San Francisco\"\\n }\\n }\\n ]\\n}'}\nFunction: Mostly cloudy early, then sunshine for the afternoon. High 76F. Winds SW at 5 to 10 mph. Humidity59%.\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"Search\",\\n \"action\": {\\n \"tool_input\": \"weather in Los Angeles\"\\n }\\n },\\n {\\n \"action_name\": \"Search\",\\n \"action\": {\\n \"tool_input\": \"weather in San Francisco\"\\n }\\n }\\n ]\\n}'}\nFunction: Partly cloudy this evening, then becoming cloudy after midnight. Low 53F. Winds WSW at 10 to 20 mph. Humidity83%." ] } [llm/end] [1:chain:AgentExecutor > 5:llm:ChatOpenAI] [2.33s] Exiting LLM run with output: { "generations": [ [ { "text": "The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.", "generation_info": null, "message": { "content": "The weather
"message": { "content": "The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.", "additional_kwargs": {}, "example": false } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 307, "completion_tokens": 54, "total_tokens": 361 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [chain/end] [1:chain:AgentExecutor] [6.37s] Exiting Chain run with output: { "output": "The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%." }

'The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.'

Configuring max iteration behavior

To make sure that our agent doesn't get stuck in excessively long loops, we can set max_iterations. We can also set an early stopping method, which determines the agent's behavior once the maximum number of iterations is hit. By default, early stopping uses the force method, which simply returns a constant string. Alternatively, you can specify the generate method, which makes one FINAL pass through the LLM to generate an output.

mrkl = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    max_iterations=2,
    early_stopping_method="generate",
)

mrkl.run("What is the weather in NYC today, yesterday, and the day before?")

 [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "What is the weather in NYC today, yesterday, and the day before?" }
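The interaction of max_iterations and early_stopping_method can be pictured as a plain loop: after N tool steps, "force" returns a fixed string, while "generate" makes one final LLM pass over whatever was gathered. The sketch below is a simplified stand-in, not the AgentExecutor source; run_agent, final_llm_pass, and the stop message wording are assumptions for illustration.

```python
def final_llm_pass():
    # Stand-in for one final LLM call over the observations so far.
    return "best-effort final answer from observations gathered so far"

def run_agent(steps, max_iterations=2, early_stopping_method="force"):
    # `steps` yields ("tool", ...) actions until the model emits ("finish", answer).
    for i, step in enumerate(steps):
        if step[0] == "finish":
            return step[1]
        if i + 1 >= max_iterations:
            if early_stopping_method == "force":
                # "force": return a constant stop message without another LLM call.
                return "Agent stopped due to iteration limit or time limit."
            # "generate": one FINAL pass through the LLM to produce an answer.
            return final_llm_pass()

# Three tool calls planned, but only two iterations allowed:
plan = [("tool", "NYC today"), ("tool", "NYC yesterday"), ("tool", "NYC day before")]
print(run_agent(plan, max_iterations=2, early_stopping_method="generate"))
```

With "generate", the third lookup is never made, but the model still summarizes the first two observations — exactly the behavior visible in the trace that follows.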
"What is the weather in NYC today, yesterday, and the day before?" } [llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: What is the weather in NYC today, yesterday, and the day before?" ] } [llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [1.27s] Exiting LLM run with output: { "generations": [ [ { "text": "", "generation_info": null, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "Search", "arguments": "{\n \"query\": \"weather in NYC today\"\n}" } } } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 79, "completion_tokens": 17, "total_tokens": 96 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [tool/start] [1:chain:AgentExecutor > 3:tool:Search] Entering Tool run with input: "{'query': 'weather in NYC today'}" [tool/end] [1:chain:AgentExecutor > 3:tool:Search] [3.84s] Exiting Tool run with output: "10:00 am · Feels Like85° · WindSE 4 mph · Humidity78% · UV Index3 of 11 · Cloud Cover81% · Rain Amount0 in ..." [llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: What is the weather in NYC today, yesterday, and the day before?\nAI: {'name': 'Search', 'arguments': '{\\n \"query\": \"weather in NYC today\"\\n}'}\nFunction: 10:00 am · Feels Like85° ·
10:00 am · Feels Like85° · WindSE 4 mph · Humidity78% · UV Index3 of 11 · Cloud Cover81% · Rain Amount0 in ..." ] } [llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [1.24s] Exiting LLM run with output: { "generations": [ [ { "text": "", "generation_info": null, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "Search", "arguments": "{\n \"query\": \"weather in NYC yesterday\"\n}" } } } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 142, "completion_tokens": 17, "total_tokens": 159 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [tool/start] [1:chain:AgentExecutor > 5:tool:Search] Entering Tool run with input: "{'query': 'weather in NYC yesterday'}" [tool/end] [1:chain:AgentExecutor > 5:tool:Search] [1.15s] Exiting Tool run with output: "New York Temperature Yesterday. Maximum temperature yesterday: 81 °F (at 1:51 pm) Minimum temperature yesterday: 72 °F (at 7:17 pm) Average temperature ..." [llm/start] [1:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: What is the weather in NYC today, yesterday, and the day before?\nAI: {'name': 'Search', 'arguments': '{\\n \"query\": \"weather in NYC today\"\\n}'}\nFunction: 10:00 am · Feels Like85° · WindSE 4 mph · Humidity78% · UV Index3 of 11 · Cloud Cover81% · Rain Amount0 in ...\nAI: {'name': 'Search', 'arguments': '{\\n \"query\": \"weather in NYC
'arguments': '{\\n \"query\": \"weather in NYC yesterday\"\\n}'}\nFunction: New York Temperature Yesterday. Maximum temperature yesterday: 81 °F (at 1:51 pm) Minimum temperature yesterday: 72 °F (at 7:17 pm) Average temperature ..." ] } [llm/end] [1:llm:ChatOpenAI] [2.68s] Exiting LLM run with output: { "generations": [ [ { "text": "Today in NYC, the weather is currently 85°F with a southeast wind of 4 mph. The humidity is at 78% and there is 81% cloud cover. There is no rain expected today.\n\nYesterday in NYC, the maximum temperature was 81°F at 1:51 pm, and the minimum temperature was 72°F at 7:17 pm.\n\nFor the day before yesterday, I do not have the specific weather information.", "generation_info": null, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "Today in NYC, the weather is currently 85°F with a southeast wind of 4 mph. The humidity is at 78% and there is 81% cloud cover. There is no rain expected today.\n\nYesterday in NYC, the maximum temperature was 81°F at 1:51 pm, and the minimum temperature was 72°F at 7:17 pm.\n\nFor the day before yesterday, I do not have the specific weather information.", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 160, "completion_tokens": 91, "total_tokens": 251 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [chain/end] [1:chain:AgentExecutor] [10.18s] Exiting Chain run with output: { "output": "Today in NYC, the weather is currently 85°F with a southeast wind of 4 mph. The humidity is at 78% and there is 81% cloud cover. There is no rain expected
is 81% cloud cover. There is no rain expected today.\n\nYesterday in NYC, the maximum temperature was 81°F at 1:51 pm, and the minimum temperature was 72°F at 7:17 pm.\n\nFor the day before yesterday, I do not have the specific weather information." }

'Today in NYC, the weather is currently 85°F with a southeast wind of 4 mph. The humidity is at 78% and there is 81% cloud cover. There is no rain expected today.\n\nYesterday in NYC, the maximum temperature was 81°F at 1:51 pm, and the minimum temperature was 72°F at 7:17 pm.\n\nFor the day before yesterday, I do not have the specific weather information.'

Notice that we never get around to looking up the weather for the day before yesterday, because we hit our max_iterations limit.
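Each [llm/end] event in the trace reports a token_usage dict, and summing those dicts gives the cost of the whole run. A quick sketch using the three usage dicts reported in the trace above:

```python
# token_usage dicts copied from the three llm/end events in the trace above.
usages = [
    {"prompt_tokens": 79, "completion_tokens": 17, "total_tokens": 96},
    {"prompt_tokens": 142, "completion_tokens": 17, "total_tokens": 159},
    {"prompt_tokens": 160, "completion_tokens": 91, "total_tokens": 251},
]

# Sum every field across the calls to get the run's total usage.
totals = {k: sum(u[k] for u in usages) for k in usages[0]}
print(totals)  # {'prompt_tokens': 381, 'completion_tokens': 125, 'total_tokens': 506}
```

Note how prompt_tokens grows with each call: the scratchpad of prior function calls and observations is replayed into every subsequent prompt.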
Self-ask with search | 🦜️🔗 Langchain
This walkthrough showcases the self-ask with search chain.
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

Using LangChain Expression Language

First we will show how to construct this agent from components using LangChain Expression Language.

from langchain.agents.output_parsers import SelfAskOutputParser
from langchain.agents.format_scratchpad import format_log_to_str
from langchain import hub

prompt = hub.pull("hwchase17/self-ask-with-search")
llm_with_stop = llm.bind(stop=["\nIntermediate answer:"])
agent = {
    "input": lambda x: x["input"],
    # Use some custom observation_prefix/llm_prefix for formatting
    "agent_scratchpad": lambda x: format_log_to_str(
        x["intermediate_steps"],
        observation_prefix="\nIntermediate answer: ",
        llm_prefix="",
    ),
} | prompt | llm_with_stop | SelfAskOutputParser()

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the hometown of the reigning men's U.S. Open champion?"})
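The agent_scratchpad lambda is what turns the (action, observation) history into the text the model sees next. The self-ask format introduces each observation with "Intermediate answer:" and needs no "Thought:" prefix, hence the custom observation_prefix and empty llm_prefix. A pure-Python sketch of that formatting idea (not the LangChain source; here each step is a plain (log, observation) tuple rather than an AgentAction):

```python
def format_log_to_str(intermediate_steps,
                      observation_prefix="Observation: ",
                      llm_prefix="Thought: "):
    # Sketch: concatenate each action's raw log, then the observation with
    # its prefix, then the prefix that cues the model's next turn.
    thoughts = ""
    for log, observation in intermediate_steps:
        thoughts += log
        thoughts += f"{observation_prefix}{observation}{llm_prefix}"
    return thoughts

# One completed self-ask step (example data for illustration):
steps = [("Follow up: Who is the reigning men's U.S. Open champion?",
          "Novak Djokovic won the 2023 US Open.")]
scratchpad = format_log_to_str(
    steps,
    observation_prefix="\nIntermediate answer: ",
    llm_prefix="",
)
print(scratchpad)
```

The stop sequence bound on the LLM ("\nIntermediate answer:") is the other half of the trick: it forces the model to pause after asking a follow-up, so the tool can supply the intermediate answer instead.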
> Entering new AgentExecutor chain...
 Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Men's US Open Tennis Champions Novak Djokovic earned his 24th major singles title against 2021 US Open champion Daniil Medvedev, 6-3, 7-6 (7-5), 6-3. The victory ties the Serbian player with the legendary Margaret Court for the most Grand Slam wins across both men's and women's singles.
Follow up: Where is Novak Djokovic from?
Belgrade, Serbia
So the final answer is: Belgrade, Serbia

> Finished chain.

{'input': "What is the hometown of the reigning men's U.S. Open champion?",
 'output': 'Belgrade, Serbia'}

Use off-the-shelf agent

self_ask_with_search = initialize_agent(
    tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)
self_ask_with_search.run(
    "What is the hometown of the reigning men's U.S. Open champion?"
)

> Entering new AgentExecutor chain...
 Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Men's US Open Tennis Champions Novak Djokovic earned his 24th major singles title against 2021 US Open champion Daniil Medvedev, 6-3, 7-6 (7-5), 6-3. The victory ties the Serbian player with the legendary Margaret Court for the most Grand Slam wins across both men's and women's singles.
Follow up: Where is Novak Djokovic from?
Intermediate answer: Belgrade, Serbia
So the final answer is: Belgrade, Serbia

> Finished chain.

'Belgrade, Serbia'
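The scratchpad formatting used above (the observation_prefix/llm_prefix arguments) can be illustrated with a small standalone sketch. The names Step and format_scratchpad below are hypothetical, not the LangChain API; this just mirrors the idea of interleaving model output with "Intermediate answer:" observations.

```python
# Illustrative sketch of assembling a self-ask scratchpad.
# Step and format_scratchpad are hypothetical names, not LangChain's API.
from dataclasses import dataclass

@dataclass
class Step:
    log: str          # text the model emitted (ends with a follow-up question)
    observation: str  # what the search tool returned

def format_scratchpad(steps, observation_prefix="\nIntermediate answer: ", llm_prefix=""):
    # Concatenate each model turn with its tool observation,
    # using the same prefixes shown in the walkthrough above.
    out = ""
    for step in steps:
        out += step.log + observation_prefix + step.observation + llm_prefix
    return out

steps = [Step(log="Follow up: Who is the reigning men's U.S. Open champion?",
              observation="Novak Djokovic")]
print(format_scratchpad(steps))
```

On each loop iteration the agent sees its prior follow-up questions and the search results appended in this shape, so the model can decide whether to ask another follow-up or emit the final answer.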
2,433
XML Agent | 🦜️🔗 Langchain
Some language models (like Anthropic's Claude) are particularly good at reasoning/writing XML. This goes over how to use an agent that uses XML when prompting.
2,434
XML Agent

Some language models (like Anthropic's Claude) are particularly good at reasoning/writing XML. This goes over how to use an agent that uses XML when prompting.

Initialize the tools

We will initialize some fake tools for demo purposes.

from langchain.agents import tool

@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"

tools = [search]

from langchain.chat_models import ChatAnthropic

model = ChatAnthropic(model="claude-2")

Use LangChain Expression Language

We will first show how to create this agent using LangChain Expression Language.

from langchain.tools.render import render_text_description
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain.agents.format_scratchpad import format_xml
from langchain import hub

prompt = hub.pull("hwchase17/xml-agent")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
llm_with_stop = model.bind(stop=["</tool_input>"])
agent = {
    "question": lambda x: x["question"],
    "agent_scratchpad": lambda x: format_xml(x["intermediate_steps"]),
} | prompt | llm_with_stop | XMLAgentOutputParser()

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"question": "whats the weather in New york?"})

> Entering new AgentExecutor chain...
 <tool>search</tool>
2,435
AgentExecutor chain...
 <tool>search</tool>
<tool_input>weather in new york
32 degrees
 <tool>search</tool>
<tool_input>weather in new york
32 degrees
 <final_answer>
The weather in New York is 32 degrees.
</final_answer>

> Finished chain.

{'question': 'whats the weather in New york?',
 'output': '\nThe weather in New York is 32 degrees.\n'}

Use off-the-shelf agent

from langchain.chains import LLMChain
from langchain.agents import XMLAgent

chain = LLMChain(
    llm=model,
    prompt=XMLAgent.get_default_prompt(),
    output_parser=XMLAgent.get_default_output_parser(),
)
agent = XMLAgent(tools=tools, llm_chain=chain)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "whats the weather in New york?"})

> Entering new AgentExecutor chain...
 <tool>search</tool>
<tool_input>weather in new york
32 degrees
<final_answer>The weather in New York is 32 degrees

> Finished chain.

{'input': 'whats the weather in New york?',
 'output': 'The weather in New York is 32 degrees'}
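The transcript above shows the wire format the agent emits: <tool>/<tool_input> pairs for actions and <final_answer> for termination (note the stop sequence means </tool_input> may be missing). A minimal sketch of parsing that format — an illustration of the idea, not the XMLAgentOutputParser implementation:

```python
# Minimal sketch of parsing the XML action format shown in the transcript.
# Illustrative only; not the actual XMLAgentOutputParser.
import re

def parse_xml_action(text: str):
    # The model may omit closing tags because generation stops at </tool_input>,
    # so both patterns tolerate a missing close tag.
    final = re.search(r"<final_answer>(.*?)(?:</final_answer>|$)", text, re.DOTALL)
    if final:
        return ("finish", final.group(1).strip())
    tool = re.search(r"<tool>(.*?)</tool>", text, re.DOTALL)
    tool_input = re.search(r"<tool_input>(.*?)(?:</tool_input>|$)", text, re.DOTALL)
    return ("action", tool.group(1), tool_input.group(1))

print(parse_xml_action("<tool>search</tool><tool_input>weather in new york"))
```

Checking for <final_answer> first matters: a scratchpad that already contains tool calls should still terminate once the final answer appears.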
2,436
ReAct | 🦜️🔗 Langchain
This walkthrough showcases using an agent to implement the ReAct logic.
2,437
ReAct

This walkthrough showcases using an agent to implement the ReAct logic.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

First, let's load the language model we're going to use to control the agent.

llm = OpenAI(temperature=0)

Next, let's load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.

tools = load_tools(["serpapi", "llm-math"], llm=llm)

Using LCEL

We will first show how to create the agent using LCEL.

from langchain.tools.render import render_text_description
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.agents.format_scratchpad import format_log_to_str
from langchain import hub

prompt = hub.pull("hwchase17/react")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
} | prompt | llm_with_stop | ReActSingleInputOutputParser()

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"})

> Entering new AgentExecutor chain...
 I need to
2,438
Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
model Vittoria Ceretti
 I need to find out Vittoria Ceretti's age
Action: Search
Action Input: "Vittoria Ceretti age"
25 years
 I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Answer: 3.991298452658078
 I now know the final answer
Final Answer: Leo DiCaprio's girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.

> Finished chain.

{'input': "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
 'output': "Leo DiCaprio's girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078."}

Using ZeroShotReactAgent

We will now show how to use the agent with an off-the-shelf agent implementation.

agent_executor = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent_executor.invoke({"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"})

> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: model Vittoria Ceretti
Thought: I need to find out Vittoria Ceretti's age
Action: Search
Action Input: "Vittoria Ceretti age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Leo DiCaprio's girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.

> Finished chain.

{'input': "Who is Leo DiCaprio's girlfriend? What is her
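The transcripts above all follow the Thought/Action/Action Input/Observation grammar, and the output parser's job is to pull the tool name and input out of each completion. A standalone sketch of that extraction — the core idea only, not the actual ReActSingleInputOutputParser:

```python
# Illustrative sketch of parsing a ReAct completion into an action or a finish.
# Not LangChain's ReActSingleInputOutputParser; just the core extraction logic.
import re

ACTION_RE = re.compile(r"Action: (.*?)\nAction Input: (.*)", re.DOTALL)

def parse_react(text: str):
    # A "Final Answer:" anywhere signals termination.
    if "Final Answer:" in text:
        return ("finish", text.split("Final Answer:")[-1].strip())
    m = ACTION_RE.search(text)
    if m is None:
        raise ValueError(f"Could not parse: {text!r}")
    return ("action", m.group(1).strip(), m.group(2).strip())

print(parse_react('Thought: find her age\nAction: Search\nAction Input: "Vittoria Ceretti age"'))
```

The stop sequence "\nObservation" bound on the LLM above is what guarantees the completion ends right after the Action Input line, so a parser like this never has to strip a hallucinated observation.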
2,439
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
 'output': "Leo DiCaprio's girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078."}

Using chat models

You can also create ReAct agents that use chat models instead of LLMs as the agent driver.

The main difference here is a different prompt. We will use JSON to encode the agent's actions (chat models are a bit tougher to steer, so using JSON helps to enforce the output format).

from langchain.chat_models import ChatOpenAI

chat_model = ChatOpenAI(temperature=0)
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])

from langchain.agents.output_parsers import ReActJsonSingleInputOutputParser

agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
} | prompt | chat_model_with_stop | ReActJsonSingleInputOutputParser()
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"})

We can also use an off-the-shelf agent class.

agent = initialize_agent(
    tools, chat_model, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
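The reason JSON helps enforce the output format can be sketched concretely: the action becomes a machine-checkable blob rather than free text. The blob shape below is a hypothetical illustration of the idea; the real format is whatever the "hwchase17/react-json" prompt and ReActJsonSingleInputOutputParser agree on.

```python
# Sketch of extracting a JSON-encoded action from a chat model completion.
# The {"action": ..., "action_input": ...} shape is assumed for illustration.
import json
import re

def parse_json_action(text: str):
    # Grab the first {...} object in the completion and decode it.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in: {text!r}")
    blob = json.loads(match.group(0))
    return blob["action"], blob["action_input"]

print(parse_json_action('Thought: search it\n{"action": "Search", "action_input": "Vittoria Ceretti age"}'))
```

Malformed output fails loudly at json.loads instead of being silently misread, which is the practical advantage over parsing free-form "Action:" lines with chat models.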
2,440
ReAct document store | 🦜️🔗 Langchain
This walkthrough showcases using an agent to implement the ReAct logic for working with document store specifically.
2,441
ReAct document store

This walkthrough showcases using an agent to implement the ReAct logic for working with a document store specifically.

from langchain.llms import OpenAI
from langchain.docstore import Wikipedia
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.react.base import DocstoreExplorer

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="useful for when you need to ask with search",
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="useful for when you need to ask with lookup",
    ),
]
llm = OpenAI(temperature=0, model_name="text-davinci-002")
react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True)

question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
react.run(question)

> Entering new AgentExecutor chain...
Thought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. Then I need to find which President the admiral served under.
Action: Search[David Chanoff]
Observation: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A.
2,442
His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign policy for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought: The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe. I need to find which President he served under.
Action: Search[William J. Crowe]
Observation: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought: William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton, so the answer is Bill Clinton.
Action: Finish[Bill Clinton]

> Finished chain.

'Bill Clinton'
2,443
Custom multi-action agent | 🦜️🔗 Langchain
This notebook goes through how to create your own custom agent.
2,444
Custom multi-action agent

This notebook goes through how to create your own custom agent.

An agent consists of two parts:
Tools: the tools the agent has available to use.
The agent class itself: this decides which action to take.

In this notebook we walk through how to create a custom agent that predicts/takes multiple steps at a time.

from langchain.agents import Tool, AgentExecutor, BaseMultiActionAgent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

def random_word(query: str) -> str:
    print("\nNow I'm doing this!")
    return "foo"

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="RandomWord",
        func=random_word,
        description="call this to get a random word.",
    ),
]

from typing import List, Tuple, Any, Union
from langchain.schema import AgentAction, AgentFinish

class FakeAgent(BaseMultiActionAgent):
    """Fake Custom Agent."""
"""Fake Custom Agent.""" @property def input_keys(self): return ["input"] def plan( self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any ) -> Union[List[AgentAction], AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations **kwargs: User inputs. Returns: Action specifying what tool to use. """ if len(intermediate_steps) == 0: return [ AgentAction(tool="Search", tool_input=kwargs["input"], log=""), AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""), ] else: return AgentFinish(return_values={"output": "bar"}, log="") async def aplan( self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any ) -> Union[List[AgentAction], AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations **kwargs: User inputs. Returns: Action specifying what tool to use. """ if len(intermediate_steps) == 0: return [ AgentAction(tool="Search", tool_input=kwargs["input"], log=""), AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""), ] else: return AgentFinish(return_values={"output": "bar"}, log="")agent = FakeAgent()agent_executor = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True)agent_executor.run("How many people live in canada as of 2023?") > Entering new AgentExecutor chain... The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data. Now I'm doing this! foo > Finished chain. 'bar'PreviousCustom
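The executor behavior in the trace above can be mirrored in plain Python. The sketch below is a deliberately simplified stand-in for AgentExecutor, not LangChain's actual implementation; the Action, Finish, run_agent, and fake_plan names are hypothetical. It shows the key property of a multi-action agent: plan() may return a *list* of actions, all of which are executed before the agent is consulted again.

```python
from typing import Any, Dict, List, Tuple, Union


# Hypothetical stand-ins for langchain.schema.AgentAction / AgentFinish
class Action:
    def __init__(self, tool: str, tool_input: str):
        self.tool, self.tool_input = tool, tool_input


class Finish:
    def __init__(self, return_values: Dict[str, Any]):
        self.return_values = return_values


def run_agent(plan, tools: Dict[str, Any], user_input: str) -> str:
    """Simplified executor loop: plan -> run every returned action -> repeat until Finish."""
    intermediate_steps: List[Tuple[Action, str]] = []
    while True:
        decision = plan(intermediate_steps, input=user_input)
        if isinstance(decision, Finish):
            return decision.return_values["output"]
        # A multi-action agent returns a list of actions; run each tool in order
        for action in decision:
            observation = tools[action.tool](action.tool_input)
            intermediate_steps.append((action, observation))


def fake_plan(steps, **kwargs) -> Union[List[Action], Finish]:
    # First call: take two actions at once; second call: finish
    if not steps:
        return [Action("Search", kwargs["input"]), Action("RandomWord", kwargs["input"])]
    return Finish({"output": "bar"})


tools = {"Search": lambda q: "search result", "RandomWord": lambda q: "foo"}
print(run_agent(fake_plan, tools, "How many people live in canada as of 2023?"))  # bar
```

Both tools run on the first pass, which matches the trace above: the search observation and "Now I'm doing this!" appear before the chain finishes with 'bar'.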
Custom LLM Agent (with a ChatModel) | 🦜️🔗 Langchain
This notebook goes through how to create your own custom agent based on a chat model.

An LLM chat agent consists of four parts:

- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- ChatModel: This is the language model that powers the agent
- stop sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLM output into an AgentAction or AgentFinish object

The LLM Agent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:

- Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
- If the Agent returns an AgentFinish, then returns that directly to the user
- If the Agent returns an AgentAction, then uses that to call a tool and get an Observation
- Repeats, passing the AgentAction and Observation back to the Agent until an AgentFinish is emitted

AgentAction is a response that consists of action and
action_input. action refers to which tool to use, and action_input refers to the input to that tool. log can also be provided as more context (that can be used for logging, tracing, etc.).

AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.

In this notebook we walk through how to create a custom LLM agent.

Set up environment

Do necessary imports, etc.

pip install langchain
pip install google-search-results
pip install openai

from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import BaseChatPromptTemplate
from langchain.utilities import SerpAPIWrapper
from langchain.chains.llm import LLMChain
from langchain.chat_models import ChatOpenAI
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, HumanMessage
import re
from getpass import getpass

Set up tools

Set up any tools the agent may want to use. It may be necessary to put these in the prompt (so that the agent knows to use them).

SERPAPI_API_KEY = getpass()

# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

Prompt template

This instructs the agent on what to do. Generally, the template should incorporate:

- tools: which tools the agent has access to and how and when to call them.
- intermediate_steps: These are tuples of previous (AgentAction, Observation) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
- input: generic user input

# Set up the base template
template = """Complete the objective as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should
always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

These were previous tasks you completed:

Begin!

Question: {input}
{agent_scratchpad}"""


# Set up a prompt template
class CustomPromptTemplate(BaseChatPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format_messages(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join(
            [f"{tool.name}: {tool.description}" for tool in self.tools]
        )
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        formatted = self.template.format(**kwargs)
        return [HumanMessage(content=formatted)]


prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables
    # because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"],
)

Output parser

The output parser is responsible for parsing the LLM output into AgentAction and AgentFinish. This usually depends heavily on the prompt
used. This is where you can change the parsing to do retries, handle whitespace, etc.

class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(
            tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output
        )

output_parser = CustomOutputParser()

Set up LLM

Choose the LLM you want to use!

OPENAI_API_KEY = getpass()
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)

Define the stop sequence

This is important because it tells the LLM when to stop generating.

This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an Observation (otherwise, the LLM may hallucinate an observation for you).

Set up the Agent

We can now combine everything to set up our agent:

# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)

Use the
Agent

Now we can use it!

agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)
agent_executor.run("Search for Leo DiCaprio's girlfriend on the internet.")

    > Entering new AgentExecutor chain...
    Thought: I should use a reliable search engine to get accurate information.
    Action: Search
    Action Input: "Leo DiCaprio girlfriend"

    Observation: He went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.
    I have found the answer to the question.
    Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.

    > Finished chain.
    "Leo DiCaprio's current girlfriend is Camila Morrone."
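The regex used in CustomOutputParser can be exercised on its own, without LangChain. The sketch below (parse_action is a hypothetical helper, not a LangChain API) shows how a completion like the one in the trace above is split into a tool name and a tool input:

```python
import re
from typing import Tuple

# Same pattern as in CustomOutputParser above
ACTION_RE = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"


def parse_action(llm_output: str) -> Tuple[str, str]:
    """Extract (tool, tool_input) from a ReAct-style completion."""
    match = re.search(ACTION_RE, llm_output, re.DOTALL)
    if not match:
        raise ValueError(f"Could not parse LLM output: `{llm_output}`")
    # group(1) is the tool name, group(2) the raw tool input
    return match.group(1).strip(), match.group(2).strip(" ").strip('"')


completion = (
    "Thought: I should use a reliable search engine to get accurate information.\n"
    'Action: Search\nAction Input: "Leo DiCaprio girlfriend"'
)
print(parse_action(completion))  # ('Search', 'Leo DiCaprio girlfriend')
```

The `\d*` pieces tolerate numbered variants like "Action 1:" that some models emit, and re.DOTALL lets the tool input span multiple lines.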
Custom MRKL agent | 🦜️🔗 Langchain
This notebook goes through how to create your own custom MRKL agent.

A MRKL agent consists of three parts:

- Tools: The tools the agent has available to use.
- LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.
- The agent class itself: this parses the output of the LLMChain to determine which action to take.

In this notebook we walk through how to create a custom MRKL agent by creating a custom LLMChain.

Custom LLMChain

The first way to create a custom agent is to use an existing Agent class, but use a custom LLMChain. This is the simplest way to create a custom Agent. It is highly recommended that you work with the ZeroShotAgent, as at the moment that is by far the most generalizable one. Most of the work in creating the custom LLMChain comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt says to produce text in that format. Additionally, we currently require an
agent_scratchpad input variable to put notes on previous actions and observations. This should almost always be the final part of the prompt. Besides those instructions, however, you can customize the prompt as you wish.

To ensure that the prompt contains the appropriate instructions, we will use a helper method on that class. The helper method for the ZeroShotAgent takes the following arguments:

- tools: List of tools the agent will have access to, used to format the prompt.
- prefix: String to put before the list of tools.
- suffix: String to put after the list of tools.
- input_variables: List of input variables the final prompt will expect.

For this exercise, we will give our agent access to Google Search, and we will customize it so that it answers as a pirate.

from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.chains import LLMChain

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

prefix = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:"""
suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"

Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"]
)

In case we are curious, we can now take a look at the final prompt template to see what it looks like when it's all put together.

print(prompt.template)

    Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

    Search: useful for when you need to answer questions about current events

    Use the following format:

    Question:
    the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [Search]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question

    Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"

    Question: {input}
    {agent_scratchpad}

Note that we are able to feed agents a self-defined prompt template, i.e. one not restricted to the prompt generated by the create_prompt function, assuming it meets the agent's requirements. For example, for ZeroShotAgent, we need to ensure the prompt meets the following requirement: there should be a string starting with "Action:" and a following string starting with "Action Input:", and both should be separated by a newline.

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)
agent_executor.run("How many people live in canada as of 2023?")

    > Entering new AgentExecutor chain...
    Thought: I need to find out the population of Canada
    Action: Search
    Action Input: Population of Canada 2023

    Observation: The current population of Canada is 38,661,927 as of Sunday, April 16, 2023, based on Worldometer elaboration of the latest United Nations data.
    Thought: I now know the final answer
    Final Answer: Arrr, Canada be havin' 38,661,927 people livin' there as of 2023!

    > Finished chain.
    "Arrr, Canada be havin' 38,661,927 people livin' there as of 2023!"

Multiple inputs

Agents can also work with prompts that require multiple inputs.

prefix = """Answer
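The "Action:" / "Action Input:" requirement described above can be checked mechanically against a candidate completion. This is a minimal sketch, and the meets_zero_shot_format helper is hypothetical, not a LangChain API:

```python
import re


def meets_zero_shot_format(completion: str) -> bool:
    """Check that an "Action:" line is followed, after a newline, by "Action Input:"."""
    # No re.DOTALL: `.*` cannot cross a line, so the explicit \n must be present
    return re.search(r"Action\s*:.*\nAction\s*Input\s*:", completion) is not None


ok = "Thought: look it up\nAction: Search\nAction Input: Population of Canada 2023"
bad = "Action: Search -- Action Input: Population of Canada 2023"  # no newline between them
print(meets_zero_shot_format(ok), meets_zero_shot_format(bad))  # True False
```

A check like this can be useful when experimenting with self-defined prompt templates, to confirm that example completions still parse before wiring the prompt into the agent.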
This notebook goes through how to create your own custom MRKL agent.
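The Action/Action Input format requirement above can be checked mechanically. A minimal plain-Python sketch of extracting the tool name and tool input from a ReAct-style completion (this is an illustration, not LangChain's actual output parser):

```python
import re

def parse_action(llm_output: str) -> tuple[str, str]:
    """Extract the tool name and tool input from a ReAct-style completion.

    Expects a line starting with "Action:" followed by a line starting with
    "Action Input:", separated by a newline, as the ZeroShotAgent prompt
    format requires.
    """
    match = re.search(r"Action:\s*(.*?)\nAction Input:\s*(.*)", llm_output, re.DOTALL)
    if match is None:
        raise ValueError(f"Could not parse LLM output: {llm_output!r}")
    return match.group(1).strip(), match.group(2).strip()

completion = (
    "Thought: I need to find out the population of Canada\n"
    "Action: Search\n"
    "Action Input: Population of Canada 2023"
)
print(parse_action(completion))  # → ('Search', 'Population of Canada 2023')
```

If a self-defined template drops either marker, parsing fails in exactly this way, which is why the two-string requirement matters.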
that require multiple inputs.prefix = """Answer the following questions as best you can. You have access to the following tools:"""suffix = """When answering, you MUST speak in the following language: {language}.Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "language", "agent_scratchpad"],)llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)agent_executor = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True)agent_executor.run( input="How many people live in canada as of 2023?", language="italian") > Entering new AgentExecutor chain... Thought: I should look for recent population estimates. Action: Search Action Input: Canada population 2023 Observation: 39,566,248 Thought: I should double check this number. Action: Search Action Input: Canada population estimates 2023 Observation: Canada's population was estimated at 39,566,248 on January 1, 2023, after a record population growth of 1,050,110 people from January 1, 2022, to January 1, 2023. Thought: I now know the final answer. Final Answer: La popolazione del Canada è stata stimata a 39.566.248 il 1° gennaio 2023, dopo un record di crescita demografica di 1.050.110 persone dal 1° gennaio 2022 al 1° gennaio 2023. > Finished chain. 'La popolazione del Canada è stata stimata a 39.566.248 il 1° gennaio 2023, dopo un record di crescita demografica di 1.050.110 persone dal 1° gennaio 2022 al 1° gennaio 2023.' Copyright © 2023 LangChain, Inc.
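The multiple-inputs pattern above boils down to a template with several placeholder variables. A toy plain-Python stand-in for `ZeroShotAgent.create_prompt` (the function names and tool descriptions here are illustrative, not LangChain's internals):

```python
# Stitch a prefix, tool descriptions, and a suffix into one template string
# with multiple input variables, then fill all of them at run time.
def create_prompt(tools: dict, prefix: str, suffix: str) -> str:
    tool_lines = "\n".join(f"{name}: {desc}" for name, desc in tools.items())
    return f"{prefix}\n{tool_lines}\n{suffix}"

tools = {"Search": "useful for questions about current events"}
prefix = "Answer the following questions as best you can. You have access to the following tools:"
suffix = (
    "When answering, you MUST speak in the following language: {language}.\n"
    "Question: {input}\n{agent_scratchpad}"
)
template = create_prompt(tools, prefix, suffix)
prompt = template.format(
    input="How many people live in canada as of 2023?",
    language="italian",
    agent_scratchpad="",
)
print("italian" in prompt)  # → True
```

Every variable named in the suffix must be supplied on each call, which is why the executor is invoked with both `input=` and `language=` above.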
Add Memory to OpenAI Functions Agent | 🦜️🔗 Langchain
This notebook goes over how to add memory to an OpenAI Functions agent.
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toAdd Memory to OpenAI Functions AgentAdd Memory to OpenAI Functions AgentThis notebook goes over how to add memory to an OpenAI Functions agent.from langchain.chains import LLMMathChainfrom langchain.llms import OpenAIfrom langchain.utilities import SerpAPIWrapperfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypefrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")search = SerpAPIWrapper()llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events. You should ask targeted questions", ), Tool( name="Calculator", func=llm_math_chain.run,
func=llm_math_chain.run, description="useful for when you need to answer questions about math", ), Tool( name="FooBar-DB", func=db_chain.run, description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context", ),]from langchain.prompts import MessagesPlaceholderfrom langchain.memory import ConversationBufferMemoryagent_kwargs = { "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],}memory = ConversationBufferMemory(memory_key="memory", return_messages=True)agent = initialize_agent( tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True, agent_kwargs=agent_kwargs, memory=memory,)agent.run("hi") > Entering new chain... Hello! How can I assist you today? > Finished chain. 'Hello! How can I assist you today?'agent.run("my name is bob") > Entering new chain... Nice to meet you, Bob! How can I help you today? > Finished chain. 'Nice to meet you, Bob! How can I help you today?'agent.run("whats my name") > Entering new chain... Your name is Bob. > Finished chain. 'Your name is Bob.'
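The reason "whats my name" can be answered later is that the buffered history is injected into the prompt on every turn. A toy plain-Python stand-in for `ConversationBufferMemory` (not LangChain's implementation):

```python
# Accumulate (role, text) pairs and replay them as a flat transcript that
# gets injected into the agent prompt on each turn.
class BufferMemory:
    def __init__(self):
        self.messages: list[tuple[str, str]] = []

    def save_context(self, user: str, ai: str) -> None:
        self.messages.append(("human", user))
        self.messages.append(("ai", ai))

    def load(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

memory = BufferMemory()
memory.save_context("my name is bob", "Nice to meet you, Bob!")
print("bob" in memory.load())  # → True
```

The `MessagesPlaceholder(variable_name="memory")` in the real agent plays the role of the injection point for this transcript.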
Access intermediate steps | 🦜️🔗 Langchain
In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples.
In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples.from langchain.agents import load_toolsfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypefrom langchain.llms import OpenAIInitialize the components needed for the agent.llm = OpenAI(temperature=0, model_name="text-davinci-002")tools = load_tools(["serpapi", "llm-math"], llm=llm)Initialize the agent with return_intermediate_steps=True:agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, return_intermediate_steps=True,)response = agent( { "input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?" }) > Entering new AgentExecutor chain... I should look up who Leo DiCaprio is dating Action: Search Action Input: "Leo DiCaprio
Action: Search Action Input: "Leo DiCaprio girlfriend" Observation: Camila Morrone Thought: I should look up how old Camila Morrone is Action: Search Action Input: "Camila Morrone age" Observation: 25 years Thought: I should calculate what 25 years raised to the 0.43 power is Action: Calculator Action Input: 25^0.43 Observation: Answer: 3.991298452658078 Thought: I now know the final answer Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and she is 3.991298452658078 years old. > Finished chain.# The actual return type is a NamedTuple for the agent action, and then an observationprint(response["intermediate_steps"]) [(AgentAction(tool='Search', tool_input='Leo DiCaprio girlfriend', log=' I should look up who Leo DiCaprio is dating\nAction: Search\nAction Input: "Leo DiCaprio girlfriend"'), 'Camila Morrone'), (AgentAction(tool='Search', tool_input='Camila Morrone age', log=' I should look up how old Camila Morrone is\nAction: Search\nAction Input: "Camila Morrone age"'), '25 years'), (AgentAction(tool='Calculator', tool_input='25^0.43', log=' I should calculate what 25 years raised to the 0.43 power is\nAction: Calculator\nAction Input: 25^0.43'), 'Answer: 3.991298452658078\n')]from langchain.load.dump import dumpsprint(dumps(response["intermediate_steps"], pretty=True)) [ [ [ "Search", "Leo DiCaprio girlfriend", " I should look up who Leo DiCaprio is dating\nAction: Search\nAction Input: \"Leo DiCaprio girlfriend\"" ], "Camila Morrone" ], [ [ "Search", "Camila Morrone age", " I should look up how old Camila Morrone is\nAction: Search\nAction Input: \"Camila Morrone age\"" ], "25 years" ], [ [ "Calculator", "25^0.43", " I should calculate what 25 years raised to the 0.43 power is\nAction: Calculator\nAction Input: 25^0.43" ], "Answer:
Input: 25^0.43" ], "Answer: 3.991298452658078\n" ] ]
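The `intermediate_steps` value printed above is just a list of (action, observation) pairs accumulated across the agent loop. A self-contained sketch of that shape (the step data is copied from the trace; the `NamedTuple` here is a stand-in, not LangChain's `AgentAction`):

```python
from typing import NamedTuple

# Stand-in for the AgentAction named tuple returned by the agent.
class AgentAction(NamedTuple):
    tool: str
    tool_input: str
    log: str

# Each loop iteration appends one (action, observation) pair.
intermediate_steps: list[tuple[AgentAction, str]] = []
intermediate_steps.append(
    (AgentAction("Search", "Leo DiCaprio girlfriend", "..."), "Camila Morrone")
)
intermediate_steps.append(
    (AgentAction("Calculator", "25^0.43", "..."), "Answer: 3.991298452658078")
)

tools_used = [action.tool for action, _ in intermediate_steps]
print(tools_used)  # → ['Search', 'Calculator']
```

Downstream code can audit or replay the run from this structure without re-invoking any tools.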
Use ToolKits with OpenAI Functions | 🦜️🔗 Langchain
This notebook shows how to use the OpenAI functions agent with arbitrary toolkits.
This notebook shows how to use the OpenAI functions agent with arbitrary toolkits.from langchain.chains import LLMMathChainfrom langchain.llms import OpenAIfrom langchain.utilities import SerpAPIWrapperfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypefrom langchain.chat_models import ChatOpenAIfrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.schema import SystemMessageLoad the toolkit:db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")toolkit = SQLDatabaseToolkit(llm=ChatOpenAI(), db=db)Set a system message specific to that toolkit:agent_kwargs = { "system_message": SystemMessage(content="You are an expert SQL data analyst.")}llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")agent = initialize_agent( toolkit.get_tools(), llm, agent=AgentType.OPENAI_FUNCTIONS,
llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True, agent_kwargs=agent_kwargs,)agent.run("how many different artists are there?") > Entering new chain... Invoking: `sql_db_query` with `{'query': 'SELECT COUNT(DISTINCT artist_name) AS num_artists FROM artists'}` Error: (sqlite3.OperationalError) no such table: artists [SQL: SELECT COUNT(DISTINCT artist_name) AS num_artists FROM artists] (Background on this error at: https://sqlalche.me/e/20/e3q8) Invoking: `sql_db_list_tables` with `{}` MediaType, Track, Playlist, sales_table, Customer, Genre, PlaylistTrack, Artist, Invoice, Album, InvoiceLine, Employee Invoking: `sql_db_query` with `{'query': 'SELECT COUNT(DISTINCT artist_id) AS num_artists FROM Artist'}` Error: (sqlite3.OperationalError) no such column: artist_id [SQL: SELECT COUNT(DISTINCT artist_id) AS num_artists FROM Artist] (Background on this error at: https://sqlalche.me/e/20/e3q8) Invoking: `sql_db_query` with `{'query': 'SELECT COUNT(DISTINCT Name) AS num_artists FROM Artist'}` [(275,)]There are 275 different artists in the database. > Finished chain. 'There are 275 different artists in the database.'
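The trace above ends with a `COUNT(DISTINCT Name)` query against the `Artist` table after two failed guesses at the schema. The final query itself can be reproduced with the standard-library `sqlite3` module against a toy table (the real Chinook column names are assumed to be `ArtistId` and `Name`, per the trace):

```python
import sqlite3

# In-memory stand-in for the Chinook Artist table, including a duplicate
# name so COUNT(DISTINCT Name) differs from COUNT(*).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Artist (ArtistId INTEGER PRIMARY KEY, Name TEXT)")
conn.executemany(
    "INSERT INTO Artist (Name) VALUES (?)",
    [("AC/DC",), ("Accept",), ("Aerosmith",), ("AC/DC",)],
)
(num_artists,) = conn.execute(
    "SELECT COUNT(DISTINCT Name) AS num_artists FROM Artist"
).fetchone()
print(num_artists)  # → 3
```

Note how the agent recovered: the `no such table` / `no such column` errors were fed back as observations, prompting it to call `sql_db_list_tables` and correct the query rather than give up.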
Shared memory across agents and tools | 🦜️🔗 Langchain
This notebook goes over adding memory to both an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to both an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:Adding memory to an LLM ChainCustom AgentsWe are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. The summarization tool also needs access to the conversation memory.from langchain.agents import ZeroShotAgent, Tool, AgentExecutorfrom langchain.memory import ConversationBufferMemory, ReadOnlySharedMemoryfrom langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.utilities import GoogleSearchAPIWrappertemplate = """This is a conversation between a human and a bot:{chat_history}Write a summary of the conversation for {input}:"""prompt = PromptTemplate(input_variables=["input", "chat_history"], template=template)memory =
"chat_history"], template=template)memory = ConversationBufferMemory(memory_key="chat_history")readonlymemory = ReadOnlySharedMemory(memory=memory)summary_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=readonlymemory, # use the read-only memory to prevent the tool from modifying the memory)search = GoogleSearchAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events", ), Tool( name="Summary", func=summary_chain.run, description="useful for when you summarize a conversation. The input to this tool should be a string, representing who will read this summary.", ),]prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"{chat_history}Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"],)We can now construct the LLMChain, with the Memory object, and then create the agent.llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory)agent_chain.run(input="What is ChatGPT?") > Entering new AgentExecutor chain... Thought: I should research ChatGPT to answer this question. Action: Search Action Input: "ChatGPT" Observation: Nov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT
which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...
Thought: I now know the final answer.
Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.

> Finished chain.

"ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop.
It is also capable of sending and receiving images during chatting."

To test the memory of this agent, we can ask a followup question that relies on information in the previous
exchange to be answered correctly.

agent_chain.run(input="Who developed it?")

> Entering new AgentExecutor chain...
Thought: I need to find out who developed ChatGPT
Action: Search
Action Input: Who developed ChatGPT
Observation: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...
Thought: I now know the final answer
Final Answer: ChatGPT was developed by OpenAI.

> Finished chain.
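The follow-up works because {chat_history} injects the earlier exchange into the agent's prompt, so the model can resolve "it". A minimal sketch of that substitution, using the suffix string from this notebook and hypothetical history text (plain str.format, not LangChain's prompt machinery):

```python
# Illustrative: the prior turn is spliced into the prompt via {chat_history}.
suffix = """Begin!"

{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt_text = suffix.format(
    chat_history=(
        "Human: What is ChatGPT?\n"
        "AI: ChatGPT is an AI chatbot developed by OpenAI."
    ),
    input="Who developed it?",
    agent_scratchpad="",
)
print(prompt_text)  # the model sees the earlier answer alongside the new question
```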
'ChatGPT was developed by OpenAI.'

agent_chain.run(
    input="Thanks. Summarize the conversation, for my daughter 5 years old."
)

> Entering new AgentExecutor chain...
Thought: I need to simplify the conversation for a 5 year old.
Action: Summary
Action Input: My daughter 5 years old

> Entering new LLMChain chain...
Prompt after formatting:
This is a conversation between a human and a bot:

Human: What is ChatGPT?
AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.
Human: Who developed it?
AI: ChatGPT was developed by OpenAI.

Write a summary of the conversation for My daughter 5 years old:

> Finished chain.
Observation: The conversation was about ChatGPT, an artificial intelligence chatbot. It was created by OpenAI and can send and receive images while chatting.
Thought: I now know the final answer.
Final Answer: ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.

> Finished chain.

'ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.'

Confirm that the memory was correctly updated.

print(agent_chain.memory.buffer)

Human: What is ChatGPT?
AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.
Human: Who developed it?
AI: ChatGPT was developed by OpenAI.
Human: Thanks. Summarize the conversation, for my daughter 5 years old.
AI: ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.

For comparison, below is a bad example that uses the same memory for both the Agent and the tool.

## This is a bad practice for using the memory.
## Use the ReadOnlySharedMemory class, as shown above.
template = """This is a conversation between a human and a bot:

{chat_history}

Write a summary of the conversation for {input}:"""

prompt = PromptTemplate(input_variables=["input", "chat_history"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history")
summary_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=memory,  # <--- this is the only change
)
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Summary",
        func=summary_chain.run,
        description="useful for when you summarize a conversation. The input to this tool should be a string, representing who will read this summary.",
    ),
]
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"

{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)

agent_chain.run(input="What is ChatGPT?")

> Entering new AgentExecutor chain...
Thought: I should research ChatGPT to answer this question.
Action: Search
Action Input: "ChatGPT"
Observation: Nov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...
Thought: I now know the final answer.
Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.

> Finished chain.

"ChatGPT is
an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting."

agent_chain.run(input="Who developed it?")

> Entering new AgentExecutor chain...
Thought: I need to find out who developed ChatGPT
Action: Search
Action Input: Who developed ChatGPT
Observation: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers
focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...
Thought: I now know the final answer
Final Answer: ChatGPT was developed by OpenAI.

> Finished chain.

'ChatGPT was developed by OpenAI.'

agent_chain.run(
    input="Thanks. Summarize the conversation, for my daughter 5 years old."
)

> Entering new AgentExecutor chain...
Thought: I need to simplify the conversation for a 5 year old.
Action: Summary
Action Input: My daughter 5 years old

> Entering new LLMChain chain...
Prompt after formatting:
This is a conversation between a human and a bot:

Human: What is ChatGPT?
AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.
Human: Who developed it?
AI: ChatGPT was developed by OpenAI.

Write a summary of the conversation for My daughter 5 years old:

> Finished chain.
Observation: The conversation was about ChatGPT, an artificial intelligence chatbot developed by OpenAI. It is designed to have conversations with humans and can also send and receive images.
Thought: I now know the final answer.
Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI that can have conversations with humans and send and receive images.

> Finished chain.
because the memory was modified by the summary tool.

```python
print(agent_chain.memory.buffer)
```

Human: What is ChatGPT?
AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.
Human: Who developed it?
AI: ChatGPT was developed by OpenAI.
Human: My daughter 5 years old
AI: The conversation was about ChatGPT, an artificial intelligence chatbot developed by OpenAI. It is designed to have conversations with humans and can also send and receive images.
Human: Thanks. Summarize the conversation, for my daughter 5 years old.
AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI that can have conversations with humans and send and receive images.

Copyright © 2023 LangChain, Inc.
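The problem shown above is that a tool sharing the agent's writable memory object can inject spurious turns into the buffer. One common remedy is to hand tools a read-only view of the memory (LangChain provides a `ReadOnlySharedMemory` wrapper for this purpose). The classes below are a minimal, library-free sketch of that idea only; `BufferMemory` and `ReadOnlyMemory` are hypothetical stand-ins, not LangChain classes:

```python
class BufferMemory:
    """Toy conversation memory: an append-only list of (speaker, text) turns."""

    def __init__(self):
        self.buffer = []

    def save_turn(self, speaker, text):
        self.buffer.append((speaker, text))

    def load(self):
        return "\n".join("{}: {}".format(s, t) for s, t in self.buffer)


class ReadOnlyMemory:
    """Wraps a BufferMemory so tools can read history but never write to it."""

    def __init__(self, memory):
        self._memory = memory

    def load(self):
        return self._memory.load()

    def save_turn(self, speaker, text):
        # Silently drop writes so tool calls cannot pollute the shared buffer
        pass


shared = BufferMemory()
shared.save_turn("Human", "What is ChatGPT?")

# A summary tool given only the read-only view cannot corrupt the buffer
tool_view = ReadOnlyMemory(shared)
tool_view.save_turn("Human", "My daughter 5 years old")  # ignored

print(shared.load())  # prints only the genuine human turn
```

With this wiring, the "My daughter 5 years old" turn from the transcript above would never have entered the shared buffer.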
Returning Structured Output | 🦜️🔗 Langchain
Returning Structured Output

This notebook covers how to have an agent return a structured output. By default, most of the agents return a single string. It can often be useful to have an agent return something with more structure.

A good example of this is an agent tasked with doing question-answering over some sources. Let's say we want the agent to respond not only with the answer, but also a list of the sources used.
We then want our output to roughly follow the schema below:

```python
class Response(BaseModel):
    """Final response to the question being asked"""

    answer: str = Field(description="The final answer to respond to the user")
    sources: List[int] = Field(description="List of page chunks that contain answer to the question. Only include a page chunk if it contains relevant information")
```

In this notebook we will go over an agent that has a retriever tool and responds in the correct format.

Create the Retriever

In this section we will do some setup work to create our retriever over some mock data containing the "State of the Union" address. Importantly, we will add a "page_chunk" tag to the metadata of each document. This is just some fake data intended to simulate a source field. In practice, this would more likely be the URL or path of a document.

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import TextLoader

# Load in the document to retrieve over
loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()

# Split the document into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Here is where we add in the fake source information
for i, doc in enumerate(texts):
    doc.metadata['page_chunk'] = i

# Create our retriever
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings, collection_name="state-of-union")
retriever = vectorstore.as_retriever()
```

Create the tools

We will now create the tools we want to give to the agent. In this case, it is just one - a tool that wraps our retriever.

```python
from langchain.agents.agent_toolkits.conversational_retrieval.tool import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever,
    "state-of-union-retriever",
    "Query a retriever to get information about state of the union address",
)
```

Create response schema

Here is where we will define the response schema. In this case, we want the final answer to have two fields: one for the answer, and another that is a list of sources.

```python
from pydantic import BaseModel, Field
from typing import List
from langchain.utils.openai_functions import convert_pydantic_to_openai_function


class Response(BaseModel):
    """Final response to the question being asked"""

    answer: str = Field(description="The final answer to respond to the user")
    sources: List[int] = Field(description="List of page chunks that contain answer to the question. Only include a page chunk if it contains relevant information")
```

Create the custom parsing logic

We now create some custom parsing logic.
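The "page_chunk" tagging above is just an enumerate over the split documents. A stdlib-only sketch of the same idea, using plain dicts as illustrative stand-ins for LangChain `Document` objects:

```python
# Plain-dict stand-ins for the Document objects produced by the text splitter
texts = [
    {"page_content": "Tonight. I call on the Senate to ...", "metadata": {"source": "state_of_the_union.txt"}},
    {"page_content": "Justice Breyer, thank you for your service ...", "metadata": {"source": "state_of_the_union.txt"}},
    {"page_content": "A former top litigator in private practice ...", "metadata": {"source": "state_of_the_union.txt"}},
]

# Tag every chunk with its index, exactly as the loop above does,
# so the agent can later cite sources by page_chunk number
for i, doc in enumerate(texts):
    doc["metadata"]["page_chunk"] = i

print([d["metadata"]["page_chunk"] for d in texts])  # [0, 1, 2]
```

Any stable identifier works here; the notebook uses the chunk index purely as mock source data.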
How this works is that we will pass the Response schema to the OpenAI LLM via their functions parameter. This is similar to how we pass tools for the agent to use. When the Response function is called by OpenAI, we want to use that as a signal to return to the user.
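To make that signal concrete, here is a small, library-free sketch of what a function-call message payload roughly looks like and how the "return to the user" check can be applied. The payload shape is illustrative (modelled on the `additional_kwargs` structure used later in this notebook), not an exact API transcript:

```python
import json

# Roughly the shape of an assistant message that requests a function call
message = {
    "content": "",
    "additional_kwargs": {
        "function_call": {
            "name": "Response",
            "arguments": '{"answer": "ChatGPT was developed by OpenAI.", "sources": [31]}',
        }
    },
}

call = message["additional_kwargs"].get("function_call")
if call is None:
    # No function call: the plain content is the final answer
    result = {"final": True, "output": message["content"]}
elif call["name"] == "Response":
    # The Response function is the signal to return structured output
    result = {"final": True, "output": json.loads(call["arguments"])}
else:
    # Any other function call is treated as a tool invocation
    result = {"final": False, "tool": call["name"], "tool_input": json.loads(call["arguments"])}

print(result["output"]["sources"])  # [31]
```

The key point is that "Response" is just another function name to the model; it is our parsing code that interprets it as a stop signal.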
When any other function is called by OpenAI, we treat that as a tool invocation.

Therefore, our parsing logic has the following blocks:

- If no function is called, assume that we should use the response to respond to the user, and therefore return AgentFinish
- If the Response function is called, respond to the user with the inputs to that function (our structured output), and therefore return AgentFinish
- If any other function is called, treat that as a tool invocation, and therefore return AgentActionMessageLog

Note that we are using AgentActionMessageLog rather than AgentAction because it lets us attach a log of messages that we can use in the future to pass back into the agent prompt.

```python
from langchain.schema.agent import AgentActionMessageLog, AgentFinish
import json


def parse(output):
    # If no function was invoked, return to user
    if "function_call" not in output.additional_kwargs:
        return AgentFinish(return_values={"output": output.content}, log=output.content)

    # Parse out the function call
    function_call = output.additional_kwargs["function_call"]
    name = function_call['name']
    inputs = json.loads(function_call['arguments'])

    # If the Response function was invoked, return to the user with the function inputs
    if name == "Response":
        return AgentFinish(return_values=inputs, log=str(function_call))
    # Otherwise, return an agent action
    else:
        return AgentActionMessageLog(tool=name, tool_input=inputs, log="", message_log=[output])
```

Create the Agent

We can now put this all together! The components of this agent are:

- prompt: a simple prompt with placeholders for the user's question and then the agent_scratchpad (any intermediate steps)
- tools: we can attach the tools and Response format to the LLM as functions
- format scratchpad: in order to format the agent_scratchpad from intermediate steps, we will use the standard format_to_openai_functions. This takes intermediate steps and formats them as AIMessages and FunctionMessages.
- output parser: we will use our custom parser above to parse the response of the LLM
- AgentExecutor: we will use the standard AgentExecutor to run the loop of agent-tool-agent-tool...

```python
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.chat_models import ChatOpenAI
from langchain.tools.render import format_tool_to_openai_function
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents import AgentExecutor

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm = ChatOpenAI(temperature=0)

llm_with_tools = llm.bind(
    functions=[
        # The retriever tool
        format_tool_to_openai_function(retriever_tool),
        # Response schema
        convert_pydantic_to_openai_function(Response),
    ]
)

agent = {
    "input": lambda x: x["input"],
    # Format agent scratchpad from intermediate steps
    "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps']),
} | prompt | llm_with_tools | parse

agent_executor = AgentExecutor(tools=[retriever_tool], agent=agent, verbose=True)
```

Run the agent

We can now run the agent! Notice how it responds with a dictionary with two keys: answer and sources.

```python
agent_executor.invoke(
    {"input": "what did the president say about kentaji brown jackson"},
    return_only_outputs=True,
)
```

> Entering new AgentExecutor chain...

[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.
Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'page_chunk': 31, 'source': '../../state_of_the_union.txt'}),

Document(page_content='One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n\nWhen they came home, many of the world’s fittest and best trained warriors were never the same. \n\nHeadaches. Numbness. Dizziness. \n\nA cancer that would put them in a flag-draped coffin. \n\nI know. \n\nOne of those soldiers was my son Major Beau Biden. \n\nWe don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n\nBut I’m committed to finding out everything we can. \n\nCommitted to military families like Danielle Robinson from Ohio. \n\nThe widow of Sergeant First Class Heath Robinson. \n\nHe was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n\nStationed near Baghdad, just yards from burn pits the size of football fields. \n\nHeath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter.', metadata={'page_chunk': 37, 'source': '../../state_of_the_union.txt'}),

Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
\n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'page_chunk': 32, 'source': '../../state_of_the_union.txt'}),

Document(page_content='But cancer from prolonged exposure to burn pits ravaged Heath’s lungs and body. \n\nDanielle says Heath was a fighter to the very end. \n\nHe didn’t know how to stop fighting, and neither did she. \n\nThrough her pain she found purpose to demand we do better. \n\nTonight, Danielle—we are. \n\nThe VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n\nAnd tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers. \n\nI’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n\nAnd fourth, let’s end cancer as we know it. \n\nThis is personal to me and Jill, to Kamala, and to so many of you. \n\nCancer is the #2 cause of death in America–second only to heart disease.', metadata={'page_chunk': 38, 'source': '../../state_of_the_union.txt'})]

{'name': 'Response', 'arguments': '{\n "answer": "President mentioned Ketanji Brown Jackson as a nominee for the United States Supreme Court and praised her as one of the nation\'s top legal minds.",\n "sources": [31]\n}'}

> Finished chain.
{'answer': "President mentioned Ketanji Brown Jackson as a nominee for the United States Supreme Court and praised her as one of the nation's top legal minds.", 'sources': [31]}
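Because the final result is now a dict rather than a string, downstream code can validate it against the Response schema and build citations programmatically. A small sketch of that idea, where the dict literal simply copies the run shown above:

```python
# The structured result from the agent run above (copied verbatim)
result = {
    "answer": "President mentioned Ketanji Brown Jackson as a nominee for the United States Supreme Court and praised her as one of the nation's top legal minds.",
    "sources": [31],
}

# Enforce the Response schema by hand: a string answer plus integer source ids
assert isinstance(result["answer"], str)
assert all(isinstance(s, int) for s in result["sources"])

# With structured output, building a citation string is trivial
citation = "{} (source chunks: {})".format(result["answer"], result["sources"])
print(citation)
```

With a plain string return value, recovering the source list would have required fragile text parsing; the structured schema makes it a dictionary lookup.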
Create ChatGPT clone | 🦜️🔗 Langchain
Create ChatGPT cloneThis chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.Shows off the example as in https://www.engraved.blog/building-a-virtual-machine-inside/from langchain.llms import OpenAIfrom langchain.chains import ConversationChain, LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.memory import ConversationBufferWindowMemorytemplate = """Assistant is a large language model trained by OpenAI.Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
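The chunk above wires `ConversationBufferWindowMemory(k=2)` into the chain, which keeps only the last k exchanges in the `{history}` slot of the prompt. Here is a minimal sketch of that windowing behavior — not LangChain's actual implementation, and the `WindowMemory` name is made up for illustration:

```python
from collections import deque

class WindowMemory:
    """Toy stand-in for ConversationBufferWindowMemory(k=2): remember only
    the last k (human, ai) exchanges and render them as {history} text."""

    def __init__(self, k=2):
        self.turns = deque(maxlen=k)  # older exchanges fall off automatically

    def save(self, human, ai):
        self.turns.append((human, ai))

    def history(self):
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

memory = WindowMemory(k=2)
memory.save("pwd", "/home/user")
memory.save("ls ~", "Desktop Documents Downloads")
memory.save("cd ~", "/home/user")
print(memory.history())  # the earliest ("pwd") exchange has been dropped
```

This matches the transcript later in the notebook, where the `pwd` exchange disappears from the formatted prompt once two newer turns have been recorded.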
2,491
and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.{history}Human: {human_input}Assistant:"""prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)chatgpt_chain = LLMChain( llm=OpenAI(temperature=0), prompt=prompt, verbose=True, memory=ConversationBufferWindowMemory(k=2),)output = chatgpt_chain.predict( human_input="I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
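The `PromptTemplate` above declares two input variables, `history` and `human_input`, which are substituted into the template string on every `predict` call. A shortened sketch of that substitution, assuming plain `str.format`-style replacement (the real template carries the full "Assistant is a large language model..." preamble):

```python
# Shortened stand-in for the notebook's template.
demo_template = (
    "Assistant is a large language model.\n"
    "{history}\n"
    "Human: {human_input}\n"
    "Assistant:"
)

prompt = demo_template.format(
    history="Human: pwd\nAI: /home/user",
    human_input="ls ~",
)
print(prompt)
```

Note that braces inside the *values* — like the user's literal "{like this}" instruction — are safe, because only the template's own placeholders are substituted; values are not re-formatted.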
2,492
is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd. Assistant: > Finished chain. ``` /home/user ```output = chatgpt_chain.predict(human_input="ls ~")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
2,493
constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd. AI: ``` $ pwd / ``` Human: ls ~ Assistant: > Finished LLMChain chain. ``` $ ls ~ Desktop Documents Downloads Music Pictures Public Templates Videos ```output = chatgpt_chain.predict(human_input="cd ~")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
2,494
is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd. AI: ``` $ pwd / ``` Human: ls ~ AI: ``` $ ls ~ Desktop Documents Downloads Music Pictures Public Templates Videos ``` Human: cd ~ Assistant: > Finished LLMChain chain. ``` $ cd ~ $ pwd /home/user ```output = chatgpt_chain.predict( human_input="{Please make a file jokes.txt inside and put some jokes inside}")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives,
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
2,495
human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Human: ls ~ AI: ``` $ ls ~ Desktop Documents Downloads Music Pictures Public Templates Videos ``` Human: cd ~ AI: ``` $ cd ~ $ pwd /home/user ``` Human: {Please make a file jokes.txt inside and put some jokes inside} Assistant: > Finished LLMChain chain. ``` $ touch jokes.txt $ echo "Why did the chicken cross the road? To get to the other side!" >> jokes.txt $ echo "What did the fish say when it hit the wall? Dam!" >> jokes.txt $ echo "Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!" >> jokes.txt ```output = chatgpt_chain.predict( human_input="""echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py""")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
2,496
questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Human: cd ~ AI: ``` $ cd ~ $ pwd /home/user ``` Human: {Please make a file jokes.txt inside and put some jokes inside} AI: ``` $ touch jokes.txt $ echo "Why did the chicken cross the road? To get to the other side!" >> jokes.txt $ echo "What did the fish say when it hit the wall? Dam!" >> jokes.txt $ echo "Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!" >> jokes.txt ``` Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py Assistant: > Finished LLMChain chain. ``` $ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py $ python3 run.py Result: 33 ```output = chatgpt_chain.predict( human_input="""echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py""")print(output) >
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
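The simulated `python3 run.py` output above is one of the few spots where the model's pretend terminal can be checked against real Python — the lambda it was asked to run really does print 33:

```python
# The exact expression the transcript writes into run.py:
x = lambda y: y * 5 + 3
result = x(6)  # 6 * 5 + 3
print("Result:", result)  # Result: 33 — matching the simulated output
```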
2,497
> run.py && python3 run.py""")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Human: {Please make a file jokes.txt inside and put some jokes inside} AI: ``` $ touch jokes.txt $ echo "Why did the chicken cross the road? To get to the other side!" >> jokes.txt $ echo "What did the fish say when it hit the wall? Dam!" >> jokes.txt $ echo "Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!" >> jokes.txt ``` Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py AI: ``` $ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py $ python3 run.py Result: 33 ``` Human: echo -e
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
2,498
run.py Result: 33 ``` Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py Assistant: > Finished LLMChain chain. ``` $ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py $ python3 run.py [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] ```docker_input = """echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04\nCOPY entrypoint.sh entrypoint.sh\nENTRYPOINT [\"/bin/sh\",\"entrypoint.sh\"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image"""output = chatgpt_chain.predict(human_input=docker_input)print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic,
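The prime-filter one-liner in the transcript also produces the output the model claims. A minimal sketch of the same trial-division filter, with the upper bound shrunk from the original `3**10` (59049) to 100 so it runs quickly; the first ten primes are unchanged:

```python
# Trial division: x is prime when no d in [2, x) divides it evenly.
# all(x % d for d in range(2, x)) is vacuously True for x == 2.
primes = list(filter(lambda x: all(x % d for d in range(2, x)), range(2, 100)))[:10]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that the model happened to predict this output correctly, but a transcript like this is still a simulation; nothing guarantees the prediction matches a real interpreter.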
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory. ->: run.py Result: 33 ``` Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py Assistant: > Finished LLMChain chain. ``` $ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py $ python3 run.py [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] ```docker_input = """echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04\nCOPY entrypoint.sh entrypoint.sh\nENTRYPOINT [\"/bin/sh\",\"entrypoint.sh\"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image"""output = chatgpt_chain.predict(human_input=docker_input)print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. 
Whether you need help with a specific question or just want to have a conversation about a particular topic,
2,499
to have a conversation about a particular topic, Assistant is here to assist. Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py AI: ``` $ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py $ python3 run.py Result: 33 ``` Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py AI: ``` $ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py $ python3 run.py [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] ``` Human: echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04 COPY entrypoint.sh entrypoint.sh ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image Assistant: > Finished LLMChain chain. ``` $ echo -e "echo 'Hello from Docker" > entrypoint.sh $ echo -e "FROM ubuntu:20.04 COPY entrypoint.sh entrypoint.sh ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile $ docker build . -t my_docker_image $ docker run -t my_docker_image Hello from Docker ```output = chatgpt_chain.predict(human_input="nvidia-smi")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory. ->: to have a conversation about a particular topic, Assistant is here to assist. Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py AI: ``` $ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py $ python3 run.py Result: 33 ``` Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py AI: ``` $ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py $ python3 run.py [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] ``` Human: echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04 COPY entrypoint.sh entrypoint.sh ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image Assistant: > Finished LLMChain chain. ``` $ echo -e "echo 'Hello from Docker" > entrypoint.sh $ echo -e "FROM ubuntu:20.04 COPY entrypoint.sh entrypoint.sh ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile $ docker build . -t my_docker_image $ docker run -t my_docker_image Hello from Docker ```output = chatgpt_chain.predict(human_input="nvidia-smi")print(output) > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can