## Infino

This example shows how one can track latency, errors, and prompt/completion token usage while calling OpenAI and ChatOpenAI models via LangChain and Infino.

```python
# ... continued from the Infino callback handler setup:
#     model_version="0.1", verbose=False)

urls = [
    "https://lilianweng.github.io/posts/2023-06-23-agent/",
    "https://medium.com/lyft-engineering/lyftlearn-ml-model-training-infrastructure-built-on-kubernetes-aef8218842bb",
    "https://blog.langchain.dev/week-of-10-2-langchain-release-notes/",
]

for url in urls:
    loader = WebBaseLoader(url)
    docs = loader.load()
    llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k", callbacks=[handler])
    chain = load_summarize_chain(llm, chain_type="stuff", verbose=False)
    chain.run(docs)
```

### Create Metric Charts

```python
response = client.search_ts("__name__", "latency", 0, int(time.time()))
plot(response.text, "Latency")

response = client.search_ts("__name__", "error", 0, int(time.time()))
plot(response.text, "Errors")

response = client.search_ts("__name__", "prompt_tokens", 0, int(time.time()))
plot(response.text, "Prompt Tokens")

response = client.search_ts("__name__", "completion_tokens", 0, int(time.time()))
plot(response.text, "Completion Tokens")
```

### Full text query on prompt or prompt outputs

```python
# Search for a particular prompt text.
query = "machine learning"
response = client.search_log(query, 0, int(time.time()))

# The output can be verbose - uncomment below if it needs to be printed.
# print("Results for", query, ":", response.text)
print("===")
```

```text
===
```

### Stop Infino server

```bash
docker rm -f infino-example
```

```text
infino-example
```
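The `plot` helper used above is defined earlier in the Infino example notebook and is not part of this excerpt. A minimal sketch of what such a helper could look like, assuming `search_ts` returns a JSON list of `{"time": ..., "value": ...}` points (verify the exact response schema against your Infino client version):

```python
import json
import datetime
import matplotlib.pyplot as plt

def plot(data: str, title: str) -> None:
    """Plot an Infino time series.

    Assumes `data` is a JSON string holding a list of
    {"time": <unix seconds>, "value": <number>} points; the real
    schema should be checked against the Infino client docs.
    """
    points = json.loads(data)
    times = [datetime.datetime.fromtimestamp(p["time"]) for p in points]
    values = [p["value"] for p in points]

    plt.figure(figsize=(6, 4))
    plt.plot(times, values, marker="o")
    plt.title(title)
    plt.xlabel("Time")
    plt.ylabel(title)
    plt.tight_layout()
    plt.show()
```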
## LLMonitor

LLMonitor is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
### Setup

Create an account on llmonitor.com, then copy your new app's tracking id. Once you have it, set it as an environment variable by running:

```bash
export LLMONITOR_APP_ID="..."
```

If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:

```python
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler(app_id="...")
```

### Usage with LLM/Chat models

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

llm = OpenAI(callbacks=[handler])
chat = ChatOpenAI(callbacks=[handler])

llm("Tell me a joke")
```

### Usage with chains and agents

Make sure to pass the callback handler to the `run` method so that all related chains and LLM calls are correctly tracked. It is also recommended to pass `agent_name` in the metadata to be able to distinguish between agents in the dashboard.

Example:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage
from langchain.agents import OpenAIFunctionsAgent, AgentExecutor, tool
from langchain.callbacks import LLMonitorCallbackHandler

llm = ChatOpenAI(temperature=0)
handler = LLMonitorCallbackHandler()

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]
```
```python
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=SystemMessage(
        content="You are very powerful assistant, but bad at calculating lengths of words."
    )
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    metadata={"agent_name": "WordCount"},  # <- recommended, assign a custom name
)
agent_executor.run("how many letters in the word educa?", callbacks=[handler])
```

Another example:

```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    metadata={"agent_name": "GirlfriendAgeFinder"},  # <- recommended, assign a custom name
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
    callbacks=[handler],
)
```

### User Tracking

User tracking allows you to identify your users, track their cost, conversations and more.

```python
from langchain.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identify

with identify("user-123"):
    llm("Tell me a joke")

with identify("user-456", user_props={"email": "[email protected]"}):
    agent.run("Who is Leo DiCaprio's girlfriend?")
```

### Support

For any question or issue with the integration you can reach out to the LLMonitor team on Discord or via email.
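The run-method guidance above applies to plain chains as well as agents. A minimal sketch tracking an `LLMChain` by passing the handler at run time (the prompt text here is illustrative, not from the LLMonitor docs):

```python
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()
llm = OpenAI(temperature=0)

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a one-line summary about {topic}.",
)
chain = LLMChain(llm=llm, prompt=prompt)

# Passing the handler to `run` tracks the chain and its nested LLM call.
chain.run("observability for LLM apps", callbacks=[handler])
```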
## Label Studio

Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.
In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:

- Aggregate all input prompts, conversations, and responses in a single Label Studio project. This consolidates all the data in one place for easier labeling and analysis.
- Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM and improve its performance.
- Evaluate model responses through human feedback. Label Studio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration.

### Installation and setup

First install the latest versions of Label Studio and the Label Studio API client:

```bash
pip install -U label-studio label-studio-sdk openai
```

Next, run `label-studio` on the command line to start the local Label Studio instance at http://localhost:8080. See the Label Studio installation guide for more options.

You'll need a token to make API calls. Open your Label Studio instance in your browser, go to Account & Settings > Access Token and copy the key.

Set environment variables with your Label Studio URL, API key and OpenAI API key:
```python
import os

os.environ['LABEL_STUDIO_URL'] = '<YOUR-LABEL-STUDIO-URL>'  # e.g. http://localhost:8080
os.environ['LABEL_STUDIO_API_KEY'] = '<YOUR-LABEL-STUDIO-API-KEY>'
os.environ['OPENAI_API_KEY'] = '<YOUR-OPENAI-API-KEY>'
```

### Collecting LLM prompts and responses

The data used for labeling is stored in projects within Label Studio. Every project is identified by an XML configuration that details the specifications for input and output data. Create a project that takes human input in text format and outputs an editable LLM response in a text area:

```xml
<View>
  <Style>
    .prompt-box {
      background-color: white;
      border-radius: 10px;
      box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1);
      padding: 20px;
    }
  </Style>
  <View className="root">
    <View className="prompt-box">
      <Text name="prompt" value="$prompt"/>
    </View>
    <TextArea name="response" toName="prompt" maxSubmissions="1" editable="true" required="true"/>
  </View>
  <Header value="Rate the response:"/>
  <Rating name="rating" toName="prompt"/>
</View>
```

To create a project in Label Studio, click on the "Create" button. Enter a name for your project in the "Project Name" field, such as My Project. Navigate to Labeling Setup > Custom Template and paste the XML configuration provided above.

You can collect input LLM prompts and output responses in a Label Studio project, connecting it via LabelStudioCallbackHandler:

```python
from langchain.llms import OpenAI
from langchain.callbacks import LabelStudioCallbackHandler

llm = OpenAI(
    temperature=0,
    callbacks=[LabelStudioCallbackHandler(project_name="My Project")],
)
print(llm("Tell me a joke"))
```

In Label Studio, open My Project. You will see the prompts, responses, and metadata like the model name.

### Collecting chat model dialogues

You can also track and display full chat dialogues in Label Studio, with the ability to rate and modify the last response:
Open Label Studio and click on the "Create" button. Enter a name for your project in the "Project Name" field, such as New Project with Chat. Navigate to Labeling Setup > Custom Template and paste the following XML configuration:

```xml
<View>
  <View className="root">
    <Paragraphs name="dialogue" value="$prompt" layout="dialogue" textKey="content" nameKey="role" granularity="sentence"/>
    <Header value="Final response:"/>
    <TextArea name="response" toName="dialogue" maxSubmissions="1" editable="true" required="true"/>
  </View>
  <Header value="Rate the response:"/>
  <Rating name="rating" toName="dialogue"/>
</View>
```

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
from langchain.callbacks import LabelStudioCallbackHandler

chat_llm = ChatOpenAI(callbacks=[
    LabelStudioCallbackHandler(
        mode="chat",
        project_name="New Project with Chat",
    )
])
llm_results = chat_llm([
    SystemMessage(content="Always use a lot of emojis"),
    HumanMessage(content="Tell me a joke"),
])
```

In Label Studio, open "New Project with Chat". Click on a created task to view dialog history and edit/annotate responses.

### Custom labeling configuration

You can modify the default labeling configuration in Label Studio to add more target labels, like response sentiment, relevance, and many other kinds of annotator feedback.

A new labeling configuration can be added from the UI: go to Settings > Labeling Interface and set up a custom configuration with additional tags like Choices for sentiment or Rating for relevance. Keep in mind that a TextArea tag must be present in any configuration to display the LLM responses.

Alternatively, you can specify the labeling configuration on the initial call before project creation:

```python
ls = LabelStudioCallbackHandler(project_config='''
<View>
  <Text name="prompt" value="$prompt"/>
  <TextArea name="response" toName="prompt"/>
  <TextArea name="user_feedback" toName="prompt"/>
  <Rating name="rating" toName="prompt"/>
  <Choices name="sentiment" toName="prompt">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
  </Choices>
</View>
''')
```

Note that if the project doesn't exist, it will be created with the specified labeling configuration.

### Other parameters

The LabelStudioCallbackHandler accepts several optional parameters (a combined example follows the list):

- `api_key` - Label Studio API key. Overrides the environment variable LABEL_STUDIO_API_KEY.
- `url` - Label Studio URL. Overrides LABEL_STUDIO_URL; defaults to http://localhost:8080.
- `project_id` - Existing Label Studio project ID. Overrides LABEL_STUDIO_PROJECT_ID. Stores data in this project.
- `project_name` - Project name, used if a project ID is not specified; a new project is created with this name. Defaults to "LangChain-%Y-%m-%d" formatted with the current date.
- `project_config` - Custom labeling configuration.
- `mode` - A shortcut to create the target configuration from scratch: "prompt" (single prompt, single response; the default) or "chat" (multi-turn chat mode).
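As a consolidated illustration of these parameters, a handler might be configured like this (the values are placeholders, not from the Label Studio docs):

```python
import os
from langchain.callbacks import LabelStudioCallbackHandler

# Placeholder values; substitute your own Label Studio instance details.
handler = LabelStudioCallbackHandler(
    api_key=os.environ["LABEL_STUDIO_API_KEY"],  # passes the key explicitly
    url="http://localhost:8080",                 # the default Label Studio URL
    project_name="LangChain-experiments",        # used only when project_id is not given
    mode="chat",                                 # "prompt" (default) or "chat"
)
```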
## Confident

DeepEval package for unit testing LLMs.
Using Confident, everyone can build robust language models through faster iterations, using both unit testing and integration testing. We provide support for each step in the iteration, from synthetic data creation to testing.
In this guide we will demonstrate how to test and measure LLM performance. We show how you can use our callback to measure performance and how you can define your own metrics and log them into our dashboard.

DeepEval also offers:

- How to generate synthetic data
- How to measure performance
- A dashboard to monitor and review results over time

### Installation and Setup

```bash
pip install deepeval --upgrade
```

### Getting API Credentials

To get the DeepEval API credentials, follow the next steps:

1. Go to https://app.confident-ai.com
2. Click on "Organization"
3. Copy the API Key.

When you log in, you will also be asked to set the implementation name. The implementation name is required to describe the type of implementation. (Think of what you want to call your project. We recommend making it descriptive.)

```bash
deepeval login
```

### Setup DeepEval

You can, by default, use the DeepEvalCallbackHandler to set up the metrics you want to track. However, it has limited support for metrics at the moment (more to be added soon). It currently supports:

- Answer Relevancy
- Bias
- Toxicity

```python
from deepeval.metrics.answer_relevancy import AnswerRelevancy

# Here we want to make sure the answer is minimally relevant
answer_relevancy_metric = AnswerRelevancy(minimum_score=0.5)
```

### Get Started

To use the DeepEvalCallbackHandler, we need the `implementation_name`.

```python
import os
from langchain.callbacks.confident_callback import DeepEvalCallbackHandler

deepeval_callback = DeepEvalCallbackHandler(
    implementation_name="langchainQuickstart",
    metrics=[answer_relevancy_metric],
)
```

### Scenario 1: Feeding into LLM

You can then feed it into your LLM with OpenAI.

```python
from langchain.llms import OpenAI

llm = OpenAI(
    temperature=0,
    callbacks=[deepeval_callback],
    verbose=True,
    openai_api_key="<YOUR_API_KEY>",
)
output = llm.generate(
    [
        "What is the best evaluation tool out there? (no bias at all)",
    ]
)
```
```text
LLMResult(generations=[
  [Generation(text='\n\nQ: What did the fish say when he hit the wall? \nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})],
  [Generation(text='\n\nThe Moon \n\nThe moon is high in the midnight sky,\nSparkling like a star above.\nThe night so peaceful, so serene,\nFilling up the air with love.\n\nEver changing and renewing,\nA never-ending light of grace.\nThe moon remains a constant view,\nA reminder of life’s gentle pace.\n\nThrough time and space it guides us on,\nA never-fading beacon of hope.\nThe moon shines down on us all,\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})],
  [Generation(text='\n\nQ. What did one magnet say to the other magnet?\nA. "I find you very attractive!"', generation_info={'finish_reason': 'stop', 'logprobs': None})],
  [Generation(text="\n\nThe world is charged with the grandeur of God.\nIt will flame out, like shining from shook foil;\nIt gathers to a greatness, like the ooze of oil\nCrushed. Why do men then now not reck his rod?\n\nGenerations have trod, have trod, have trod;\nAnd all is seared with trade; bleared, smeared with toil;\nAnd wears man's smudge and shares man's smell: the soil\nIs bare now, nor can foot feel, being shod.\n\nAnd for all this, nature is never spent;\nThere lives the dearest freshness deep down things;\nAnd though the last lights off the black West went\nOh, morning, at the brown brink eastward, springs —\n\nBecause the Holy Ghost over the bent\nWorld broods with warm breast and with ah! bright wings.\n\n~Gerard Manley Hopkins", generation_info={'finish_reason': 'stop', 'logprobs': None})],
  [Generation(text='\n\nQ: What did one ocean say to the other ocean?\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})],
  [Generation(text="\n\nA poem for you\n\nOn a field of green\n\nThe sky so blue\n\nA gentle breeze, the sun above\n\nA beautiful world, for us to love\n\nLife is a journey, full of surprise\n\nFull of joy and full of surprise\n\nBe brave and take small steps\n\nThe future will be revealed with depth\n\nIn the morning, when dawn arrives\n\nA fresh start, no reason to hide\n\nSomewhere down the road, there's a heart that beats\n\nBelieve in yourself, you'll always succeed.", generation_info={'finish_reason': 'stop', 'logprobs': None})]
], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})
```

You can then check whether the metric was successful by calling the `is_successful()` method.

```python
answer_relevancy_metric.is_successful()  # returns True/False
```

Once you have run that, you should be able to see the results in our dashboard.

### Scenario 2: Tracking an LLM in a chain without callbacks

To track an LLM in a chain without callbacks, you can plug into it at the end. We can start by defining a simple chain as shown below.

```python
import requests
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

text_file_url = "https://raw.githubusercontent.com/hwchase17/chat-your-data/master/state_of_the_union.txt"
openai_api_key = "sk-XXX"

with open("state_of_the_union.txt", "w") as f:
    response = requests.get(text_file_url)
    f.write(response.text)

loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
docsearch = Chroma.from_documents(texts, embeddings)

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(openai_api_key=openai_api_key),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
)

# Providing a new question-answering pipeline
query = "Who is the president?"
result = qa.run(query)
```
After defining a chain, you can then manually check for answer similarity:

```python
answer_relevancy_metric.measure(result, query)
answer_relevancy_metric.is_successful()
```

### What's next?

You can create your own custom metrics here. DeepEval also offers other features, such as the ability to automatically create unit tests and tests for hallucination. If you are interested, check out our GitHub repository at https://github.com/confident-ai/deepeval. We welcome any PRs and discussions on how to improve LLM performance.
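Since DeepEval is aimed at unit testing LLMs, the manual check above can be wrapped in an ordinary test. A minimal sketch that reuses the `qa` chain and `answer_relevancy_metric` defined earlier, using only the `measure`/`is_successful` calls shown in this guide (the test name and assertion are illustrative):

```python
def test_qa_answer_is_relevant():
    """Fail the suite if the chain's answer drifts below the
    minimum_score threshold configured on the metric (0.5 above)."""
    query = "Who is the president?"
    result = qa.run(query)

    answer_relevancy_metric.measure(result, query)
    assert answer_relevancy_metric.is_successful()
```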
## Context
Context provides user analytics for LLM powered products and features. With Context, you can start understanding your users and improving their experiences in less than 30 minutes. In this guide we will show you how to integrate with Context.

### Installation and Setup

```bash
pip install context-python --upgrade
```

### Getting API Credentials

To get your Context API token:

1. Go to the settings page within your Context account (https://with.context.ai/settings).
2. Generate a new API Token.
3. Store this token somewhere secure.

### Setup Context

To use the ContextCallbackHandler, import the handler from Langchain and instantiate it with your Context API token. Ensure you have installed the context-python package before using the handler.

```python
import os
from langchain.callbacks import ContextCallbackHandler

token = os.environ["CONTEXT_API_TOKEN"]
context_callback = ContextCallbackHandler(token)
```

### Usage

#### Using the Context callback within a chat model

The Context callback handler can be used to directly record transcripts between users and AI assistants.

Example:

```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    SystemMessage,
    HumanMessage,
)
from langchain.callbacks import ContextCallbackHandler

token = os.environ["CONTEXT_API_TOKEN"]
chat = ChatOpenAI(
    headers={"user_id": "123"},
    temperature=0,
    callbacks=[ContextCallbackHandler(token)],
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(content="I love programming."),
]

print(chat(messages))
```
#### Using the Context callback within chains

The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.

Note: Ensure that you pass the same callback handler object to the chat model and the chain.

Wrong:

```python
chat = ChatOpenAI(temperature=0.9, callbacks=[ContextCallbackHandler(token)])
chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[ContextCallbackHandler(token)])
```

Correct:

```python
handler = ContextCallbackHandler(token)
chat = ChatOpenAI(temperature=0.9, callbacks=[handler])
chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[handler])
```

Example:

```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.callbacks import ContextCallbackHandler

token = os.environ["CONTEXT_API_TOKEN"]

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])

callback = ContextCallbackHandler(token)
chat = ChatOpenAI(temperature=0.9, callbacks=[callback])
chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])

print(chain.run("colorful socks"))
```
## PromptLayer
PromptLayer is an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to set up the PromptLayerCallbackHandler. While PromptLayer does have LLMs that integrate directly with LangChain (e.g. PromptLayerOpenAI), this callback is the recommended way to integrate PromptLayer with LangChain. See our docs for more information.

### Installation and Setup

```bash
pip install promptlayer --upgrade
```

### Getting API Credentials

If you do not have a PromptLayer account, create one on promptlayer.com. Then get an API key by clicking on the settings cog in the navbar, and set it as an environment variable called PROMPTLAYER_API_KEY.

### Usage

Getting started with PromptLayerCallbackHandler is fairly simple; it takes two optional arguments:

- `pl_tags` - an optional list of strings that will be tracked as tags on PromptLayer.
- `pl_id_callback` - an optional function that will take `promptlayer_request_id` as an argument. This ID can be used with all of PromptLayer's tracking features to track metadata, scores, and prompt usage.

### Simple OpenAI Example

In this simple example we use PromptLayerCallbackHandler with ChatOpenAI. We add a PromptLayer tag named `chatopenai`:

```python
import promptlayer  # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
)

chat_llm = ChatOpenAI(
    temperature=0,
    callbacks=[PromptLayerCallbackHandler(pl_tags=["chatopenai"])],
)
llm_results = chat_llm(
    [
        HumanMessage(content="What comes after 1,2,3 ?"),
        HumanMessage(content="Tell me another joke?"),
    ]
)
print(llm_results)
```

### GPT4All Example

```python
import promptlayer  # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.llms import GPT4All

model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)

response = model(
    "Once upon a time, ",
    callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "gpt4all"])],
)
```

### Full Featured Example

In this example we unlock more of the power of PromptLayer. PromptLayer allows you to visually create, version, and track prompt templates.
Using the Prompt Registry, we can programmatically fetch the prompt template called example.We also define a pl_id_callback function which takes in the promptlayer_request_id and logs a score, metadata and links the prompt template used. Read more about tracking on our docs.import promptlayer # Don't forget this üç∞from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.llms import OpenAIdef |
2,621 | import promptlayer  # Don't forget this 🍰
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.llms import OpenAI

def pl_id_callback(promptlayer_request_id):
    print("prompt layer id ", promptlayer_request_id)
    promptlayer.track.score(
        request_id=promptlayer_request_id, score=100
    )  # score is an integer 0-100
    promptlayer.track.metadata(
        request_id=promptlayer_request_id, metadata={"foo": "bar"}
    )  # metadata is a dictionary of key-value pairs that is tracked on PromptLayer
    promptlayer.track.prompt(
        request_id=promptlayer_request_id,
        prompt_name="example",
        prompt_input_variables={"product": "toasters"},
        version=1,
    )  # link the request to a prompt template

openai_llm = OpenAI(
    model_name="text-davinci-002",
    callbacks=[PromptLayerCallbackHandler(pl_id_callback=pl_id_callback)],
)
example_prompt = promptlayer.prompts.get("example", version=1, langchain=True)
openai_llm(example_prompt.format(product="toasters"))
That is all it takes! After setup all your requests will show up on the PromptLayer dashboard. | PromptLayer
2,622 | This callback also works with any LLM implemented on LangChain. | PromptLayer
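As a quick illustration of the credential step described above, here is a minimal sketch of setting the PROMPTLAYER_API_KEY environment variable from Python; the key value is a placeholder, not a real credential:
import os

# Placeholder value; paste the key copied from the PromptLayer settings page.
os.environ["PROMPTLAYER_API_KEY"] = "pl_..."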
2,623 | Trubrics | 🦜️🔗 Langchain | Trubrics
2,624 | Trubrics is an LLM user analytics platform that lets you collect, analyse and manage user | Trubrics
2,625 | prompts & feedback on AI models. In this guide we will go over how to set up the TrubricsCallbackHandler. Check out our repo for more information on Trubrics. Installation and Setup: pip install trubrics. Getting Trubrics Credentials: If you do not have a Trubrics account, create one here. In this tutorial, we will use the default project that is built upon account creation. Now set your credentials as environment variables:
import os

os.environ["TRUBRICS_EMAIL"] = "***@***"
os.environ["TRUBRICS_PASSWORD"] = "***"
Usage: The TrubricsCallbackHandler can receive various optional arguments. See here for kwargs that can be passed to Trubrics prompts.
class TrubricsCallbackHandler(BaseCallbackHandler):
    """
    Callback handler for Trubrics.

    Args:
        project: a trubrics project, default project is "default"
        email: a trubrics account email, can equally be set in env variables
        password: a trubrics account password, can equally be set in env variables
        **kwargs: all other kwargs are parsed and set to trubrics prompt variables,
            or added to the `metadata` dict
    """
Examples: Here are two examples of how to use the TrubricsCallbackHandler with LangChain LLMs or Chat Models. We will use OpenAI models, so set your OPENAI_API_KEY here:
os.environ["OPENAI_API_KEY"] = "sk-***"
1. With an LLM:
from langchain.llms import OpenAI
from langchain.callbacks import TrubricsCallbackHandler

llm = OpenAI(callbacks=[TrubricsCallbackHandler()])
2023-09-26 11:30:02.149 | INFO | trubrics.platform.auth:get_trubrics_auth_token:61 - User [email protected] has been authenticated.
res = llm.generate(["Tell me a joke", "Write me a poem"])
2023-09-26 11:30:07.760 | INFO | trubrics.platform:log_prompt:102 - User prompt saved to Trubrics.
2023-09-26 11:30:08.042 | INFO | trubrics.platform:log_prompt:102 | Trubrics
2,626 | - User prompt saved to Trubrics.
print("--> GPT's joke: ", res.generations[0][0].text)
print()
print("--> GPT's poem: ", res.generations[1][0].text)
--> GPT's joke: Q: What did the fish say when it hit the wall? A: Dam!
--> GPT's poem: A Poem of Reflection
I stand here in the night,
The stars above me filling my sight.
I feel such a deep connection,
To the world and all its perfection.
A moment of clarity,
The calmness in the air so serene.
My mind is filled with peace,
And I am released.
The past and the present,
My thoughts create a pleasant sentiment.
My heart is full of joy,
My soul soars like a toy.
I reflect on my life,
And the choices I have made.
My struggles and my strife,
The lessons I have paid.
The future is a mystery,
But I am ready to take the leap.
I am ready to take the lead,
And to create my own destiny.
2. With a chat model:
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from langchain.callbacks import TrubricsCallbackHandler

chat_llm = ChatOpenAI(
    callbacks=[
        TrubricsCallbackHandler(
            project="default",
            tags=["chat model"],
            user_id="user-id-1234",
            some_metadata={"hello": [1, 2]},
        )
    ]
)
chat_res = chat_llm(
    [
        SystemMessage(content="Every answer of yours must be about OpenAI."),
        HumanMessage(content="Tell me a joke"),
    ]
)
2023-09-26 11:30:10.550 | INFO | trubrics.platform:log_prompt:102 - User prompt saved to Trubrics.
print(chat_res.content)
Why did the OpenAI computer go to the party? Because it wanted to meet its AI friends and have a byte of fun! | Trubrics
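For completeness, a small sketch of passing credentials directly to the handler instead of through environment variables, based on the constructor arguments shown in the docstring above (the email and password values are placeholders):
from langchain.callbacks import TrubricsCallbackHandler

# project/email/password mirror the documented constructor arguments;
# any extra kwargs become Trubrics prompt variables or metadata.
handler = TrubricsCallbackHandler(
    project="default",
    email="***@***",
    password="***",
)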
2,628 | SageMaker Tracking | 🦜️🔗 Langchain | This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:
2,629 | This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability: Scenario 1: Single LLM - a case where a single LLM model is used to generate output based on a given prompt. Scenario 2: Sequential Chain - a case where a sequential chain of two LLM models is used. Scenario 3: Agent with Tools (Chain of Thought) - a case where multiple tools (search and math) are used in addition to an LLM. Amazon SageMaker is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. Amazon SageMaker Experiments is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions. In this notebook, we will create a single experiment to log the prompts from each scenario.
Installation and Setup:
pip install sagemaker
pip install openai
pip install google-search-results
First, set up the required API keys: OpenAI: https://platform.openai.com/account/api-keys (for the OpenAI LLM model); Google SERP API: https://serpapi.com/manage-api-key (for the Google Search tool).
import os

## Add your API keys below
os.environ["OPENAI_API_KEY"] = "<ADD-KEY-HERE>"
os.environ["SERPAPI_API_KEY"] = "<ADD-KEY-HERE>"

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.agents import initialize_agent, load_tools | This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:
2,630 | from langchain.agents import Tool
from langchain.callbacks import SageMakerCallbackHandler
from sagemaker.analytics import ExperimentAnalytics
from sagemaker.session import Session
from sagemaker.experiments.run import Run
LLM Prompt Tracking:
# LLM hyperparameters
HPARAMS = {
    "temperature": 0.1,
    "model_name": "text-davinci-003",
}
# Bucket used to save prompt logs (use `None` to use the default bucket, or change it otherwise)
BUCKET_NAME = None
# Experiment name
EXPERIMENT_NAME = "langchain-sagemaker-tracker"
# Create SageMaker Session with the given bucket
session = Session(default_bucket=BUCKET_NAME)
Scenario 1 - LLM:
RUN_NAME = "run-scenario-1"
PROMPT_TEMPLATE = "tell me a joke about {topic}"
INPUT_VARIABLES = {"topic": "fish"}

with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:
    # Create SageMaker Callback
    sagemaker_callback = SageMakerCallbackHandler(run)
    # Define LLM model with callback
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    # Create prompt template
    prompt = PromptTemplate.from_template(template=PROMPT_TEMPLATE)
    # Create LLM Chain
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=[sagemaker_callback])
    # Run chain
    chain.run(**INPUT_VARIABLES)
    # Reset the callback
    sagemaker_callback.flush_tracker()
Scenario 2 - Sequential Chain:
RUN_NAME = "run-scenario-2"
PROMPT_TEMPLATE_1 = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
PROMPT_TEMPLATE_2 = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis: {synopsis}
Review from a New York Times play critic of the above play:"""
INPUT_VARIABLES = {
    "input": "documentary about good video games that push the boundary of game design"
}
with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, | This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:
2,631 | sagemaker_session=session) as run:
    # Create SageMaker Callback
    sagemaker_callback = SageMakerCallbackHandler(run)
    # Create prompt templates for the chain
    prompt_template1 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_1)
    prompt_template2 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_2)
    # Define LLM model with callback
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    # Create chain1
    chain1 = LLMChain(llm=llm, prompt=prompt_template1, callbacks=[sagemaker_callback])
    # Create chain2
    chain2 = LLMChain(llm=llm, prompt=prompt_template2, callbacks=[sagemaker_callback])
    # Create Sequential chain
    overall_chain = SimpleSequentialChain(chains=[chain1, chain2], callbacks=[sagemaker_callback])
    # Run overall sequential chain
    overall_chain.run(**INPUT_VARIABLES)
    # Reset the callback
    sagemaker_callback.flush_tracker()
Scenario 3 - Agent with Tools:
RUN_NAME = "run-scenario-3"
PROMPT_TEMPLATE = "Who is the oldest person alive? And what is their current age raised to the power of 1.51?"
with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:
    # Create SageMaker Callback
    sagemaker_callback = SageMakerCallbackHandler(run)
    # Define LLM model with callback
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    # Define tools
    tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[sagemaker_callback])
    # Initialize agent with all the tools
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", callbacks=[sagemaker_callback])
    # Run agent
    agent.run(input=PROMPT_TEMPLATE)
    # Reset the callback
    sagemaker_callback.flush_tracker()
Load Log Data: Once the prompts are logged, we can easily load and convert them to a Pandas DataFrame as follows.
# Load
logs = ExperimentAnalytics(experiment_name=EXPERIMENT_NAME)
# Convert to a pandas dataframe
df = logs.dataframe(force_refresh=True)
print(df.shape)
df.head() | This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:
2,632 | As can be seen above, there are three runs (rows) in the experiment, one for each scenario. Each run logs the prompts and related LLM settings/hyperparameters as JSON, saved in an S3 bucket. Feel free to load and explore the log data from each JSON path. | This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:
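As a quick illustration of working with the loaded logs, here is a sketch of narrowing the DataFrame to a single scenario. The "TrialComponentName" column name is an assumption about the ExperimentAnalytics output; check df.columns in your environment and adjust:
# Assumption: the analytics frame exposes a "TrialComponentName" column that
# embeds the run name; adjust to whatever df.columns shows for you.
scenario_1 = df[df["TrialComponentName"].str.contains("run-scenario-1", na=False)]
print(scenario_1.shape)
scenario_1.head()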
2,633 | Argilla | 🦜️🔗 Langchain | Argilla - Open-source data platform for LLMs
2,634 | Argilla is an open-source data curation platform for LLMs. Using Argilla, everyone can build robust language models through faster data curation using both human and machine feedback. We provide support for each step in the MLOps cycle, | Argilla - Open-source data platform for LLMs
2,635 | from data labeling to model monitoring. In this guide we will demonstrate how to track the inputs and responses of your LLM to generate a dataset in Argilla, using the ArgillaCallbackHandler. It's useful to keep track of the inputs and outputs of your LLMs to generate datasets for future fine-tuning. This is especially useful when you're using an LLM to generate data for a specific task, such as question answering, summarization, or translation.
Installation and Setup:
pip install argilla --upgrade
pip install openai
Getting API Credentials: To get the Argilla API credentials, follow these steps: go to your Argilla UI, click on your profile picture and go to "My settings", then copy the API Key. In Argilla the API URL will be the same as the URL of your Argilla UI. To get the OpenAI API credentials, please visit https://platform.openai.com/account/api-keys
import os

os.environ["ARGILLA_API_URL"] = "..."
os.environ["ARGILLA_API_KEY"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
Setup Argilla: To use the ArgillaCallbackHandler we will need to create a new FeedbackDataset in Argilla to keep track of your LLM experiments. To do so, please use the following code:
import argilla as rg
from packaging.version import parse as parse_version

if parse_version(rg.__version__) < parse_version("1.8.0"):
    raise RuntimeError(
        "`FeedbackDataset` is only available in Argilla v1.8.0 or higher, please "
        "upgrade `argilla` as `pip install argilla --upgrade`."
    )

dataset = rg.FeedbackDataset(
    fields=[
        rg.TextField(name="prompt"),
        rg.TextField(name="response"),
    ],
    questions=[
        rg.RatingQuestion(
            name="response-rating",
            description="How would you rate the quality of the response?",
            values=[1, 2, 3, 4, 5],
            required=True,
        ),
        rg.TextQuestion(
            name="response-feedback",
            description="What feedback do you have for the response?",
            required=False,
        ),
    ], | Argilla - Open-source data platform for LLMs
2,636 |     guidelines="You're asked to rate the quality of the response and provide feedback.",
)
rg.init(
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
dataset.push_to_argilla("langchain-dataset")
📌 NOTE: at the moment, just the prompt-response pairs are supported as FeedbackDataset.fields, so the ArgillaCallbackHandler will just track the prompt, i.e. the LLM input, and the response, i.e. the LLM output.
Tracking: To use the ArgillaCallbackHandler you can either use the following code, or just reproduce one of the examples presented in the following sections.
from langchain.callbacks import ArgillaCallbackHandler

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
Scenario 1: Tracking an LLM. First, let's just run a single LLM a few times and capture the resulting prompt-response pairs in Argilla.
from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
callbacks = [StdOutCallbackHandler(), argilla_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when he hit the wall? \nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nThe Moon \n\nThe moon is high in the midnight sky,\nSparkling like a star above.\nThe night so peaceful, so serene,\nFilling up the air with love.\n\nEver changing and renewing,\nA never-ending light of grace.\nThe moon remains a constant view,\nA reminder of life's gentle pace.\n\nThrough time and space it guides us on,\nA never-fading | Argilla - Open-source data platform for LLMs
2,637 | beacon of hope.\nThe moon shines down on us all,\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nQ. What did one magnet say to the other magnet?\nA. "I find you very attractive!"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text="\n\nThe world is charged with the grandeur of God.\nIt will flame out, like shining from shook foil;\nIt gathers to a greatness, like the ooze of oil\nCrushed. Why do men then now not reck his rod?\n\nGenerations have trod, have trod, have trod;\nAnd all is seared with trade; bleared, smeared with toil;\nAnd wears man's smudge and shares man's smell: the soil\nIs bare now, nor can foot feel, being shod.\n\nAnd for all this, nature is never spent;\nThere lives the dearest freshness deep down things;\nAnd though the last lights off the black West went\nOh, morning, at the brown brink eastward, springs —\n\nBecause the Holy Ghost over the bent\nWorld broods with warm breast and with ah! bright wings.\n\n~Gerard Manley Hopkins", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nQ: What did one ocean say to the other ocean?\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text="\n\nA poem for you\n\nOn a field of green\n\nThe sky so blue\n\nA gentle breeze, the sun above\n\nA beautiful world, for us to love\n\nLife is a journey, full of surprise\n\nFull of joy and full of surprise\n\nBe brave and take small steps\n\nThe future will be revealed with depth\n\nIn the morning, when dawn arrives\n\nA fresh start, no reason to hide\n\nSomewhere down the road, there's a heart that beats\n\nBelieve in yourself, you'll always succeed.", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'}) | Argilla - Open-source data platform for LLMs
2,638 | Scenario 2: Tracking an LLM in a chain. Then we can create a chain using a prompt template, and then track the initial prompt and the final response in Argilla.
from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
callbacks = [StdOutCallbackHandler(), argilla_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)

template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
synopsis_chain.apply(test_prompts)
> Entering new LLMChain chain...
Prompt after formatting:
You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: Documentary about Bigfoot in Paris
Playwright: This is a synopsis for the above play:
> Finished chain.
[{'text': "\n\nDocumentary about Bigfoot in Paris focuses on the story of a documentary filmmaker and their search for evidence of the legendary Bigfoot creature in the city of Paris. The play follows the filmmaker as they explore the city, meeting people from all walks of life who have had encounters with the mysterious creature. Through their conversations, the filmmaker unravels the story of Bigfoot and finds out the truth about the creature's presence in Paris. As the story progresses, the filmmaker learns more and more about the mysterious creature, as well as the | Argilla - Open-source data platform for LLMs
2,639 | different perspectives of the people living in the city, and what they think of the creature. In the end, the filmmaker's findings lead them to some surprising and heartwarming conclusions about the creature's existence and the importance it holds in the lives of the people in Paris."}]
Scenario 3: Using an Agent with Tools. Finally, as a more advanced workflow, you can create an agent that uses some tools. The ArgillaCallbackHandler will keep track of the input and the output, but not of the intermediate steps/thoughts, so given a prompt we log the original prompt and the final response to that prompt.
Note that for this scenario we'll be using the Google Search API (Serp API), so you will need to install google-search-results as pip install google-search-results, and to set the Serp API key as os.environ["SERPAPI_API_KEY"] = "..." (you can find it at https://serpapi.com/dashboard); otherwise the example below won't work.
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
callbacks = [StdOutCallbackHandler(), argilla_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
tools = load_tools(["serpapi"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=callbacks,
)
agent.run("Who was the first president of the United States of America?")
> Entering new AgentExecutor chain...
I need to answer a historical question
Action: Search
Action Input: "who was the first president of the United States of America"
Observation: George Washington
Thought: George Washington was the first president
Final Answer: | Argilla - Open-source data platform for LLMs
2,640 | George Washington was the first president of the United States of America.
> Finished chain.
'George Washington was the first president of the United States of America.' | Argilla - Open-source data platform for LLMs
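Once a few runs are tracked, you may want to pull the records back out of Argilla for inspection. A sketch, assuming the FeedbackDataset.from_argilla API available in argilla v1.8+ and the dataset name used throughout this guide:
import argilla as rg

# Fetch the remote dataset and print the tracked prompt/response pairs.
remote_dataset = rg.FeedbackDataset.from_argilla("langchain-dataset")
for record in remote_dataset.records:
    print(record.fields["prompt"], "->", record.fields["response"])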
2,641 | MongoDB | 🦜️🔗 Langchain
MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL). - Wikipedia
This notebook goes over how to use MongoDB to store chat message history.
Setting up:
pip install pymongo
# Provide the connection string to connect to the MongoDB database
connection_string = "mongodb://mongo_user:password123@mongo:27017"
Example:
from langchain.memory import MongoDBChatMessageHistory

message_history = MongoDBChatMessageHistory(
    connection_string=connection_string, session_id="test-session"
)
message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
message_history.messages
[HumanMessage(content='hi!', additional_kwargs={}, example=False),
 AIMessage(content='whats up?', additional_kwargs={}, example=False)] | MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas.
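Histories are scoped to a session_id, and the history object also exposes a clear() method for wiping a session; a small sketch reusing the message_history from the example above:
# Remove all stored messages for this session_id, then verify the store is empty.
message_history.clear()
print(message_history.messages)  # -> []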
2,642 | Postgres | 🦜️🔗 Langchain
PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.
This notebook goes over how to use Postgres to store chat message history.
from langchain.memory import PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
    session_id="foo",
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages | PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.
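To use this Postgres-backed history inside a chain, it can be plugged into a standard memory class via the chat_memory argument; a sketch, assuming an OpenAI key is configured:
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# Back the in-memory buffer with the Postgres-stored history defined above.
memory = ConversationBufferMemory(chat_memory=history, return_messages=True)
conversation = ConversationChain(llm=OpenAI(temperature=0), memory=memory)
conversation.run("What was the first thing I said?")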
2,643 | Rockset | 🦜️🔗 Langchain | Rockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.
2,644 | Rockset | ü¶úÔ∏èüîó Langchain
Rockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters. This notebook goes over how to use Rockset to store chat message history.

Setting up

pip install rockset

To begin, get your API key from the Rockset console. Find your API region in the Rockset API reference.

Example

from langchain.memory.chat_message_histories import RocksetChatMessageHistory
from rockset import RocksetClient, Regions

history = RocksetChatMessageHistory(
    session_id="MySession",
    client=RocksetClient(
        api_key="YOUR API KEY",
        host=Regions.usw2a1,  # us-west-2 Oregon
    ),
    collection="langchain_demo",
    sync=True,
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)

The output should be something like:

[
    HumanMessage(content='hi!', additional_kwargs={'id': '2e62f1c2-e9f7-465e-b551-49bae07fe9f0'}, example=False),
    AIMessage(content='whats up?', additional_kwargs={'id': 'b9be8eda-4c18-4cf8-81c3-e91e876927d0'}, example=False)
] | Rockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters. |
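Since each RocksetChatMessageHistory is keyed by session_id, separate sessions stored in the same collection stay isolated from one another. A minimal sketch under that assumption, reusing the client settings from above (the session IDs are illustrative):

from langchain.memory.chat_message_histories import RocksetChatMessageHistory
from rockset import RocksetClient, Regions

client = RocksetClient(api_key="YOUR API KEY", host=Regions.usw2a1)

# Two histories in the same collection, keyed by different session IDs.
alice = RocksetChatMessageHistory(
    session_id="alice-session", client=client, collection="langchain_demo", sync=True
)
bob = RocksetChatMessageHistory(
    session_id="bob-session", client=client, collection="langchain_demo", sync=True
)

alice.add_user_message("my favorite color is green")
print(bob.messages)  # expected: [] -- bob's session sees none of alice's messages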
2,645 | Xata | 🦜️🔗 Langchain | Xata is a serverless data platform, based on PostgreSQL and Elasticsearch. It provides a Python SDK for interacting with your database, and a UI for managing your data. With the XataChatMessageHistory class, you can use Xata databases for longer-term persistence of chat sessions. |
2,646 | Xata is a serverless data platform, based on PostgreSQL and Elasticsearch. It provides a Python SDK for interacting with your database, and a UI for managing your data. With the XataChatMessageHistory class, you can use Xata databases for longer-term persistence of chat sessions.

This notebook covers:

A simple example showing what XataChatMessageHistory does.
A more complex example using a ReAct agent that answers questions based on a knowledge base or documentation (stored in Xata as a vector store) and that also has a long-term searchable history of its past messages (stored in Xata as a memory store).

Setup

Create a database

In the Xata UI, create a new database. You can name it whatever you want; in this notebook we'll use langchain. The LangChain integration can auto-create the table used for storing the memory, and this is what we'll use in this example. If you want to pre-create the table, ensure it has the right schema and set create_table to False when creating the class. Pre-creating the table saves one round-trip to the database during each session initialization.

Let's first install our dependencies:

pip install xata openai langchain

Next, we need to get the environment variables for Xata. You can create a new API key by visiting your account settings. To find the database URL, go to the Settings page of the database that you have created. The database URL should look something like this: | Xata is a serverless data platform, based on PostgreSQL and Elasticsearch. It provides a Python SDK for interacting with your database, and a UI for managing your data. With the XataChatMessageHistory class, you can use Xata databases for longer-term persistence of chat sessions. |
2,647 | The database URL should look something like this: https://demo-uni3q8.eu-west-1.xata.sh/db/langchain.

import getpass

api_key = getpass.getpass("Xata API key: ")
db_url = input("Xata database URL (copy it from your DB settings):")

Create a simple memory store

To test the memory store functionality in isolation, let's use the following code snippet:

from langchain.memory import XataChatMessageHistory

history = XataChatMessageHistory(
    session_id="session-1", api_key=api_key, db_url=db_url, table_name="memory"
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")

The above code creates a session with the ID session-1 and stores two messages in it. After running the above, if you visit the Xata UI, you should see a table named memory and the two messages added to it.

You can retrieve the message history for a particular session with the following code:

history.messages

Conversational Q&A chain on your data with memory

Let's now see a more complex example in which we combine OpenAI, the Xata Vector Store integration, and the Xata memory store integration to create a Q&A chatbot on your data, with follow-up questions and history.

We're going to need to access the OpenAI API, so let's configure the API key:

import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

To store the documents that the chatbot will search for answers, add a table named docs to your langchain database using the Xata UI, and add the following columns:

content of type "Text". This is used to store the Document.pageContent values.
embedding of type "Vector". Use the dimension used by the model you plan to use. In this notebook we use OpenAI embeddings, which have 1536 dimensions.

Let's create the vector store and add some sample docs to it:

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.xata import XataVectorStore

embeddings = OpenAIEmbeddings()
texts = [
    "Xata is a Serverless Data platform based on PostgreSQL",
    "Xata offers a | Xata is a serverless data platform, based on PostgreSQL and Elasticsearch. It provides a Python SDK for interacting with your database, and a UI for managing your data. With the XataChatMessageHistory class, you can use Xata databases for longer-term persistence of chat sessions. |
2,648 | platform based on PostgreSQL",
    "Xata offers a built-in vector type that can be used to store and query vectors",
    "Xata includes similarity search",
]
vector_store = XataVectorStore.from_texts(
    texts, embeddings, api_key=api_key, db_url=db_url, table_name="docs"
)

After running the above command, if you go to the Xata UI, you should see the documents loaded together with their embeddings in the docs table.

Let's now create a ConversationBufferMemory to store the chat messages from both the user and the AI.

from langchain.memory import ConversationBufferMemory
from uuid import uuid4

chat_memory = XataChatMessageHistory(
    session_id=str(uuid4()),  # needs to be unique per user session
    api_key=api_key,
    db_url=db_url,
    table_name="memory",
)
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=chat_memory, return_messages=True
)

Now it's time to create an Agent to use both the vector store and the chat memory together.

from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.chat_models import ChatOpenAI

tool = create_retriever_tool(
    vector_store.as_retriever(),
    "search_docs",
    "Searches and returns documents from the Xata manual. Useful when you need to answer questions about Xata.",
)
tools = [tool]
llm = ChatOpenAI(temperature=0)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)

To test, let's tell the agent our name:

agent.run(input="My name is bob")

Now, let's ask the agent some questions about Xata:

agent.run(input="What is xata?")

Notice that it answers based on the data stored in the document store. And now, let's ask a follow-up question:

agent.run(input="Does it support similarity search?")

And now let's test its memory:

agent.run(input="Did I tell you my name? What is it?") | Xata is a serverless data platform, based on PostgreSQL and Elasticsearch. It provides a Python SDK for interacting with your database, and a UI for managing your data. With the XataChatMessageHistory class, you can use Xata databases for longer-term persistence of chat sessions. |
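Because each message row in the memory table carries its session ID, a past conversation can be resumed later by constructing the history again with the same session_id. A minimal sketch under that assumption, reusing the api_key and db_url collected earlier and the session-1 session from the simple example:

from langchain.memory import XataChatMessageHistory

# Re-attach to the session created in the simple example above.
resumed = XataChatMessageHistory(
    session_id="session-1",
    api_key=api_key,
    db_url=db_url,
    table_name="memory",
)
print(resumed.messages)  # expected: the earlier "hi!" / "whats up?" messages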
2,650 | Zep | 🦜️🔗 Langchain | Zep is a long-term memory store for LLM applications. |
2,651 | Zep

Zep is a long-term memory store for LLM applications. Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.

Key Features:

Fast! Zep's async extractors operate independently of your chat loop, ensuring a snappy user experience.
Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
Hybrid search over memories and metadata, with messages automatically embedded upon creation.
Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
Python and JavaScript SDKs.

Zep project: https://github.com/getzep/zep
Docs: https://docs.getzep.com/

Example

This notebook demonstrates how to use the Zep Long-term Memory Store as memory for your chatbot. | Zep is a long-term memory store for LLM applications. |
2,652 | REACT Agent Chat Message History with Zep - A long-term memory store for LLM applications.

We'll demonstrate:

Adding conversation history to the Zep memory store.
Running an agent and having messages automatically added to the store.
Viewing the enriched messages.
Vector search over the conversation history.

from langchain.memory import ZepMemory
from langchain.retrievers import ZepRetriever
from langchain.llms import OpenAI
from langchain.schema import HumanMessage, AIMessage
from langchain.utilities import WikipediaAPIWrapper
from langchain.agents import initialize_agent, AgentType, Tool
from uuid import uuid4

# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"

session_id = str(uuid4())  # This is a unique identifier for the user

# Provide your OpenAI key
import getpass

openai_key = getpass.getpass()

# Provide your Zep API key. Note that this is optional. See https://docs.getzep.com/deployment/auth
zep_api_key = getpass.getpass()

Initialize the Zep Chat Message History Class and initialize the Agent

search = WikipediaAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to search online for answers. You should ask targeted questions",
    ),
]

# Set up Zep Chat History
memory = ZepMemory(
    session_id=session_id,
    url=ZEP_API_URL,
    api_key=zep_api_key,
    memory_key="chat_history",
)

# Initialize the agent
llm = OpenAI(temperature=0, openai_api_key=openai_key)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)

Add some history data

# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
    {
        "role": "ai",
        "content": (
            "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
            " science fiction | Zep is a long-term memory store for LLM applications. |
2,653 | was an American"
            " science fiction author."
        ),
    },
    {"role": "human", "content": "Which books of hers were made into movies?"},
    {
        "role": "ai",
        "content": (
            "The most well-known adaptation of Octavia Butler's work is the FX series"
            " Kindred, based on her novel of the same name."
        ),
    },
    {"role": "human", "content": "Who were her contemporaries?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
            " Delany, and Joanna Russ."
        ),
    },
    {"role": "human", "content": "What awards did she win?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
            " Fellowship."
        ),
    },
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
        "metadata": {"foo": "bar"},
    },
]

for msg in test_history:
    memory.chat_memory.add_message(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"]),
        metadata=msg.get("metadata", {}),
    )

Run the agent

Doing so will automatically add the input and response to the Zep memory.

agent_chain.run(
    input="What is the | Zep is a long-term memory store for LLM applications. |
2,654 | Zep memory.

agent_chain.run(
    input="What is the book's relevance to the challenges facing contemporary society?",
)

> Entering new chain...
Thought: Do I need to use a tool? No
AI: Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, inequality, and violence. It is a cautionary tale that warns of the dangers of unchecked greed and the need for individuals to take responsibility for their own lives and the lives of those around them.
> Finished chain.

'Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, inequality, and violence. It is a cautionary tale that warns of the dangers of unchecked greed and the need for individuals to take responsibility for their own lives and the lives of those around them.'

Inspect the Zep memory

Note the summary, and that the history has been enriched with token counts, UUIDs, and timestamps. Summaries are biased towards the most recent messages.

def print_messages(messages):
    for m in messages:
        print(m.type, ":\n", m.dict())

print(memory.chat_memory.zep_summary)
print("\n")
print_messages(memory.chat_memory.messages)

The human inquires about Octavia Butler. The AI identifies her as an American science fiction author. The human then asks which books of hers were made into movies. The AI responds by mentioning the FX series Kindred, based on her novel of the same name. The human then asks about her contemporaries, and the AI lists Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.

system : {'content': 'The human inquires about Octavia Butler. The AI identifies her as an American science fiction author. The human then asks which books of hers were made into movies. The AI responds by mentioning the FX series Kindred, based on her novel of the same name. The human then asks about her contemporaries, and the AI lists Ursula K. Le Guin, Samuel | Zep is a long-term memory store for LLM applications. |
2,655 | and the AI lists Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.', 'additional_kwargs': {}}

human : {'content': 'What awards did she win?', 'additional_kwargs': {'uuid': '6b733f0b-6778-49ae-b3ec-4e077c039f31', 'created_at': '2023-07-09T19:23:16.611232Z', 'token_count': 8, 'metadata': {'system': {'entities': [], 'intent': 'The subject is inquiring about the awards that someone, whose identity is not specified, has won.'}}}, 'example': False}

ai : {'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'additional_kwargs': {'uuid': '2f6d80c6-3c08-4fd4-8d4e-7bbee341ac90', 'created_at': '2023-07-09T19:23:16.618947Z', 'token_count': 21, 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 14, 'Start': 0, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 33, 'Start': 19, 'Text': 'the Hugo Award'}], 'Name': 'the Hugo Award'}, {'Label': 'EVENT', 'Matches': [{'End': 81, 'Start': 57, 'Text': 'the MacArthur Fellowship'}], 'Name': 'the MacArthur Fellowship'}], 'intent': 'The subject is stating that Octavia Butler received the Hugo Award, the Nebula Award, and the MacArthur Fellowship.'}}}, 'example': False}

human : {'content': 'Which other women sci-fi writers might I want to read?', 'additional_kwargs': {'uuid': 'ccdcc901-ea39-4981-862f-6fe22ab9289b', 'created_at': '2023-07-09T19:23:16.62678Z', 'token_count': 14, 'metadata': {'system': {'entities': [], 'intent': 'The subject is seeking recommendations for additional women science fiction writers to explore.'}}}, 'example': False}

ai : {'content': 'You might want to read Ursula K. Le Guin or Joanna Russ.', 'additional_kwargs': {'uuid': '7977099a-0c62-4c98-bfff-465bbab6c9c3', 'created_at': '2023-07-09T19:23:16.631721Z', 'token_count': 18, 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le | Zep is a long-term memory store for LLM applications. |
2,656 | 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': 'The subject is suggesting that the person should consider reading the works of Ursula K. Le Guin or Joanna Russ.'}}}, 'example': False}

human : {'content': "Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", 'additional_kwargs': {'uuid': 'e439b7e6-286a-4278-a8cb-dc260fa2e089', 'created_at': '2023-07-09T19:23:16.63623Z', 'token_count': 23, 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}], 'intent': 'The subject is requesting a brief summary or explanation of the book "Parable of the Sower" by Butler.'}}}, 'example': False}

ai : {'content': 'Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', 'additional_kwargs': {'uuid': '6760489b-19c9-41aa-8b45-fae6cb1d7ee6', 'created_at': '2023-07-09T19:23:16.647524Z', 'token_count': 56, 'metadata': {'foo': 'bar', 'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}], 'intent': 'The subject is providing information about the novel "Parable of the Sower" by Octavia Butler, including its genre, publication date, and a brief summary of the | Zep is a long-term memory store for LLM applications. |
2,657 | publication date, and a brief summary of the plot.'}}}, 'example': False}

human : {'content': "What is the book's relevance to the challenges facing contemporary society?", 'additional_kwargs': {'uuid': '7dbbbb93-492b-4739-800f-cad2b6e0e764', 'created_at': '2023-07-09T19:23:19.315182Z', 'token_count': 15, 'metadata': {'system': {'entities': [], 'intent': 'The subject is asking about the relevance of a book to the challenges currently faced by society.'}}}, 'example': False}

ai : {'content': 'Parable of the Sower is a prescient novel that speaks to the challenges facing contemporary society, such as climate change, inequality, and violence. It is a cautionary tale that warns of the dangers of unchecked greed and the need for individuals to take responsibility for their own lives and the lives of those around them.', 'additional_kwargs': {'uuid': '3e14ac8f-b7c1-4360-958b-9f3eae1f784f', 'created_at': '2023-07-09T19:23:19.332517Z', 'token_count': 66, 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}], 'intent': 'The subject is providing an analysis and evaluation of the novel "Parable of the Sower" and highlighting its relevance to contemporary societal challenges.'}}}, 'example': False}

Vector search over the Zep memory

Zep provides native vector search over historical conversation memory via the ZepRetriever. You can use the ZepRetriever with chains that support passing in a LangChain Retriever object.

retriever = ZepRetriever(
    session_id=session_id,
    url=ZEP_API_URL,
    api_key=zep_api_key,
)

search_results = memory.chat_memory.search("who are some famous women sci-fi authors?")
for r in search_results:
    if r.dist > 0.8:  # Only print results with similarity of 0.8 or higher
        print(r.message, r.dist)

{'uuid': 'ccdcc901-ea39-4981-862f-6fe22ab9289b', 'created_at': '2023-07-09T19:23:16.62678Z', 'role': 'human', 'content': 'Which other women sci-fi writers might I want | Zep is a long-term memory store for LLM applications. |
2,658 | 'Which other women sci-fi writers might I want to read?', 'metadata': {'system': {'entities': [], 'intent': 'The subject is seeking recommendations for additional women science fiction writers to explore.'}}, 'token_count': 14} 0.9119619869747062

{'uuid': '7977099a-0c62-4c98-bfff-465bbab6c9c3', 'created_at': '2023-07-09T19:23:16.631721Z', 'role': 'ai', 'content': 'You might want to read Ursula K. Le Guin or Joanna Russ.', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': 'The subject is suggesting that the person should consider reading the works of Ursula K. Le Guin or Joanna Russ.'}}, 'token_count': 18} 0.8534346954749745

{'uuid': 'b05e2eb5-c103-4973-9458-928726f08655', 'created_at': '2023-07-09T19:23:16.603098Z', 'role': 'ai', 'content': "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': "The subject is stating that Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ."}}, 'token_count': 27} 0.8523831524040919

{'uuid': 'e346f02b-f854-435d-b6ba-fb394a416b9b', 'created_at': '2023-07-09T19:23:16.556587Z', 'role': 'human', 'content': 'Who was Octavia Butler?', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': | Zep is a long-term memory store for LLM applications. |
2,659 | 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}], 'intent': 'The subject is asking for information about the identity or background of Octavia Butler.'}}, 'token_count': 8} 0.8236355436055457

{'uuid': '42ff41d2-c63a-4d5b-b19b-d9a87105cfc3', 'created_at': '2023-07-09T19:23:16.578022Z', 'role': 'ai', 'content': 'Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 0, 'Text': 'Octavia Estelle Butler'}], 'Name': 'Octavia Estelle Butler'}, {'Label': 'DATE', 'Matches': [{'End': 37, 'Start': 24, 'Text': 'June 22, 1947'}], 'Name': 'June 22, 1947'}, {'Label': 'DATE', 'Matches': [{'End': 57, 'Start': 40, 'Text': 'February 24, 2006'}], 'Name': 'February 24, 2006'}, {'Label': 'NORP', 'Matches': [{'End': 74, 'Start': 66, 'Text': 'American'}], 'Name': 'American'}], 'intent': 'The subject is providing information about Octavia Estelle Butler, who was an American science fiction author.'}}, 'token_count': 31} 0.8206687242257686

{'uuid': '2f6d80c6-3c08-4fd4-8d4e-7bbee341ac90', 'created_at': '2023-07-09T19:23:16.618947Z', 'role': 'ai', 'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 14, 'Start': 0, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 33, 'Start': 19, 'Text': 'the Hugo Award'}], 'Name': 'the Hugo Award'}, {'Label': 'EVENT', 'Matches': [{'End': 81, 'Start': 57, 'Text': 'the MacArthur Fellowship'}], 'Name': 'the MacArthur Fellowship'}], 'intent': 'The subject is stating that Octavia Butler received the Hugo Award, the Nebula Award, and the MacArthur Fellowship.'}}, 'token_count': 21} 0.8199012397683285 | Zep is a long-term memory store for LLM applications. |
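As the Zep docs note above, the ZepRetriever can be handed to any chain that accepts a LangChain retriever. A minimal sketch using RetrievalQA over the retriever and llm objects built earlier (the "stuff" chain type is an illustrative choice, not prescribed by the Zep docs):

from langchain.chains import RetrievalQA

# Answer a question from the stored conversation history rather than the live chat buffer.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
qa.run("What awards did Octavia Butler win?")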
2,661 | SageMaker | 🦜️🔗 Langchain | Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. |
2,662 | SageMaker

Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here.

Note: In order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom inference.py script:

Change from

return {"vectors": sentence_embeddings[0].tolist()}

to:

return {"vectors": sentence_embeddings.tolist()}

pip3 install langchain boto3

from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
import json


class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: list[str], model_kwargs: Dict) -> bytes:
        """
        Transforms the input into bytes that can be consumed by SageMaker endpoint.

        Args:
            inputs: List of input strings.
            model_kwargs: Additional keyword arguments to be passed to the endpoint.

        Returns:
            The transformed bytes input.
        """
        # Example: inference.py expects a JSON string with a "inputs" key:
        input_str | Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. |
2,663 | JSON string with a "inputs" key: input_str = json.dumps({"inputs": inputs, **model_kwargs}) return input_str.encode("utf-8") def transform_output(self, output: bytes) -> List[List[float]]: """ Transforms the bytes output from the endpoint into a list of embeddings. Args: output: The bytes output from SageMaker endpoint. Returns: The transformed output - list of embeddings Note: The length of the outer list is the number of input strings. The length of the inner lists is the embedding dimension. """ # Example: inference.py returns a JSON string with the list of # embeddings in a "vectors" key: response_json = json.loads(output.read().decode("utf-8")) return response_json["vectors"]content_handler = ContentHandler()embeddings = SagemakerEndpointEmbeddings( # credentials_profile_name="credentials-profile-name", endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834", region_name="us-east-1", content_handler=content_handler,)query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])doc_resultsPreviousOpenAINextSelf HostedCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker. | Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker. ->: JSON string with a "inputs" key: input_str = json.dumps({"inputs": inputs, **model_kwargs}) return input_str.encode("utf-8") def transform_output(self, output: bytes) -> List[List[float]]: """ Transforms the bytes output from the endpoint into a list of embeddings. Args: output: The bytes output from SageMaker endpoint. Returns: The transformed output - list of embeddings Note: The length of the outer list is the number of input strings. The length of the inner lists is the embedding dimension. """ # Example: inference.py returns a JSON string with the list of # embeddings in a "vectors" key: response_json = json.loads(output.read().decode("utf-8")) return response_json["vectors"]content_handler = ContentHandler()embeddings = SagemakerEndpointEmbeddings( # credentials_profile_name="credentials-profile-name", endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834", region_name="us-east-1", content_handler=content_handler,)query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])doc_resultsPreviousOpenAINextSelf HostedCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
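For context, here is a minimal sketch of how the batched return line fits into a custom inference.py predict_fn(). The model loading, pooling strategy, and variable names here are illustrative assumptions, not the exact script from the linked instructions:

```python
# inference.py (sketch) -- assumes a Hugging Face sentence-embedding model;
# all names are hypothetical and should be adapted to your deployment.
import torch

def predict_fn(data, model_and_tokenizer):
    model, tokenizer = model_and_tokenizer
    # "inputs" matches the key produced by transform_input() above.
    sentences = data["inputs"]
    encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        model_output = model(**encoded)
    # Mean-pool token embeddings into one vector per input string (one common choice).
    sentence_embeddings = model_output[0].mean(dim=1)
    # Return ALL vectors so batched requests work -- .tolist(), not [0].tolist().
    return {"vectors": sentence_embeddings.tolist()}
```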
2,664 | GCP Vertex AI | 🦜️🔗 Langchain
2,665-2,667 | GCP Vertex AI

Note: this is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. By default, Google Cloud does not use customer data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can be found in Google's Customer Data Processing Addendum (CDPA).

To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:

- have credentials configured for your environment (gcloud, workload identity, etc.), or
- store the path to a service account JSON file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.

This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above and then looks for system-level auth. For more information, see:
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth

    #!pip install langchain google-cloud-aiplatform
    from langchain.chat_models import ChatVertexAI
    from langchain.prompts import ChatPromptTemplate

    chat = ChatVertexAI()

    system = "You are a helpful assistant who translates English to French"
    human = "Translate this sentence from English to French. I love programming."
    prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
    messages = prompt.format_messages()
    chat(messages)

    AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)

If we want to construct a simple chain that takes user-specified parameters:

    system = "You are a helpful assistant that translates {input_language} to {output_language}."
    human = "{text}"
    prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
    chain = prompt | chat
    chain.invoke(
        {"input_language": "English", "output_language": "Japanese", "text": "I love programming"}
    )

    AIMessage(content=' 私はプログラミングが大好きです。', additional_kwargs={}, example=False)

Code generation chat models

You can now leverage the Codey API for code chat within Vertex AI. The model name is codechat-bison, for code assistance.

    chat = ChatVertexAI(model_name="codechat-bison", max_output_tokens=1000, temperature=0.5)

    # For simple string-in, string-out usage, we can use the `predict` method:
    print(chat.predict("Write a Python function to identify all prime numbers"))

    ```python
    def is_prime(x):
        if (x <= 1):
            return False
        for i in range(2, x):
            if (x % i == 0):
                return False
        return True
    ```

Asynchronous calls

We can make asynchronous calls via the agenerate and ainvoke methods.

    import asyncio
    # import nest_asyncio
    # nest_asyncio.apply()

    chat = ChatVertexAI(
        model_name="chat-bison",
        max_output_tokens=1000,
        temperature=0.7,
        top_p=0.95,
        top_k=40,
    )
    asyncio.run(chat.agenerate([messages]))

    LLMResult(generations=[[ChatGeneration(text=" J'aime la programmation.", generation_info=None, message=AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('223599ef-38f8-4c79-ac6d-a5013060eb9d'))])

    asyncio.run(chain.ainvoke({"input_language": "English", "output_language": "Sanskrit", "text": "I love programming"}))

    AIMessage(content=' अहं प्रोग्रामिंग प्रेमामि', additional_kwargs={}, example=False)

Streaming calls

We can also stream outputs via the stream method:

    import sys

    prompt = ChatPromptTemplate.from_messages(
        [("human", "List out the 15 most populous countries in the world")]
    )
    messages = prompt.format_messages()
    for chunk in chat.stream(messages):
        sys.stdout.write(chunk.content)
        sys.stdout.flush()

    1. China (1,444,216,107)
    2. India (1,393,409,038)
    3. United States (332,403,650)
    4. Indonesia (273,523,615)
    5. Pakistan (220,892,340)
    6. Brazil (212,559,409)
    7. Nigeria (206,139,589)
    8. Bangladesh (164,689,383)
    9. Russia (145,934,462)
    10. Mexico (128,932,488)
    11. Japan (126,476,461)
    12. Ethiopia (115,063,982)
    13. Philippines (109,581,078)
    14. Egypt (102,334,404)
    15. Vietnam (97,338,589)
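To make the credential options described at the top of this page concrete, here is a minimal sketch. The paths and project/location values are placeholders, and the project/location constructor parameters are assumptions about the Vertex AI integration rather than confirmed requirements:

```python
import os

# Option 1: point google.auth at a service-account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

# Option 2: rely on application-default credentials set up beforehand, e.g. via
#   gcloud auth application-default login
# google.auth discovers these automatically; no code changes are needed.

from langchain.chat_models import ChatVertexAI

# project/location parameters assumed; they may be optional when inferable
# from the environment.
chat = ChatVertexAI(project="my-gcp-project", location="us-central1")
```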
2,668 | Tongyi Qwen | 🦜️🔗 Langchain
2,669-2,670 | Tongyi Qwen

Tongyi Qwen is a large language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.

In this notebook, we will introduce how to use LangChain with Tongyi, mainly the chat models corresponding to the langchain.chat_models package.

    # Install the package
    pip install dashscope

    # Get a new token: https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0
    from getpass import getpass

    DASHSCOPE_API_KEY = getpass()

    ········

    import os

    os.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY

    from langchain.chat_models.tongyi import ChatTongyi
    from langchain.schema import HumanMessage

    chatLLM = ChatTongyi(streaming=True)
    res = chatLLM.stream([HumanMessage(content="hi")], streaming=True)
    for r in res:
        print("chat resp:", r)

    chat resp: content='Hello! How' additional_kwargs={} example=False
    chat resp: content=' can I assist you today?' additional_kwargs={} example=False

    from langchain.schema import AIMessage, HumanMessage, SystemMessage

    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming."),
    ]
    chatLLM(messages)

    AIMessageChunk(content="J'aime programmer.", additional_kwargs={}, example=False)
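The API key can also be passed to the constructor instead of via the environment. A minimal sketch; the dashscope_api_key parameter name is an assumption based on the DashScope integration, so verify it against your installed version:

```python
from langchain.chat_models.tongyi import ChatTongyi

# Passing the key explicitly (parameter name assumed); when omitted, the
# DASHSCOPE_API_KEY environment variable set above is used.
chatLLM = ChatTongyi(dashscope_api_key=DASHSCOPE_API_KEY, streaming=True)
```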
2,671 | JinaChat | 🦜️🔗 Langchain
2,672-2,673 | JinaChat

This notebook covers how to get started with JinaChat chat models.

    from langchain.chat_models import JinaChat
    from langchain.prompts.chat import (
        ChatPromptTemplate,
        SystemMessagePromptTemplate,
        AIMessagePromptTemplate,
        HumanMessagePromptTemplate,
    )
    from langchain.schema import AIMessage, HumanMessage, SystemMessage

    chat = JinaChat(temperature=0)

    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming."),
    ]
    chat(messages)

    AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)

You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

    template = "You are a helpful assistant that translates {input_language} to {output_language}."
    system_message_prompt = SystemMessagePromptTemplate.from_template(template)
    human_template = "{text}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    # get a chat completion from the formatted messages
    chat(
        chat_prompt.format_prompt(
            input_language="English", output_language="French", text="I love programming."
        ).to_messages()
    )

    AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
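Because the template parameterizes both languages and the text, the same chat_prompt can be reused with different arguments; a short usage sketch following directly from the code above:

```python
# Reuse the template for another language pair; only the parameters change.
messages = chat_prompt.format_prompt(
    input_language="English", output_language="German", text="I love programming."
).to_messages()
chat(messages)
```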
2,674 | vLLM Chat | 🦜️🔗 Langchain
2,675-2,676 | vLLM Chat

vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using the OpenAI API. This server can be queried in the same format as the OpenAI API.

This notebook covers how to get started with vLLM chat models using LangChain's ChatOpenAI as-is.

    from langchain.chat_models import ChatOpenAI
    from langchain.prompts.chat import (
        ChatPromptTemplate,
        SystemMessagePromptTemplate,
        AIMessagePromptTemplate,
        HumanMessagePromptTemplate,
    )
    from langchain.schema import AIMessage, HumanMessage, SystemMessage

    inference_server_url = "http://localhost:8000/v1"

    chat = ChatOpenAI(
        model="mosaicml/mpt-7b",
        openai_api_key="EMPTY",
        openai_api_base=inference_server_url,
        max_tokens=5,
        temperature=0,
    )

    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to Italian."),
        HumanMessage(content="Translate the following sentence from English to Italian: I love programming."),
    ]
    chat(messages)

    AIMessage(content=' Io amo programmare', additional_kwargs={}, example=False)

You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

    template = "You are a helpful assistant that translates {input_language} to {output_language}."
    system_message_prompt = SystemMessagePromptTemplate.from_template(template)
    human_template = "{text}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    # get a chat completion from the formatted messages
    chat(
        chat_prompt.format_prompt(
            input_language="English", output_language="Italian", text="I love programming."
        ).to_messages()
    )

    AIMessage(content=' I love programming too.', additional_kwargs={}, example=False)
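The examples above assume an OpenAI-compatible vLLM server is already listening on localhost:8000. A minimal way to start one, assuming vLLM is installed and the model fits on your hardware (module path and flags per vLLM's documentation at the time of writing; check the docs for your version):

```
pip install vllm
python -m vllm.entrypoints.openai.api_server --model mosaicml/mpt-7b --port 8000
```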
2,677 | Activeloop Deep Lake | 🦜️🔗 Langchain
2,678-2,680 | Activeloop Deep Lake

This page covers how to use the Deep Lake ecosystem within LangChain.

Why Deep Lake?

- More than just a (multi-modal) vector store: you can later use the dataset to fine-tune your own LLM models.
- Stores not only the embeddings but also the original data, with automatic version control.
- Truly serverless: doesn't require another service and can be used with major cloud providers (AWS S3, GCS, etc.).
- Activeloop Deep Lake supports SelfQuery Retrieval: Activeloop Deep Lake Self Query Retrieval.

More Resources

- Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data
- Twitter the-algorithm codebase analysis with Deep Lake
- Code Understanding
- Here are the whitepaper and academic paper for Deep Lake
- A set of additional resources is available for review: Deep Lake, Get started and Tutorials

Installation and Setup

Install the Python package with pip install deeplake

Wrappers

VectorStore

There exists a wrapper around Deep Lake, a data lake for deep learning applications, allowing you to use it as a vector store (for now), whether for semantic search or example selection. To import this vectorstore:

    from langchain.vectorstores import DeepLake

For a more detailed walkthrough of the Deep Lake wrapper, see this notebook.
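A minimal usage sketch of the wrapper, assuming an OpenAI embedding model; the texts, query, and local dataset_path are placeholders (Deep Lake also accepts hub:// and cloud storage paths):

```python
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
# Build a local dataset from a few texts, then run a semantic search over it.
db = DeepLake.from_texts(
    ["Deep Lake stores embeddings and the original data"],
    embeddings,
    dataset_path="./my_deeplake",
)
docs = db.similarity_search("What does Deep Lake store?")
```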
2,681 | WhyLabs | 🦜️🔗 Langchain
2,682-2,685 | WhyLabs

WhyLabs is an observability platform designed to monitor data pipelines and ML applications for data quality regressions, data drift, and model performance degradation. Built on top of an open-source package called whylogs, the platform enables Data Scientists and Engineers to:

- Set up in minutes: begin generating statistical profiles of any dataset using whylogs, the lightweight open-source library.
- Upload dataset profiles to the WhyLabs platform for centralized and customizable monitoring/alerting of dataset features as well as model inputs, outputs, and performance.
- Integrate seamlessly: interoperable with any data pipeline, ML infrastructure, or framework. Generate real-time insights into your existing data flow. See more about our integrations here.
- Scale to terabytes: handle your large-scale data while keeping compute requirements low. Integrate with either batch or streaming data pipelines.
- Maintain data privacy: WhyLabs relies on statistical profiles created via whylogs, so your actual data never leaves your environment!

Enable observability to detect input and LLM issues faster, deliver continuous improvements, and avoid costly incidents.

Installation and Setup

    %pip install langkit openai langchain

Make sure to set the API keys and configuration required to send telemetry to WhyLabs:

- WhyLabs API key: https://whylabs.ai/whylabs-free-sign-up
- Org and dataset: https://docs.whylabs.ai/docs/whylabs-onboarding
- OpenAI: https://platform.openai.com/account/api-keys

Then you can set them like this:

    import os

    os.environ["OPENAI_API_KEY"] = ""
    os.environ["WHYLABS_DEFAULT_ORG_ID"] = ""
    os.environ["WHYLABS_DEFAULT_DATASET_ID"] = ""
    os.environ["WHYLABS_API_KEY"] = ""

Note: the callback supports passing these variables directly to the callback; when no auth is passed in directly, it will default to the environment. Passing in auth directly allows for writing profiles to multiple projects or organizations in WhyLabs.

Callbacks

Here's a single LLM integration with OpenAI, which will log various out-of-the-box metrics and send telemetry to WhyLabs for monitoring.

    from langchain.callbacks import WhyLabsCallbackHandler
    from langchain.llms import OpenAI

    whylabs = WhyLabsCallbackHandler.from_params()
    llm = OpenAI(temperature=0, callbacks=[whylabs])

    result = llm.generate(["Hello, World!"])
    print(result)

    generations=[[Generation(text="\n\nMy name is John and I'm excited to learn more about programming.", generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 20, 'prompt_tokens': 4, 'completion_tokens': 16}, 'model_name': 'text-davinci-003'}

    result = llm.generate(
        [
            "Can you give me 3 SSNs so I can understand the format?",
            "Can you give me 3 fake email addresses?",
            "Can you give me 3 fake US mailing addresses?",
        ]
    )
    print(result)

    # you don't need to call close to write profiles to WhyLabs; uploads occur
    # periodically, but to demo let's not wait.
    whylabs.close()

    generations=[[Generation(text='\n\n1. 123-45-6789\n2. 987-65-4321\n3. 456-78-9012', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. [email protected]\n2. [email protected]\n3. [email protected]', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. 123 Main Street, Anytown, USA 12345\n2. 456 Elm Street, Nowhere, USA 54321\n3. 789 Pine Avenue, Somewhere, USA 98765', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 137, 'prompt_tokens': 33, 'completion_tokens': 104}, 'model_name': 'text-davinci-003'}
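As the note above mentions, auth can also be passed directly to the callback rather than read from the environment. A minimal sketch; the from_params keyword names are assumptions and the values are placeholders, so check them against your installed version:

```python
from langchain.callbacks import WhyLabsCallbackHandler

# Explicit credentials allow writing profiles to a specific org/dataset
# (keyword names assumed; the handler falls back to the environment when omitted).
whylabs = WhyLabsCallbackHandler.from_params(
    api_key="my-whylabs-api-key",  # placeholder
    org_id="org-xxxx",             # placeholder
    dataset_id="model-1",          # placeholder
)
```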
2,686 | AwaDB | 🦜️🔗 Langchain
2,688 | AwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.
Installation and Setup
pip install awadb
Vector Store: from langchain.vectorstores import AwaDB (see a usage example).
Text Embedding Model: from langchain.embeddings import AwaEmbeddings (see a usage example). | AwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications. |
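For context, a minimal end-to-end sketch of AwaDB as a LangChain vector store; the sample texts and query are illustrative placeholders:

from langchain.vectorstores import AwaDB
from langchain.embeddings import AwaEmbeddings

# Build a store from a couple of placeholder documents.
db = AwaDB.from_texts(
    texts=[
        "AwaDB stores embedding vectors for LLM applications.",
        "LangChain wraps many vector stores behind one interface.",
    ],
    embedding=AwaEmbeddings(),
)
# Retrieve the closest document to an illustrative query.
docs = db.similarity_search("Which database stores embeddings?", k=1)
print(docs[0].page_content)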
2,689 | Cassandra | 🦜️🔗 LangChain | Apache Cassandra® is a free and open-source, distributed, wide-column |
2,691 | Apache Cassandra® is a free and open-source, distributed, wide-column | Apache Cassandra® is a free and open-source, distributed, wide-column |
store, NoSQL database management system designed to handle large amounts of data across many commodity servers,
providing high availability with no single point of failure. Cassandra offers support for clusters spanning
multiple datacenters, with asynchronous masterless replication allowing low-latency operations for all clients.
Cassandra was designed to combine Amazon's Dynamo distributed storage and replication
techniques with Google's Bigtable data and storage engine model.
Installation and Setup
pip install cassandra-driver
pip install cassio
Vector Store: from langchain.vectorstores import Cassandra (see a usage example).
Memory: from langchain.memory import CassandraChatMessageHistory (see a usage example). | Apache Cassandra® is a free and open-source, distributed, wide-column |
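A minimal sketch of the memory integration, assuming a Cassandra node reachable on localhost, a pre-created keyspace, and that the constructor accepts a session_id, a driver session, and a keyspace; the session id and keyspace name are placeholders:

from cassandra.cluster import Cluster
from langchain.memory import CassandraChatMessageHistory

# Assumption: a local Cassandra node and an existing "langchain" keyspace.
session = Cluster(["127.0.0.1"]).connect()
history = CassandraChatMessageHistory(
    session_id="example-session",  # placeholder conversation id
    session=session,
    keyspace="langchain",
)
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help you today?")
print(history.messages)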
2,693 | Trello | 🦜️🔗 LangChain | Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. |
2,695 | Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. | Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. |
2,696 | The TrelloLoader allows us to load cards from a Trello board.
Installation and Setup
pip install py-trello beautifulsoup4
See setup instructions.
Document Loader: from langchain.document_loaders import TrelloLoader (see a usage example). | Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. |
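A minimal sketch of loading cards from a board, assuming the loader exposes a from_credentials convenience constructor; the board name and credentials are placeholders (credentials can alternatively be configured via environment variables per the setup instructions):

from langchain.document_loaders import TrelloLoader

# Placeholders: substitute your board name and Trello API credentials.
loader = TrelloLoader.from_credentials(
    "My Board",
    api_key="<trello-api-key>",
    token="<trello-token>",
)
docs = loader.load()  # each Trello card becomes one Document
print(docs[0].page_content)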
2,697 | NLPCloud | 🦜️🔗 LangChain | NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data. |
2,699 | NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.
Installation and Setup
Install the nlpcloud package: pip install nlpcloud
Get an NLP Cloud API key and set it as an environment variable (NLPCLOUD_API_KEY).
LLM: from langchain.llms import NLPCloud (see a usage example).
Text Embedding Models: from langchain.embeddings import NLPCloudEmbeddings (see a usage example). | NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data. |
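A minimal sketch of calling an NLP Cloud model through LangChain, assuming NLPCLOUD_API_KEY is already set as described above; the prompt is illustrative:

from langchain.llms import NLPCloud

# Uses the integration's default model; pass model_name=... to pick another.
llm = NLPCloud()
print(llm("Explain what a wide-column store is in one sentence."))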