Dataset columns:
Unnamed: 0: int64, values 0 to 4.66k
page content: string, lengths 23 to 2k
description: string, lengths 8 to 925
output: string, lengths 38 to 2.93k
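Each row below carries a scraped LangChain documentation page ("page content"), a one-line summary ("description"), and an "output" field. A minimal sketch of loading and inspecting rows with this shape, assuming the dump has been exported to CSV; the file name embedding_docs.csv is a placeholder, not part of the dataset:

import pandas as pd

# Hypothetical export of this dump; adjust the path to wherever the data lives.
df = pd.read_csv("embedding_docs.csv")

row = df.iloc[0]
print(row["description"])           # short summary of the scraped page
print(row["page content"][:200])    # beginning of the scraped page text
print(len(df), "rows loaded")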
2,300
Self Hosted

Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.

from langchain.embeddings import (
    SelfHostedEmbeddings,
    SelfHostedHuggingFaceEmbeddings,
    SelfHostedHuggingFaceInstructEmbeddings,
)
import runhouse as rh

# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key': '<path_to_key>'},
#                  name='my-cluster')

embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)
text = "This is a test document."
query_result = embeddings.embed_query(text)

And similarly for SelfHostedHuggingFaceInstructEmbeddings:

embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)

Now let's load an embedding model with a custom load function:

def get_pipeline():
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        pipeline,
    )  # Must be inside the function in notebooks

    model_id =
Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.
2,301
be inside the function in notebooks
    model_id = "facebook/bart-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)

def inference_fn(pipeline, prompt):
    # Return last hidden state of the model
    if isinstance(prompt, list):
        return [emb[0][-1] for emb in pipeline(prompt)]
    return pipeline(prompt)[0][-1]

embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
    inference_fn=inference_fn,
)
query_result = embeddings.embed_query(text)
Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.
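The inference_fn above takes the last token's hidden state as the embedding. A mean-pooling variant is another common choice; the sketch below is illustrative only (mean_pool_inference_fn is not part of the page above) and follows the same [batch][0][token] indexing as the original function:

import numpy as np

def mean_pool_inference_fn(pipeline, prompt):
    # Average all token vectors instead of keeping only the last hidden state.
    if isinstance(prompt, list):
        return [np.mean(emb[0], axis=0).tolist() for emb in pipeline(prompt)]
    return np.mean(pipeline(prompt)[0], axis=0).tolist()

# Drop-in replacement for the original inference_fn:
# embeddings = SelfHostedEmbeddings(
#     model_load_fn=get_pipeline,
#     hardware=gpu,
#     model_reqs=["./", "torch", "transformers", "numpy"],
#     inference_fn=mean_pool_inference_fn,
# )

Mean pooling tends to give a more stable sentence representation than a single token's state, though either works for a quick test.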
2,302
InstructEmbeddings

Let's load the HuggingFace instruct Embeddings class.

from langchain.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)

    load INSTRUCTOR_Transformer
    max_seq_length  512

text = "This is a test document."
query_result = embeddings.embed_query(text)
Let's load the HuggingFace instruct Embeddings class.
2,303
ModelScope

ModelScope is a big repository of models and datasets.

Let's load the ModelScope Embedding class.

from langchain.embeddings import ModelScopeEmbeddings

model_id = "damo/nlp_corom_sentence-embedding_english-base"
embeddings = ModelScopeEmbeddings(model_id=model_id)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_results = embeddings.embed_documents(["foo"])
ModelScope is a big repository of models and datasets.
2,304
AwaDB

AwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.

This notebook explains how to use AwaEmbeddings in LangChain.

# pip install awadb

Import the library

from langchain.embeddings import AwaEmbeddings

Embedding = AwaEmbeddings()

Set embedding model

Users can use Embedding.set_model() to specify the embedding model. The input of this function is a string representing the model's name. The list of currently supported models can be obtained here. The default model is all-mpnet-base-v2; it can be used without setting one.

text = "our embedding test"
Embedding.set_model("all-mpnet-base-v2")
res_query = Embedding.embed_query("The test information")
res_document = Embedding.embed_documents(["test1", "another test"])
AwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.
2,305
TensorflowHub

Let's load the TensorflowHub Embedding class.

from langchain.embeddings import TensorflowHubEmbeddings

embeddings = TensorflowHubEmbeddings()

    2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-01-30 23:53:34.362802: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_results = embeddings.embed_documents(["foo"])
doc_results
Let's load the TensorflowHub Embedding class.
2,306
GPT4All | 🦜️🔗 Langchain
GPT4All is a free-to-use, locally running, privacy-aware chatbot. There is no GPU or internet required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc.
2,307
GPT4All

GPT4All is a free-to-use, locally running, privacy-aware chatbot. There is no GPU or internet required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc.

This notebook explains how to use GPT4All embeddings with LangChain.

Install GPT4All's Python Bindings

%pip install gpt4all > /dev/null

Note: you may need to restart the kernel to use updated packages.

from langchain.embeddings import GPT4AllEmbeddings

gpt4all_embd = GPT4AllEmbeddings()

    100%|████████████████████████| 45.5M/45.5M [00:02<00:00, 18.5MiB/s]
    Model downloaded at: /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin
    objc[45711]: Class GGMLMetalClass is implemented in both /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x29fe18208) and /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x2a0244208). One of the two will be used. Which one is undefined.

text = "This is a test document."

Embed the Textual Data

query_result = gpt4all_embd.embed_query(text)

With embed_documents you can embed multiple pieces of
GPT4All is a free-to-use, locally running, privacy-aware chatbot. There is no GPU or internet required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc.
2,308
embed_documents you can embed multiple pieces of text. You can also map these embeddings with Nomic's Atlas to see a visual representation of your data.

doc_result = gpt4all_embd.embed_documents([text])
GPT4All is a free-to-use, locally running, privacy-aware chatbot. There is no GPU or internet required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc.
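As a quick local check of the document embeddings (separate from the Atlas visualization mentioned above, and not part of the original page), one can compare a few embedded texts with cosine similarity; the texts here are arbitrary examples:

import numpy as np
from langchain.embeddings import GPT4AllEmbeddings

gpt4all_embd = GPT4AllEmbeddings()
texts = ["This is a test document.", "A completely unrelated sentence about cooking."]
doc_vectors = [np.array(v) for v in gpt4all_embd.embed_documents(texts)]
query_vector = np.array(gpt4all_embd.embed_query("test document"))

for text, vec in zip(texts, doc_vectors):
    # Cosine similarity between the query and each embedded document
    sim = np.dot(query_vector, vec) / (np.linalg.norm(query_vector) * np.linalg.norm(vec))
    print(f"{text!r}: {sim:.3f}")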
2,309
Xorbits inference (Xinference) | 🦜️🔗 Langchain
This notebook goes over how to use Xinference embeddings within LangChain
2,310
Xorbits inference (Xinference)

This notebook goes over how to use Xinference embeddings within LangChain.

Installation

Install Xinference through PyPI:

%pip install "xinference[all]"

Deploy Xinference Locally or in a Distributed Cluster.

For local deployment, run xinference. To deploy Xinference in a cluster, first start an Xinference supervisor using xinference-supervisor. You can also use the option -p to specify the port and -H to specify the host. The default port is 9997.

Then, start the Xinference workers using xinference-worker on each server you want to run them on. You can consult the README file from Xinference for more information.

Wrapper

To use Xinference with LangChain, you need to first launch a model. You can use the command line interface (CLI) to do so:

xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0

    Model uid: 915845ee-2a04-11ee-8ed4-d29396a3f064

A model UID is returned for you to use. Now you can use Xinference embeddings with LangChain:

from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid="915845ee-2a04-11ee-8ed4-d29396a3f064",
)
query_result = xinference.embed_query("This is a test
This notebook goes over how to use Xinference embeddings within LangChain
2,311
= xinference.embed_query("This is a test query")
doc_result = xinference.embed_documents(["text A", "text B"])

Lastly, terminate the model when you do not need to use it:

xinference terminate --model-uid "915845ee-2a04-11ee-8ed4-d29396a3f064"
This notebook goes over how to use Xinference embeddings within LangChain
2,312
DeepInfra | 🦜️🔗 Langchain
DeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.
2,313
DeepInfra

DeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.

# sign up for an account: https://deepinfra.com/login?utm_source=langchain
from getpass import getpass

DEEPINFRA_API_TOKEN = getpass()

    ········

import os

os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN

from langchain.embeddings import DeepInfraEmbeddings

embeddings = DeepInfraEmbeddings(
    model_id="sentence-transformers/clip-ViT-B-32",
    query_instruction="",
    embed_instruction="",
)

docs = ["Dog is not a cat", "Beta is the second letter of Greek alphabet"]
document_result = embeddings.embed_documents(docs)

query = "What is the first letter of Greek alphabet"
query_result = embeddings.embed_query(query)

import numpy as np

query_numpy = np.array(query_result)
for doc_res, doc in zip(document_result, docs):
    document_numpy = np.array(doc_res)
    similarity = np.dot(query_numpy, document_numpy) / (
        np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)
    )
    print(f'Cosine similarity between "{doc}" and query: {similarity}')

    Cosine similarity between "Dog is not a cat" and query: 0.7489097144129355
DeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.
2,314
is not a cat" and query: 0.7489097144129355
    Cosine similarity between "Beta is the second letter of Greek alphabet" and query: 0.9519380640702013
DeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.
2,315
Ollama | 🦜️🔗 Langchain
Let's load the Ollama Embeddings class.
2,316
Ollama

Let's load the Ollama Embeddings class.

from langchain.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings()
text = "This is a test document."

To generate embeddings, you can either query an individual text, or you can query a list of texts.

query_result = embeddings.embed_query(text)
query_result[:5]

    [-0.09996652603149414, 0.015568195842206478, 0.17670190334320068, 0.16521021723747253, 0.21193109452724457]

doc_result = embeddings.embed_documents([text])
doc_result[0][:5]

    [-0.04242777079343796, 0.016536075621843338, 0.10052520781755447, 0.18272875249385834, 0.2079043835401535]

Let's load the Ollama Embeddings class with a smaller model (e.g. llama2:7b). Note: see other supported models at https://ollama.ai/library

embeddings = OllamaEmbeddings(model="llama2:7b")
text = "This is a test document."
query_result = embeddings.embed_query(text)
query_result[:5]

    [-0.09996627271175385, 0.015567859634757042, 0.17670205235481262, 0.16521376371383667, 0.21193283796310425]

doc_result = embeddings.embed_documents([text])
doc_result[0][:5]

    [-0.042427532374858856, 0.01653730869293213, 0.10052604228258133, 0.18272635340690613, 0.20790338516235352]
Let's load the Ollama Embeddings class.
2,317
0.20790338516235352]
Let's load the Ollama Embeddings class.
2,318
Hugging Face | 🦜️🔗 Langchain
Let's load the Hugging Face Embedding class.
2,319
Hugging Face

Let's load the Hugging Face Embedding class.

pip install langchain sentence_transformers

from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
query_result[:3]

    [-0.04895168915390968, -0.03986193612217903, -0.021562768146395683]

doc_result = embeddings.embed_documents([text])

Hugging Face Inference API

We can also access embedding models via the Hugging Face Inference API, which does not require us to install sentence_transformers and download models locally.

import getpass

inference_api_key = getpass.getpass("Enter your HF Inference API Key:\n\n")

    Enter your HF Inference API Key:
    ········

from langchain.embeddings import HuggingFaceInferenceAPIEmbeddings

embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key=inference_api_key, model_name="sentence-transformers/all-MiniLM-l6-v2"
)

query_result = embeddings.embed_query(text)
query_result[:3]

    [-0.038338541984558105, 0.1234646737575531, -0.028642963618040085]
Let's load the Hugging Face Embedding class.
2,320
Fake Embeddings

LangChain also provides a fake embedding class. You can use this to test your pipelines.

from langchain.embeddings import FakeEmbeddings

embeddings = FakeEmbeddings(size=1352)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
LangChain also provides a fake embedding class. You can use this to test your pipelines.
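Since FakeEmbeddings returns random vectors of a fixed size, it is handy for exercising a retrieval pipeline without calling a real model. A minimal sketch, assuming faiss-cpu is installed; the FAISS wiring below is an illustration, not part of the page above:

from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import FAISS

# Random vectors: the search result is arbitrary, but the pipeline plumbing is exercised.
embeddings = FakeEmbeddings(size=1352)
db = FAISS.from_texts(["doc one", "doc two", "doc three"], embeddings)
retriever = db.as_retriever(search_kwargs={"k": 1})
print(retriever.get_relevant_documents("anything at all"))

Because the vectors are random, assertions in a test should target shapes and plumbing (documents returned, metadata preserved) rather than ranking quality.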
2,321
BGE on Hugging Face

BGE models on Hugging Face are among the best open-source embedding models. The BGE model is created by the Beijing Academy of Artificial Intelligence (BAAI). BAAI is a private non-profit organization engaged in AI research and development.

This notebook shows how to use BGE Embeddings through Hugging Face.

#!pip install sentence_transformers

from langchain.embeddings import HuggingFaceBgeEmbeddings

model_name = "BAAI/bge-small-en"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceBgeEmbeddings(
    model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)

embedding = hf.embed_query("hi this is harrison")
len(embedding)

    384
BGE models on Hugging Face are among the best open-source embedding models.
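With normalize_embeddings set to True, the returned vectors have unit length, so a plain dot product already equals cosine similarity. A small check of that property, illustrative only and reusing the hf instance above:

import numpy as np

vec_a = np.array(hf.embed_query("hi this is harrison"))
vec_b = np.array(hf.embed_query("hello, this is harrison"))

print(np.linalg.norm(vec_a))        # ~1.0 because of normalize_embeddings=True
print(float(np.dot(vec_a, vec_b)))  # dot product equals cosine similarity for unit vectors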
2,322
MiniMax

MiniMax offers an embeddings service.

This example goes over how to use LangChain to interact with MiniMax Inference for text embedding.

import os

os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"
os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"

from langchain.embeddings import MiniMaxEmbeddings

embeddings = MiniMaxEmbeddings()

query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)

document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])

import numpy as np

query_numpy = np.array(query_result)
document_numpy = np.array(document_result[0])
similarity = np.dot(query_numpy, document_numpy) / (
    np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)
)
print(f"Cosine similarity between document and query: {similarity}")

    Cosine similarity between document and query: 0.1573236279277012
MiniMax offers an embeddings service.
2,323
Clarifai | 🦜️🔗 Langchain
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
2,324
Clarifai

Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.

This example goes over how to use LangChain to interact with Clarifai models. Text embedding models in particular can be found here.

To use Clarifai, you must have an account and a Personal Access Token (PAT) key.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
2,325
Check here to get or create a PAT.

Dependencies

# Install required dependencies
pip install clarifai

Imports

Here we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.

# Please login and get your API key from https://clarifai.com/settings/security
from getpass import getpass

CLARIFAI_PAT = getpass()

    ········

# Import the required modules
from langchain.embeddings import ClarifaiEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

Input

Create a prompt template to be used with the LLM Chain:

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Setup

Set the user id and app id to the application in which the model resides. You can find a list of public models on https://clarifai.com/explore/models

You will also have to initialize the model id and, if needed, the model version id. Some models have many versions; you can choose the one appropriate for your task.

USER_ID = "salesforce"
APP_ID = "blip"
MODEL_ID = "multimodal-embedder-blip-2"

# You can provide a specific model version as the model_version_id arg.
# MODEL_VERSION_ID = "MODEL_VERSION_ID"

# Initialize a Clarifai embedding model
embeddings = ClarifaiEmbeddings(
    pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID
)

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. ->: Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.# Please login and get your API key from https://clarifai.com/settings/securityfrom getpass import getpassCLARIFAI_PAT = getpass() ········# Import the required modulesfrom langchain.embeddings import ClarifaiEmbeddingsfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainInputCreate a prompt template to be used with the LLM Chain:template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])SetupSet the user id and app id to the application in which the model resides. You can find a list of public models on https://clarifai.com/explore/modelsYou will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.USER_ID = "salesforce"APP_ID = "blip"MODEL_ID = "multimodal-embedder-blip-2"# You can provide a specific model version as the model_version_id arg.# MODEL_VERSION_ID = "MODEL_VERSION_ID"# Initialize a Clarifai embedding modelembeddings = ClarifaiEmbeddings( pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousBGE on Hugging FaceNextCohereCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
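The Clarifai notebook stops after producing query_result and doc_result. A minimal sketch of how those vectors might be compared, assuming the same salesforce/blip multimodal-embedder-blip-2 model and a valid PAT (the credential strings below are placeholders, and numpy is used only for the cosine arithmetic):

import numpy as np
from langchain.embeddings import ClarifaiEmbeddings

# Placeholder credentials/IDs; reuse the values from the setup above.
embeddings = ClarifaiEmbeddings(
    pat="CLARIFAI_PAT",
    user_id="salesforce",
    app_id="blip",
    model_id="multimodal-embedder-blip-2",
)

docs = ["This is a test document.", "Clarifai hosts embedding and LLM models."]
doc_vectors = embeddings.embed_documents(docs)
query_vector = np.array(embeddings.embed_query("a short test document"))

# Rank the documents by cosine similarity to the query.
for doc, vec in zip(docs, doc_vectors):
    vec = np.array(vec)
    score = float(query_vector @ vec / (np.linalg.norm(query_vector) * np.linalg.norm(vec)))
    print(f"{score:.3f}  {doc}")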
2,326
Llama-cpp | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsLlama-cppLlama-cppThis notebook goes over how to use Llama-cpp embeddings within LangChainpip install llama-cpp-pythonfrom langchain.embeddings import LlamaCppEmbeddingsllama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")text = "This is a test document."query_result = llama.embed_query(text)doc_result = llama.embed_documents([text])PreviousJinaNextLLMRailsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over how to use Llama-cpp embeddings within LangChain
This notebook goes over how to use Llama-cpp embeddings within LangChain ->: Llama-cpp | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsLlama-cppLlama-cppThis notebook goes over how to use Llama-cpp embeddings within LangChainpip install llama-cpp-pythonfrom langchain.embeddings import LlamaCppEmbeddingsllama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")text = "This is a test document."query_result = llama.embed_query(text)doc_result = llama.embed_documents([text])PreviousJinaNextLLMRailsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
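A slightly fuller Llama-cpp sketch, assuming a local GGML model file at the placeholder path; the n_ctx and n_batch options mirror the llama-cpp-python backend and should be treated as assumptions to verify against your installed version:

from langchain.embeddings import LlamaCppEmbeddings

# The model path is a placeholder; point it at a model you have downloaded.
llama = LlamaCppEmbeddings(
    model_path="/path/to/model/ggml-model-q4_0.bin",
    n_ctx=2048,   # context window used while embedding (assumed option)
    n_batch=512,  # tokens processed per batch (assumed option)
)

docs = ["This is a test document.", "Llamas are members of the camelid family."]
doc_vectors = llama.embed_documents(docs)
print(len(doc_vectors), len(doc_vectors[0]))  # number of documents, embedding dimensionality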
2,327
EDEN AI | 🦜️🔗 Langchain
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/) ->: EDEN AI | 🦜️🔗 Langchain
2,328
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsEDEN AIOn this pageEDEN AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)This example goes over how to use LangChain to interact with Eden AI embedding modelsAccessing the EDENAI's API requires an API key, which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settingsOnce we have a key we'll want to set it as an environment variable by running:export EDENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter when initiating the EdenAI embedding class:from langchain.embeddings.edenai import EdenAiEmbeddingsembeddings = EdenAiEmbeddings(edenai_api_key="...",provider="...")Calling a model​The EdenAI API brings together various providers.To access a specific model, you can simply use the "provider" when
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/) ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsEDEN AIOn this pageEDEN AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)This example goes over how to use LangChain to interact with Eden AI embedding modelsAccessing the EDENAI's API requires an API key, which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settingsOnce we have a key we'll want to set it as an environment variable by running:export EDENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter when initiating the EdenAI embedding class:from langchain.embeddings.edenai import EdenAiEmbeddingsembeddings = EdenAiEmbeddings(edenai_api_key="...",provider="...")Calling a model​The EdenAI API brings together various providers.To access a specific model, you can simply use the "provider" when
2,329
model, you can simply use the "provider" when calling.embeddings = EdenAiEmbeddings(provider="openai")docs = ["It's raining right now", "cats are cute"]document_result = embeddings.embed_documents(docs)query = "my umbrella is broken"query_result = embeddings.embed_query(query)import numpy as npquery_numpy = np.array(query_result)for doc_res, doc in zip(document_result, docs): document_numpy = np.array(doc_res) similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy) ) print(f'Cosine similarity between "{doc}" and query: {similarity}') Cosine similarity between "It's raining right now" and query: 0.849261496107252 Cosine similarity between "cats are cute" and query: 0.7525900655705218PreviousDeepInfraNextElasticsearchCalling a modelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/) ->: model, you can simply use the "provider" when calling.embeddings = EdenAiEmbeddings(provider="openai")docs = ["It's raining right now", "cats are cute"]document_result = embeddings.embed_documents(docs)query = "my umbrella is broken"query_result = embeddings.embed_query(query)import numpy as npquery_numpy = np.array(query_result)for doc_res, doc in zip(document_result, docs): document_numpy = np.array(doc_res) similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy) ) print(f'Cosine similarity between "{doc}" and query: {similarity}') Cosine similarity between "It's raining right now" and query: 0.849261496107252 Cosine similarity between "cats are cute" and query: 0.7525900655705218PreviousDeepInfraNextElasticsearchCalling a modelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
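If the key lives in the environment rather than being passed explicitly, the same flow can be written as below. EDENAI_API_KEY and the "openai" provider come from this page; the rest is a hedged sketch:

import os
from langchain.embeddings.edenai import EdenAiEmbeddings

# Assumes EDENAI_API_KEY has been exported as shown above.
assert "EDENAI_API_KEY" in os.environ, "export EDENAI_API_KEY first"

embeddings = EdenAiEmbeddings(provider="openai")
vector = embeddings.embed_query("my umbrella is broken")
print(len(vector))  # dimensionality depends on the underlying provider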
2,330
Sentence Transformers | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsSentence TransformersSentence TransformersSentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.SentenceTransformers is a python package that can generate text and image embeddings, originating from Sentence-BERTpip install sentence_transformers > /dev/null [notice] A new release of pip is available: 23.0.1 -> 23.1.1 [notice] To update, run: pip install --upgrade pipfrom langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddingsembeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")# Equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text, "This is not a test document."])PreviousSelf HostedNextSpaCyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.
SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package. ->: Sentence Transformers | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsSentence TransformersSentence TransformersSentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.SentenceTransformers is a python package that can generate text and image embeddings, originating from Sentence-BERTpip install sentence_transformers > /dev/null [notice] A new release of pip is available: 23.0.1 -> 23.1.1 [notice] To update, run: pip install --upgrade pipfrom langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddingsembeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")# Equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text, "This is not a test document."])PreviousSelf HostedNextSpaCyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
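Because these embeddings run locally, they pair naturally with a local vector store. A hedged sketch using FAISS (requires pip install faiss-cpu, an extra assumption on top of sentence_transformers):

from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import FAISS

embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

texts = [
    "This is a test document.",
    "This is not a test document.",
    "Sentence Transformers produce sentence-level embeddings.",
]
# Build an in-memory index over the texts using the local embeddings.
db = FAISS.from_texts(texts, embeddings)

# Retrieve the most similar stored text for a query.
for doc in db.similarity_search("a test document", k=1):
    print(doc.page_content)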
2,331
Cohere | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsCohereCohereLet's load the Cohere Embedding class.from langchain.embeddings import CohereEmbeddingsembeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousClarifaiNextDashScopeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Let's load the Cohere Embedding class.
Let's load the Cohere Embedding class. ->: Cohere | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsCohereCohereLet's load the Cohere Embedding class.from langchain.embeddings import CohereEmbeddingsembeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousClarifaiNextDashScopeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
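The snippet above assumes a cohere_api_key variable already exists. A hedged sketch of one way to supply it (the COHERE_API_KEY variable name is an assumption; getpass is only for interactive use):

import os
from getpass import getpass

from langchain.embeddings import CohereEmbeddings

# Read the key from the environment, falling back to an interactive prompt.
cohere_api_key = os.environ.get("COHERE_API_KEY") or getpass("Cohere API key: ")

embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
query_result = embeddings.embed_query("This is a test document.")
print(len(query_result))  # dimensionality of the vector returned by the Cohere model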
2,332
ERNIE Embedding-V1 | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsERNIE Embedding-V1ERNIE Embedding-V1ERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology, which converts text into a vector form represented by numerical values, and is used in text retrieval, information recommendation, knowledge mining and other scenarios.from langchain.embeddings import ErnieEmbeddingsembeddings = ErnieEmbeddings()query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])PreviousEmbaasNextFake EmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
ERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology, which converts text into a vector form represented by numerical values, and is used in text retrieval, information recommendation, knowledge mining and other scenarios.
ERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology, ->: ERNIE Embedding-V1 | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsERNIE Embedding-V1ERNIE Embedding-V1ERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology, which converts text into a vector form represented by numerical values, and is used in text retrieval, information recommendation, knowledge mining and other scenarios.from langchain.embeddings import ErnieEmbeddingsembeddings = ErnieEmbeddings()query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])PreviousEmbaasNextFake EmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
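A small extension of the snippet above, assuming the Ernie credentials are already available to ErnieEmbeddings (the page constructs it with no arguments); it embeds a batch and inspects the shapes:

from langchain.embeddings import ErnieEmbeddings

embeddings = ErnieEmbeddings()  # assumes credentials are configured as on this page

docs = ["foo", "bar", "ERNIE Embedding-V1 maps text to vectors."]
doc_vectors = embeddings.embed_documents(docs)

# One vector per input document; all vectors share the same dimensionality.
print(len(doc_vectors), len(doc_vectors[0]))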
2,333
OpenAI | 🦜️🔗 Langchain
Let's load the OpenAI Embedding class.
Let's load the OpenAI Embedding class. ->: OpenAI | 🦜️🔗 Langchain
2,334
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsOpenAIOpenAILet's load the OpenAI Embedding class.from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()text = "This is a test document."query_result = embeddings.embed_query(text)query_result[:5] [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]doc_result = embeddings.embed_documents([text])doc_result[0][:5] [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]Let's load the OpenAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see herefrom langchain.embeddings.openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings(model="text-search-ada-doc-001")text = "This is a test document."query_result = embeddings.embed_query(text)query_result[:5] [0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212, 0.005785414075152477]doc_result = embeddings.embed_documents([text])doc_result[0][:5] [0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212,
Let's load the OpenAI Embedding class.
Let's load the OpenAI Embedding class. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsOpenAIOpenAILet's load the OpenAI Embedding class.from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()text = "This is a test document."query_result = embeddings.embed_query(text)query_result[:5] [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]doc_result = embeddings.embed_documents([text])doc_result[0][:5] [-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]Let's load the OpenAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see herefrom langchain.embeddings.openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings(model="text-search-ada-doc-001")text = "This is a test document."query_result = embeddings.embed_query(text)query_result[:5] [0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212, 0.005785414075152477]doc_result = embeddings.embed_documents([text])doc_result[0][:5] [0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212,
2,335
0.03827273883655212, 0.005785414075152477]# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass throughos.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"PreviousOllamaNextSageMakerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Let's load the OpenAI Embedding class.
Let's load the OpenAI Embedding class. ->: 0.03827273883655212, 0.005785414075152477]# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass throughos.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"PreviousOllamaNextSageMakerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
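The proxy line above relies on the os module without importing it. A hedged, self-contained version of the same idea (the proxy URL is a placeholder and OPENAI_API_KEY is assumed to be set in the environment):

import os

from langchain.embeddings import OpenAIEmbeddings

# Only needed when outbound traffic must go through an explicit proxy.
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"

embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set in the environment
doc_vectors = embeddings.embed_documents(["This is a test document."])
print(len(doc_vectors[0]))  # e.g. 1536 for text-embedding-ada-002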
2,336
SpaCy | 🦜️🔗 Langchain
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. ->: SpaCy | 🦜️🔗 Langchain
2,337
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsSpaCyOn this pageSpaCyspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.Installation and Setup​#!pip install spacyImport the necessary classesfrom langchain.embeddings.spacy_embeddings import SpacyEmbeddingsExample​Initialize SpacyEmbeddings.This will load the Spacy model into memory.embedder = SpacyEmbeddings()Define some example texts . These could be any documents that you want to analyze - for example, news articles, social media posts, or product reviews.texts = [ "The quick brown fox jumps over the lazy dog.", "Pack my box with five dozen liquor jugs.", "How vexingly quick daft zebras jump!", "Bright vixens jump; dozy fowl quack.",]Generate and print embeddings for the texts . The SpacyEmbeddings class generates an embedding for each document, which is a numerical representation of the document's content. These embeddings can be used for various natural language processing tasks, such as document similarity comparison or text classification.embeddings = embedder.embed_documents(texts)for i, embedding in enumerate(embeddings): print(f"Embedding for document {i+1}: {embedding}")Generate and print an embedding for a single piece
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsSpaCyOn this pageSpaCyspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.Installation and Setup​#!pip install spacyImport the necessary classesfrom langchain.embeddings.spacy_embeddings import SpacyEmbeddingsExample​Initialize SpacyEmbeddings.This will load the Spacy model into memory.embedder = SpacyEmbeddings()Define some example texts . These could be any documents that you want to analyze - for example, news articles, social media posts, or product reviews.texts = [ "The quick brown fox jumps over the lazy dog.", "Pack my box with five dozen liquor jugs.", "How vexingly quick daft zebras jump!", "Bright vixens jump; dozy fowl quack.",]Generate and print embeddings for the texts . The SpacyEmbeddings class generates an embedding for each document, which is a numerical representation of the document's content. These embeddings can be used for various natural language processing tasks, such as document similarity comparison or text classification.embeddings = embedder.embed_documents(texts)for i, embedding in enumerate(embeddings): print(f"Embedding for document {i+1}: {embedding}")Generate and print an embedding for a single piece
2,338
and print an embedding for a single piece of text. You can also generate an embedding for a single piece of text, such as a search query. This can be useful for tasks like information retrieval, where you want to find documents that are similar to a given query.query = "Quick foxes and lazy dogs."query_embedding = embedder.embed_query(query)print(f"Embedding for query: {query_embedding}")PreviousSentence TransformersNextTensorflowHubInstallation and SetupExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. ->: and print an embedding for a single piece of text. You can also generate an embedding for a single piece of text, such as a search query. This can be useful for tasks like information retrieval, where you want to find documents that are similar to a given query.query = "Quick foxes and lazy dogs."query_embedding = embedder.embed_query(query)print(f"Embedding for query: {query_embedding}")PreviousSentence TransformersNextTensorflowHubInstallation and SetupExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
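Pulling the pieces of this page together, a hedged end-to-end sketch that ranks the example sentences against the query by cosine similarity (it assumes the default spaCy model used by SpacyEmbeddings is installed; numpy is only used for the arithmetic):

import numpy as np
from langchain.embeddings.spacy_embeddings import SpacyEmbeddings

embedder = SpacyEmbeddings()

texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Pack my box with five dozen liquor jugs.",
    "How vexingly quick daft zebras jump!",
]
query = "Quick foxes and lazy dogs."

doc_vectors = [np.array(v) for v in embedder.embed_documents(texts)]
query_vector = np.array(embedder.embed_query(query))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Sort the documents by similarity to the query, best match first.
for score, text in sorted(((cosine(query_vector, v), t) for v, t in zip(doc_vectors, texts)), reverse=True):
    print(f"{score:.3f}  {text}")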
2,339
Baidu Qianfan | 🦜️🔗 Langchain
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also a range of AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also a range of AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications. ->: Baidu Qianfan | 🦜️🔗 Langchain
2,340
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsBaidu QianfanOn this pageBaidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.Basically, those model are split into the following type:EmbeddingChatCompletionIn this notebook, we will introduce how to use langchain with Qianfan mainly in Embedding corresponding
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also a range of AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsAleph AlphaAwaDBAzureOpenAIBaidu QianfanBedrockBGE on Hugging FaceClarifaiCohereDashScopeDeepInfraEDEN AIElasticsearchEmbaasERNIE Embedding-V1Fake EmbeddingsGoogle Vertex AI PaLMGPT4AllGradientHugging FaceInstructEmbeddingsJinaLlama-cppLLMRailsLocalAIMiniMaxModelScopeMosaicMLNLP CloudOllamaOpenAISageMakerSelf HostedSentence TransformersSpaCyTensorflowHubXorbits inference (Xinference)Vector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsText embedding modelsBaidu QianfanOn this pageBaidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.Basically, those model are split into the following type:EmbeddingChatCompletionIn this notebook, we will introduce how to use langchain with Qianfan mainly in Embedding corresponding
2,341
to the package langchain/embeddings in langchain:API Initialization‚ÄãTo use the LLM services based on Baidu Qianfan, you have to initialize these parameters:You could either choose to init the AK,SK in environment variables or init params:export QIANFAN_AK=XXXexport QIANFAN_SK=XXX"""For basic init and call"""from langchain.embeddings import QianfanEmbeddingsEndpoint import osos.environ["QIANFAN_AK"] = "your_ak"os.environ["QIANFAN_SK"] = "your_sk"embed = QianfanEmbeddingsEndpoint( # qianfan_ak='xxx', # qianfan_sk='xxx')res = embed.embed_documents(["hi", "world"])async def aioEmbed(): res = await embed.aembed_query("qianfan") print(res[:8])await aioEmbed()import asyncioasync def aioEmbedDocs(): res = await embed.aembed_documents(["hi", "world"]) for r in res: print("", r[:8])await aioEmbedDocs() [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: trying to refresh access_token [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: sucessfully refresh access_token [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: requesting llm api endpoint: /embeddings/embedding-v1 [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: async requesting llm api endpoint: /embeddings/embedding-v1 [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: async requesting llm api endpoint: /embeddings/embedding-v1 [-0.03313107788562775, 0.052325375378131866, 0.04951248690485954, 0.0077608139254152775, -0.05907672271132469, -0.010798933915793896, 0.03741293027997017, 0.013969100080430508] [0.0427522286772728, -0.030367236584424973, -0.14847028255462646, 0.055074431002140045, -0.04177454113960266, -0.059512972831726074, -0.043774791061878204, 0.0028191760648041964] [0.03803155943751335, -0.013231384567916393, 0.0032379645854234695, 0.015074018388986588, -0.006529552862048149, -0.13813287019729614, 0.03297128155827522, 0.044519297778606415]Use different models in Qianfan‚ÄãIn the case you want to deploy your own
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also a range of AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily. ->: to the package langchain/embeddings in langchain:API Initialization‚ÄãTo use the LLM services based on Baidu Qianfan, you have to initialize these parameters:You could either choose to init the AK,SK in environment variables or init params:export QIANFAN_AK=XXXexport QIANFAN_SK=XXX"""For basic init and call"""from langchain.embeddings import QianfanEmbeddingsEndpoint import osos.environ["QIANFAN_AK"] = "your_ak"os.environ["QIANFAN_SK"] = "your_sk"embed = QianfanEmbeddingsEndpoint( # qianfan_ak='xxx', # qianfan_sk='xxx')res = embed.embed_documents(["hi", "world"])async def aioEmbed(): res = await embed.aembed_query("qianfan") print(res[:8])await aioEmbed()import asyncioasync def aioEmbedDocs(): res = await embed.aembed_documents(["hi", "world"]) for r in res: print("", r[:8])await aioEmbedDocs() [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: trying to refresh access_token [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: sucessfully refresh access_token [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: requesting llm api endpoint: /embeddings/embedding-v1 [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: async requesting llm api endpoint: /embeddings/embedding-v1 [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: async requesting llm api endpoint: /embeddings/embedding-v1 [-0.03313107788562775, 0.052325375378131866, 0.04951248690485954, 0.0077608139254152775, -0.05907672271132469, -0.010798933915793896, 0.03741293027997017, 0.013969100080430508] [0.0427522286772728, -0.030367236584424973, -0.14847028255462646, 0.055074431002140045, -0.04177454113960266, -0.059512972831726074, -0.043774791061878204, 0.0028191760648041964] [0.03803155943751335, -0.013231384567916393, 0.0032379645854234695, 0.015074018388986588, -0.006529552862048149, -0.13813287019729614, 0.03297128155827522, 0.044519297778606415]Use different models in Qianfan‚ÄãIn the case you want to deploy your own
2,342
Qianfan​In the case you want to deploy your own model based on Ernie Bot or third-party open sources model, you could follow these steps:(Optional, if the model are included in the default models, skip it)Deploy your model in Qianfan Console, get your own customized deploy endpoint.Set up the field called endpoint in the initialization:embed = QianfanEmbeddingsEndpoint( model="bge_large_zh", endpoint="bge_large_zh" )res = embed.embed_documents(["hi", "world"])for r in res : print(r[:8]) [INFO] [09-15 20:01:40] logging.py:55 [t:140292313159488]: requesting llm api endpoint: /embeddings/bge_large_zh [-0.0001582596160005778, -0.025089964270591736, -0.03997539356350899, 0.013156415894627571, 0.000135212714667432, 0.012428865768015385, 0.016216561198234558, -0.04126659780740738] [0.0019113451708108187, -0.008625439368188381, -0.0531032420694828, -0.0018436014652252197, -0.01818147301673889, 0.010310115292668343, -0.008867680095136166, -0.021067561581730843]PreviousAzureOpenAINextBedrockAPI InitializationUse different models in QianfanCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also a range of AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily. ->: Qianfan​In the case you want to deploy your own model based on Ernie Bot or third-party open sources model, you could follow these steps:(Optional, if the model are included in the default models, skip it)Deploy your model in Qianfan Console, get your own customized deploy endpoint.Set up the field called endpoint in the initialization:embed = QianfanEmbeddingsEndpoint( model="bge_large_zh", endpoint="bge_large_zh" )res = embed.embed_documents(["hi", "world"])for r in res : print(r[:8]) [INFO] [09-15 20:01:40] logging.py:55 [t:140292313159488]: requesting llm api endpoint: /embeddings/bge_large_zh [-0.0001582596160005778, -0.025089964270591736, -0.03997539356350899, 0.013156415894627571, 0.000135212714667432, 0.012428865768015385, 0.016216561198234558, -0.04126659780740738] [0.0019113451708108187, -0.008625439368188381, -0.0531032420694828, -0.0018436014652252197, -0.01818147301673889, 0.010310115292668343, -0.008867680095136166, -0.021067561581730843]PreviousAzureOpenAINextBedrockAPI InitializationUse different models in QianfanCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
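The notebook above focuses on the async helpers and a custom bge_large_zh endpoint. For completeness, a hedged sketch of the plain synchronous call against the default embedding-v1 endpoint (the AK/SK values are placeholders):

import os

from langchain.embeddings import QianfanEmbeddingsEndpoint

os.environ["QIANFAN_AK"] = "your_ak"  # placeholder
os.environ["QIANFAN_SK"] = "your_sk"  # placeholder

embed = QianfanEmbeddingsEndpoint()

# Synchronous single-query embedding; the async variants are shown above.
vector = embed.embed_query("hello qianfan")
print(len(vector), vector[:4])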
2,343
Amazon API Gateway | 🦜️🔗 Langchain
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. ->: Amazon API Gateway | 🦜️🔗 Langchain
2,344
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsAmazon API GatewayOn this pageAmazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsAmazon API GatewayOn this pageAmazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway
2,345
of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.LLM‚Äãfrom langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"llm = AmazonAPIGateway(api_url=api_url)# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartparameters = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}prompt = "what day comes after Friday?"llm.model_kwargs = parametersllm(prompt) 'what day comes after Friday?\nSaturday'Agent‚Äãfrom langchain.agents import load_toolsfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypeparameters = { "max_new_tokens": 50, "num_return_sequences": 1, "top_k": 250, "top_p": 0.25, "do_sample": False, "temperature": 0.1,}llm.model_kwargs = parameters# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.tools = load_tools(["python_repl", "llm-math"], llm=llm)# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)# Now let's test it out!agent.run( """Write a Python script that prints "Hello, world!"""") > Entering new chain... I need to use the print function to output the string "Hello, world!" Action: Python_REPL Action Input: `print("Hello, world!")` Observation: Hello, world! Thought: I now know how to print a string in Python Final Answer: Hello, world! > Finished chain. 'Hello, world!'result = agent.run( """What is 2.3 ^ 4.5?""")result.split("\n")[0] > Entering new chain... I need to use the calculator to find the answer Action:
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. ->: of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.LLM‚Äãfrom langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"llm = AmazonAPIGateway(api_url=api_url)# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartparameters = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}prompt = "what day comes after Friday?"llm.model_kwargs = parametersllm(prompt) 'what day comes after Friday?\nSaturday'Agent‚Äãfrom langchain.agents import load_toolsfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypeparameters = { "max_new_tokens": 50, "num_return_sequences": 1, "top_k": 250, "top_p": 0.25, "do_sample": False, "temperature": 0.1,}llm.model_kwargs = parameters# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.tools = load_tools(["python_repl", "llm-math"], llm=llm)# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)# Now let's test it out!agent.run( """Write a Python script that prints "Hello, world!"""") > Entering new chain... I need to use the print function to output the string "Hello, world!" Action: Python_REPL Action Input: `print("Hello, world!")` Observation: Hello, world! Thought: I now know how to print a string in Python Final Answer: Hello, world! > Finished chain. 'Hello, world!'result = agent.run( """What is 2.3 ^ 4.5?""")result.split("\n")[0] > Entering new chain... I need to use the calculator to find the answer Action:
2,346
use the calculator to find the answer Action: Calculator Action Input: 2.3 ^ 4.5 Observation: Answer: 42.43998894277659 Thought: I now know the final answer Final Answer: 42.43998894277659 Question: What is the square root of 144? Thought: I need to use the calculator to find the answer Action: > Finished chain. '42.43998894277659'PreviousAleph AlphaNextAnyscaleLLMAgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. ->: use the calculator to find the answer Action: Calculator Action Input: 2.3 ^ 4.5 Observation: Answer: 42.43998894277659 Thought: I now know the final answer Final Answer: 42.43998894277659 Question: What is the square root of 144? Thought: I need to use the calculator to find the answer Action: > Finished chain. '42.43998894277659'PreviousAleph AlphaNextAnyscaleLLMAgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
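The page above calls the API Gateway-backed LLM directly and through an agent; the same object also drops into ordinary chains. Below is a minimal sketch, not part of the original page, that wraps it in an LLMChain — the api_url is the same placeholder used above and the prompt template is purely illustrative.

from langchain.chains import LLMChain
from langchain.llms import AmazonAPIGateway
from langchain.prompts import PromptTemplate

# Placeholder endpoint, as in the page above.
llm = AmazonAPIGateway(
    api_url="https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"
)
llm.model_kwargs = {"max_new_tokens": 100, "do_sample": False, "temperature": 0.2}

# Illustrative single-variable prompt; any template works the same way.
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer briefly: {question}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(question="what day comes after Friday?"))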
2,347
AWS DynamoDB | 🦜️🔗 Langchain
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. ->: AWS DynamoDB | 🦜️🔗 Langchain
2,348
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryAWS DynamoDBOn this pageAWS DynamoDBAmazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.This notebook goes over how to use DynamoDB to store chat message history.First make sure you have correctly configured the AWS CLI. Then make sure you have installed boto3.pip install boto3Next, create the DynamoDB Table where we will be storing messages:import boto3# Get the service resource.dynamodb = boto3.resource("dynamodb")# Create the DynamoDB table.table = dynamodb.create_table( TableName="SessionTable", KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}], AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}], BillingMode="PAY_PER_REQUEST",)# Wait until the table exists.table.meta.client.get_waiter("table_exists").wait(TableName="SessionTable")# Print out some data about the table.print(table.item_count) 0DynamoDBChatMessageHistory​from langchain.memory.chat_message_histories import DynamoDBChatMessageHistoryhistory = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False), HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={},
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryAWS DynamoDBOn this pageAWS DynamoDBAmazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.This notebook goes over how to use DynamoDB to store chat message history.First make sure you have correctly configured the AWS CLI. Then make sure you have installed boto3.pip install boto3Next, create the DynamoDB Table where we will be storing messages:import boto3# Get the service resource.dynamodb = boto3.resource("dynamodb")# Create the DynamoDB table.table = dynamodb.create_table( TableName="SessionTable", KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}], AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}], BillingMode="PAY_PER_REQUEST",)# Wait until the table exists.table.meta.client.get_waiter("table_exists").wait(TableName="SessionTable")# Print out some data about the table.print(table.item_count) 0DynamoDBChatMessageHistory​from langchain.memory.chat_message_histories import DynamoDBChatMessageHistoryhistory = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False), HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={},
2,349
up?', additional_kwargs={}, example=False)]DynamoDBChatMessageHistory with Custom Endpoint URL​Sometimes it is useful to specify the URL to the AWS endpoint to connect to. For instance, when you are running locally against Localstack. For those cases you can specify the URL via the endpoint_url parameter in the constructor.from langchain.memory.chat_message_histories import DynamoDBChatMessageHistoryhistory = DynamoDBChatMessageHistory( table_name="SessionTable", session_id="0", endpoint_url="http://localhost.localstack.cloud:4566",)DynamoDBChatMessageHistory With Different Keys Composite Keys​The default key for DynamoDBChatMessageHistory is {"SessionId": self.session_id}, but you can modify this to match your table design.Primary Key Name​You may modify the primary key by passing in a primary_key_name value in the constructor, resulting in the following:
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. ->: up?', additional_kwargs={}, example=False)]DynamoDBChatMessageHistory with Custom Endpoint URL​Sometimes it is useful to specify the URL to the AWS endpoint to connect to. For instance, when you are running locally against Localstack. For those cases you can specify the URL via the endpoint_url parameter in the constructor.from langchain.memory.chat_message_histories import DynamoDBChatMessageHistoryhistory = DynamoDBChatMessageHistory( table_name="SessionTable", session_id="0", endpoint_url="http://localhost.localstack.cloud:4566",)DynamoDBChatMessageHistory With Different Keys Composite Keys​The default key for DynamoDBChatMessageHistory is {"SessionId": self.session_id}, but you can modify this to match your table design.Primary Key Name​You may modify the primary key by passing in a primary_key_name value in the constructor, resulting in the following:
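One detail the notebook leaves implicit: when everything runs against Localstack, the table itself also has to be created on that endpoint. A minimal sketch, assuming a Localstack instance is listening at the URL shown above:

import boto3

# Point boto3 at the same local endpoint used by DynamoDBChatMessageHistory above,
# so the SessionTable is created in Localstack rather than in real AWS.
dynamodb = boto3.resource(
    "dynamodb", endpoint_url="http://localhost.localstack.cloud:4566"
)
table = dynamodb.create_table(
    TableName="SessionTable",
    KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.meta.client.get_waiter("table_exists").wait(TableName="SessionTable")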
2,350
{self.primary_key_name: self.session_id}Composite Keys​When using an existing DynamoDB table, you may need to modify the key structure from the default of {"SessionId": self.session_id} to something including a Sort Key. To do this you may use the key parameter.Passing a value for key will override the primary_key parameter, and the resulting key structure will be the passed value.from langchain.memory.chat_message_histories import DynamoDBChatMessageHistorycomposite_table = dynamodb.create_table( TableName="CompositeTable", KeySchema=[{"AttributeName": "PK", "KeyType": "HASH"}, {"AttributeName": "SK", "KeyType": "RANGE"}], AttributeDefinitions=[{"AttributeName": "PK", "AttributeType": "S"}, {"AttributeName": "SK", "AttributeType": "S"}], BillingMode="PAY_PER_REQUEST",)# Wait until the table exists.composite_table.meta.client.get_waiter("table_exists").wait(TableName="CompositeTable")# Print out some data about the table.print(composite_table.item_count)my_key = { "PK": "session_id::0", "SK": "langchain_history",}composite_key_history = DynamoDBChatMessageHistory( table_name="CompositeTable", session_id="0", endpoint_url="http://localhost.localstack.cloud:4566", key=my_key,)composite_key_history.add_user_message("hello, composite dynamodb table!")composite_key_history.messages 0 [HumanMessage(content='hello, composite dynamodb table!', additional_kwargs={}, example=False)]Agent with DynamoDB Memory​from langchain.agents import Toolfrom langchain.memory import ConversationBufferMemoryfrom langchain.chat_models import ChatOpenAIfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypefrom langchain.utilities import PythonREPLfrom getpass import getpassmessage_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="1")memory = ConversationBufferMemory( memory_key="chat_history", chat_memory=message_history, return_messages=True)python_repl = PythonREPL()# You can create the tool to pass to an agenttools = [
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. ->: {self.primary_key_name: self.session_id}Composite Keys​When using an existing DynamoDB table, you may need to modify the key structure from the default of {"SessionId": self.session_id} to something including a Sort Key. To do this you may use the key parameter.Passing a value for key will override the primary_key parameter, and the resulting key structure will be the passed value.from langchain.memory.chat_message_histories import DynamoDBChatMessageHistorycomposite_table = dynamodb.create_table( TableName="CompositeTable", KeySchema=[{"AttributeName": "PK", "KeyType": "HASH"}, {"AttributeName": "SK", "KeyType": "RANGE"}], AttributeDefinitions=[{"AttributeName": "PK", "AttributeType": "S"}, {"AttributeName": "SK", "AttributeType": "S"}], BillingMode="PAY_PER_REQUEST",)# Wait until the table exists.composite_table.meta.client.get_waiter("table_exists").wait(TableName="CompositeTable")# Print out some data about the table.print(composite_table.item_count)my_key = { "PK": "session_id::0", "SK": "langchain_history",}composite_key_history = DynamoDBChatMessageHistory( table_name="CompositeTable", session_id="0", endpoint_url="http://localhost.localstack.cloud:4566", key=my_key,)composite_key_history.add_user_message("hello, composite dynamodb table!")composite_key_history.messages 0 [HumanMessage(content='hello, composite dynamodb table!', additional_kwargs={}, example=False)]Agent with DynamoDB Memory​from langchain.agents import Toolfrom langchain.memory import ConversationBufferMemoryfrom langchain.chat_models import ChatOpenAIfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypefrom langchain.utilities import PythonREPLfrom getpass import getpassmessage_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="1")memory = ConversationBufferMemory( memory_key="chat_history", chat_memory=message_history, return_messages=True)python_repl = PythonREPL()# You can create the tool to pass to an agenttools = [
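The notebook demonstrates the key parameter but never shows primary_key_name on its own. Here is a hedged sketch of that option; the table and attribute names below are hypothetical.

from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# Hypothetical table "UserSessionTable" whose hash key is called "UserId";
# primary_key_name makes the history class write its key under that name.
history = DynamoDBChatMessageHistory(
    table_name="UserSessionTable",
    session_id="[email protected]",
    primary_key_name="UserId",
)
history.add_user_message("hi!")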
2,351
create the tool to pass to an agenttools = [ Tool( name="python_repl", description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.", func=python_repl.run, )]llm = ChatOpenAI(temperature=0)agent_chain = initialize_agent( tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory,) --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[17], line 1 ----> 1 llm = ChatOpenAI(temperature=0) 2 agent_chain = initialize_agent( 3 tools, 4 llm, (...) 7 memory=memory, 8 ) File ~/Documents/projects/langchain/libs/langchain/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs) 73 def __init__(self, **kwargs: Any) -> None: ---> 74 super().__init__(**kwargs) 75 self._lc_kwargs = kwargs File ~/Documents/projects/langchain/.venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for ChatOpenAI __root__ Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)agent_chain.run(input="Hello!")agent_chain.run(input="Who owns Twitter?")agent_chain.run(input="My name is Bob.")agent_chain.run(input="Who am I?")PreviousMemoryNextCassandraDynamoDBChatMessageHistoryDynamoDBChatMessageHistory with Custom Endpoint URLDynamoDBChatMessageHistory With Different Keys Composite KeysPrimary Key NameComposite KeysAgent with DynamoDB MemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Amazon AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. ->: create the tool to pass to an agenttools = [ Tool( name="python_repl", description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.", func=python_repl.run, )]llm = ChatOpenAI(temperature=0)agent_chain = initialize_agent( tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory,) --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[17], line 1 ----> 1 llm = ChatOpenAI(temperature=0) 2 agent_chain = initialize_agent( 3 tools, 4 llm, (...) 7 memory=memory, 8 ) File ~/Documents/projects/langchain/libs/langchain/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs) 73 def __init__(self, **kwargs: Any) -> None: ---> 74 super().__init__(**kwargs) 75 self._lc_kwargs = kwargs File ~/Documents/projects/langchain/.venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for ChatOpenAI __root__ Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)agent_chain.run(input="Hello!")agent_chain.run(input="Who owns Twitter?")agent_chain.run(input="My name is Bob.")agent_chain.run(input="Who am I?")PreviousMemoryNextCassandraDynamoDBChatMessageHistoryDynamoDBChatMessageHistory with Custom Endpoint URLDynamoDBChatMessageHistory With Different Keys Composite KeysPrimary Key NameComposite KeysAgent with DynamoDB MemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
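The ValidationError in the transcript above is only a missing OpenAI credential; one way to supply it before re-running the cell (the environment variable name comes straight from the error message) is sketched below.

import os
from getpass import getpass

from langchain.chat_models import ChatOpenAI

# Provide the key the error message asks for, then construct the model again.
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")
llm = ChatOpenAI(temperature=0)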
2,352
Remembrall | 🦜️🔗 Langchain
This page covers how to use the Remembrall ecosystem within LangChain.
This page covers how to use the Remembrall ecosystem within LangChain. ->: Remembrall | 🦜️🔗 Langchain
2,353
Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMot√∂rheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryRemembrallOn this pageRemembrallThis page covers how to use the Remembrall ecosystem within LangChain.What is Remembrall?‚ÄãRemembrall gives your language model long-term memory, retrieval augmented generation, and complete observability with just a few lines of code.It works as a light-weight proxy on top of your OpenAI calls and simply augments the context of the chat calls at runtime with relevant facts that have been collected.Setup‚ÄãTo get started, sign in with Github on the Remembrall platform and copy your API key from the settings page.Any request that you send with the modified openai_api_base (see below) and Remembrall API key will automatically be tracked in the Remembrall dashboard. You never have to share your OpenAI key with our platform and this information is never stored by the Remembrall systems.Enable Long Term Memory‚ÄãIn addition to setting the openai_api_base and Remembrall API key via x-gp-api-key, you should specify a UID to maintain memory for. This will usually be a unique user identifier (like email).from langchain.chat_models import ChatOpenAIchat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-remember": "[email protected]", } })chat_model.predict("My favorite color is
This page covers how to use the Remembrall ecosystem within LangChain.
This page covers how to use the Remembrall ecosystem within LangChain. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryRemembrallOn this pageRemembrallThis page covers how to use the Remembrall ecosystem within LangChain.What is Remembrall?​Remembrall gives your language model long-term memory, retrieval augmented generation, and complete observability with just a few lines of code.It works as a light-weight proxy on top of your OpenAI calls and simply augments the context of the chat calls at runtime with relevant facts that have been collected.Setup​To get started, sign in with Github on the Remembrall platform and copy your API key from the settings page.Any request that you send with the modified openai_api_base (see below) and Remembrall API key will automatically be tracked in the Remembrall dashboard. You never have to share your OpenAI key with our platform and this information is never stored by the Remembrall systems.Enable Long Term Memory​In addition to setting the openai_api_base and Remembrall API key via x-gp-api-key, you should specify a UID to maintain memory for. This will usually be a unique user identifier (like email).from langchain.chat_models import ChatOpenAIchat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-remember": "[email protected]", } })chat_model.predict("My favorite color is
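Hard-coding the Remembrall key as in the snippet above is fine for a quick demo; a small variant, not from the original page, reads it from an environment variable instead — the variable name REMEMBRALL_API_KEY is an arbitrary choice.

import os

from langchain.chat_models import ChatOpenAI

# Same proxy setup as above, with the key pulled from the environment
# instead of being written into source control.
chat_model = ChatOpenAI(
    openai_api_base="https://remembrall.dev/api/openai/v1",
    model_kwargs={
        "headers": {
            "x-gp-api-key": os.environ["REMEMBRALL_API_KEY"],
            "x-gp-remember": "[email protected]",
        }
    },
)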
2,354
})chat_model.predict("My favorite color is blue.")import time; time.sleep(5) # wait for system to save fact via auto saveprint(chat_model.predict("What is my favorite color?"))Enable Retrieval Augmented Generation​First, create a document context in the Remembrall dashboard. Paste in the document texts or upload documents as PDFs to be processed. Save the Document Context ID and insert it as shown below.from langchain.chat_models import ChatOpenAIchat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-context": "document-context-id-goes-here", } })print(chat_model.predict("This is a question that can be answered with my document."))PreviousRedisNextRocksetWhat is Remembrall?SetupEnable Long Term MemoryEnable Retrieval Augmented GenerationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This page covers how to use the Remembrall ecosystem within LangChain.
This page covers how to use the Remembrall ecosystem within LangChain. ->: })chat_model.predict("My favorite color is blue.")import time; time.sleep(5) # wait for system to save fact via auto saveprint(chat_model.predict("What is my favorite color?"))Enable Retrieval Augmented Generation​First, create a document context in the Remembrall dashboard. Paste in the document texts or upload documents as PDFs to be processed. Save the Document Context ID and insert it as shown below.from langchain.chat_models import ChatOpenAIchat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-context": "document-context-id-goes-here", } })print(chat_model.predict("This is a question that can be answered with my document."))PreviousRedisNextRocksetWhat is Remembrall?SetupEnable Long Term MemoryEnable Retrieval Augmented GenerationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
2,355
SQLite | 🦜️🔗 Langchain
SQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.
SQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems. ->: SQLite | 🦜️🔗 Langchain
2,356
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorySQLiteSQLiteSQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.In this walkthrough we'll create a simple conversation chain which uses ConversationEntityMemory backed by a SQLiteEntityStore.#!pip install sqlite3from langchain.chains import ConversationChainfrom langchain.llms import OpenAIfrom langchain.memory import ConversationEntityMemoryfrom langchain.memory.entity import SQLiteEntityStorefrom langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATEentity_store = SQLiteEntityStore()llm = OpenAI(temperature=0)memory = ConversationEntityMemory(llm=llm, entity_store=entity_store)conversation = ConversationChain( llm=llm, prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE, memory=memory, verbose=True,)Notice the usage of SQLiteEntityStore as the parameter to entity_store on the memory property.conversation.run("Deven & Sam are working on a hackathon project") > Entering new ConversationChain chain... Prompt after formatting: You are an assistant to a human, powered by a large language model trained by OpenAI. You are designed to be able to assist with a wide range of tasks, from answering simple questions
SQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.
SQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorySQLiteSQLiteSQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.In this walkthrough we'll create a simple conversation chain which uses ConversationEntityMemory backed by a SQLiteEntityStore.#!pip install sqlite3from langchain.chains import ConversationChainfrom langchain.llms import OpenAIfrom langchain.memory import ConversationEntityMemoryfrom langchain.memory.entity import SQLiteEntityStorefrom langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATEentity_store = SQLiteEntityStore()llm = OpenAI(temperature=0)memory = ConversationEntityMemory(llm=llm, entity_store=entity_store)conversation = ConversationChain( llm=llm, prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE, memory=memory, verbose=True,)Notice the usage of SQLiteEntityStore as the parameter to entity_store on the memory property.conversation.run("Deven & Sam are working on a hackathon project") > Entering new ConversationChain chain... Prompt after formatting: You are an assistant to a human, powered by a large language model trained by OpenAI. You are designed to be able to assist with a wide range of tasks, from answering simple questions
2,357
range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist. Context: {'Deven': 'Deven is working on a hackathon project with Sam.', 'Sam': 'Sam is working on a hackathon project with Deven.'} Current conversation: Last line: Human: Deven & Sam are working on a hackathon project You: > Finished chain. ' That sounds like a great project! What kind of project are they working on?'conversation.memory.entity_store.get("Deven") 'Deven is working on a hackathon project with Sam.'conversation.memory.entity_store.get("Sam") 'Sam is working on a hackathon project with Deven.'PreviousSQL (SQLAlchemy)NextStreamlitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
SQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.
SQLite is a database engine written in the C programming language. It is not a standalone app; rather, it is a library that software developers embed in their apps. As such, it belongs to the family of embedded databases. It is the most widely deployed database engine, as it is used by several of the top web browsers, operating systems, mobile phones, and other embedded systems. ->: range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist. Context: {'Deven': 'Deven is working on a hackathon project with Sam.', 'Sam': 'Sam is working on a hackathon project with Deven.'} Current conversation: Last line: Human: Deven & Sam are working on a hackathon project You: > Finished chain. ' That sounds like a great project! What kind of project are they working on?'conversation.memory.entity_store.get("Deven") 'Deven is working on a hackathon project with Sam.'conversation.memory.entity_store.get("Sam") 'Sam is working on a hackathon project with Deven.'PreviousSQL (SQLAlchemy)NextStreamlitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
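A short sketch, not in the original walkthrough, of why the SQLite backing matters: the recorded facts outlive the Python objects, so a freshly constructed store should still return them — assuming both stores use the default constructor arguments and therefore point at the same local database file.

from langchain.memory.entity import SQLiteEntityStore

# New store instance with default arguments; it reads the same SQLite database
# the conversation above wrote to, so the stored facts remain visible.
fresh_store = SQLiteEntityStore()
print(fresh_store.get("Deven"))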
2,358
Motörhead | 🦜️🔗 Langchain
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications. ->: Motörhead | 🦜️🔗 Langchain
2,359
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryMotörheadOn this pageMotörheadMotörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.Setup​See instructions at Motörhead for running the server locally.from langchain.memory.motorhead_memory import MotorheadMemoryExample​from langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.prompts import PromptTemplatetemplate = """You are a chatbot having a conversation with a human.{chat_history}Human: {human_input}AI:"""prompt = PromptTemplate( input_variables=["chat_history", "human_input"], template=template)memory = MotorheadMemory( session_id="testing-1", url="http://localhost:8080", memory_key="chat_history")await memory.init()# loads previous state from Motörhead 🤘llm_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=memory,)llm_chain.run("hi im bob") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: > Finished chain. ' Hi Bob, nice to meet you! How are you doing today?'llm_chain.run("whats my name?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: > Finished chain. '
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryMotörheadOn this pageMotörheadMotörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.Setup​See instructions at Motörhead for running the server locally.from langchain.memory.motorhead_memory import MotorheadMemoryExample​from langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.prompts import PromptTemplatetemplate = """You are a chatbot having a conversation with a human.{chat_history}Human: {human_input}AI:"""prompt = PromptTemplate( input_variables=["chat_history", "human_input"], template=template)memory = MotorheadMemory( session_id="testing-1", url="http://localhost:8080", memory_key="chat_history")await memory.init()# loads previous state from Motörhead 🤘llm_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=memory,)llm_chain.run("hi im bob") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: > Finished chain. ' Hi Bob, nice to meet you! How are you doing today?'llm_chain.run("whats my name?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: > Finished chain. '
2,360
my name? AI: > Finished chain. ' You said your name is Bob. Is that correct?'llm_chain.run("whats for dinner?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: You said your name is Bob. Is that correct? Human: whats for dinner? AI: > Finished chain. " I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?"PreviousMongodDBNextPostgresSetupExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications. ->: my name? AI: > Finished chain. ' You said your name is Bob. Is that correct?'llm_chain.run("whats for dinner?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: You said your name is Bob. Is that correct? Human: whats for dinner? AI: > Finished chain. " I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?"PreviousMongodDBNextPostgresSetupExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
2,361
Redis | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryRedisRedisRedis (Remote Dictionary Server) is an open-source in-memory storage, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Because it holds all data in memory and because of its design, Redis offers low-latency reads and writes, making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database, and one of the most popular databases overall.This notebook goes over how to use Redis to store chat message history.from langchain.memory import RedisChatMessageHistoryhistory = RedisChatMessageHistory("foo")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messages [AIMessage(content='whats up?', additional_kwargs={}), HumanMessage(content='hi!', additional_kwargs={})]PreviousPostgresNextRemembrallCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Redis (Remote Dictionary Server) is an open-source in-memory storage, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Because it holds all data in memory and because of its design, Redis offers low-latency reads and writes, making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database, and one of the most popular databases overall.
Redis (Remote Dictionary Server) is an open-source in-memory storage, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Because it holds all data in memory and because of its design, Redis offers low-latency reads and writes, making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database, and one of the most popular databases overall. ->: Redis | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryRedisRedisRedis (Remote Dictionary Server) is an open-source in-memory storage, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Because it holds all data in memory and because of its design, Redis offers low-latency reads and writes, making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database, and one of the most popular databases overall.This notebook goes over how to use Redis to store chat message history.from langchain.memory import RedisChatMessageHistoryhistory = RedisChatMessageHistory("foo")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messages [AIMessage(content='whats up?', additional_kwargs={}), HumanMessage(content='hi!', additional_kwargs={})]PreviousPostgresNextRemembrallCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
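The snippet above relies on the default local connection. Below is a hedged sketch that points the history at an explicit Redis URL (assumed to be the url parameter; adjust host and port to your deployment) and exposes it to a chain through ConversationBufferMemory, mirroring the DynamoDB example earlier in this collection.

from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory

# Explicit connection string instead of the default localhost Redis.
history = RedisChatMessageHistory("my-session", url="redis://localhost:6379/0")
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=history, return_messages=True
)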
2,362
Cassandra | 🦜️🔗 Langchain
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data. ->: Cassandra | 🦜️🔗 Langchain
2,363
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryCassandraOn this pageCassandraApache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.Cassandra is a good choice for storing chat message history because it is easy to scale and can handle a large number of writes.This notebook goes over how to use Cassandra to store chat message history.Setting up​To run this notebook you need either a running Cassandra cluster or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.pip install "cassio>=0.1.0"Set up the database connection parameters and secrets​import osimport getpassdatabase_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()keyspace_name = input("\nKeyspace name? ")if database_mode == "A": ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ') # ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")elif database_mode == "C": CASSANDRA_CONTACT_POINTS = input( "Contact points? (comma-separated, empty for localhost) " ).strip()Depending on whether local or cloud-based Astra DB, create the corresponding database connection "Session" object.from cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProviderif database_mode == "C": if CASSANDRA_CONTACT_POINTS: cluster = Cluster( [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",")
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryCassandraOn this pageCassandraApache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.Cassandra is a good choice for storing chat message history because it is easy to scale and can handle a large number of writes.This notebook goes over how to use Cassandra to store chat message history.Setting up​To run this notebook you need either a running Cassandra cluster or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.pip install "cassio>=0.1.0"Set up the database connection parameters and secrets​import osimport getpassdatabase_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()keyspace_name = input("\nKeyspace name? ")if database_mode == "A": ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ') # ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")elif database_mode == "C": CASSANDRA_CONTACT_POINTS = input( "Contact points? (comma-separated, empty for localhost) " ).strip()Depending on whether local or cloud-based Astra DB, create the corresponding database connection "Session" object.from cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProviderif database_mode == "C": if CASSANDRA_CONTACT_POINTS: cluster = Cluster( [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",")
2,364
for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()] ) else: cluster = Cluster() session = cluster.connect()elif database_mode == "A": ASTRA_DB_CLIENT_ID = "token" cluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider( ASTRA_DB_CLIENT_ID, ASTRA_DB_APPLICATION_TOKEN, ), ) session = cluster.connect()else: raise NotImplementedErrorExample​from langchain.memory import CassandraChatMessageHistorymessage_history = CassandraChatMessageHistory( session_id="test-session", session=session, keyspace=keyspace_name,)message_history.add_user_message("hi!")message_history.add_ai_message("whats up?")message_history.messagesPreviousAWS DynamoDBNextElasticsearch Chat Message HistorySetting upSet up the database connection parameters and secretsExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data. ->: for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()] ) else: cluster = Cluster() session = cluster.connect()elif database_mode == "A": ASTRA_DB_CLIENT_ID = "token" cluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider( ASTRA_DB_CLIENT_ID, ASTRA_DB_APPLICATION_TOKEN, ), ) session = cluster.connect()else: raise NotImplementedErrorExample​from langchain.memory import CassandraChatMessageHistorymessage_history = CassandraChatMessageHistory( session_id="test-session", session=session, keyspace=keyspace_name,)message_history.add_user_message("hi!")message_history.add_ai_message("whats up?")message_history.messagesPreviousAWS DynamoDBNextElasticsearch Chat Message HistorySetting upSet up the database connection parameters and secretsExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
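To use the Cassandra-backed history inside a chain or agent, it can be wrapped in ConversationBufferMemory just like the DynamoDB example earlier in this collection; a minimal sketch reusing the message_history object created above:

from langchain.memory import ConversationBufferMemory

# Wrap the Cassandra-backed history so a chain or agent can read and write it.
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=message_history, return_messages=True
)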
2,365
SingleStoreDB | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorySingleStoreDBSingleStoreDBThis notebook goes over how to use SingleStoreDB to store chat message history.from langchain.memory import SingleStoreDBChatMessageHistoryhistory = SingleStoreDBChatMessageHistory( session_id="foo", host="root:pass@localhost:3306/db")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messagesPreviousRocksetNextSQL (SQLAlchemy)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over how to use SingleStoreDB to store chat message history.
This notebook goes over how to use SingleStoreDB to store chat message history. ->: SingleStoreDB | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorySingleStoreDBSingleStoreDBThis notebook goes over how to use SingleStoreDB to store chat message history.from langchain.memory import SingleStoreDBChatMessageHistoryhistory = SingleStoreDBChatMessageHistory( session_id="foo", host="root:pass@localhost:3306/db")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messagesPreviousRocksetNextSQL (SQLAlchemy)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
2,366
Momento Cache | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryMomento CacheMomento CacheMomento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero capability, and blazing-fast performance. This notebook goes over how to use Momento Cache to store chat message history using the MomentoChatMessageHistory class. See the Momento docs for more detail on how to get set up with Momento.Note that, by default we will create a cache if one with the given name doesn't already exist.You'll need to get a Momento API key to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter api_key to MomentoChatMessageHistory.from_client_params, or can just be set as an environment variable MOMENTO_API_KEY.from datetime import timedeltafrom langchain.memory import MomentoChatMessageHistorysession_id = "foo"cache_name = "langchain"ttl = timedelta(days=1)history = MomentoChatMessageHistory.from_client_params( session_id, cache_name, ttl,)history.add_user_message("hi!")history.add_ai_message("whats up?")history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False)]PreviousElasticsearch Chat Message HistoryNextMongodDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Momento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero
Momento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero ->: Momento Cache | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryMomento CacheMomento CacheMomento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero capability, and blazing-fast performance. This notebook goes over how to use Momento Cache to store chat message history using the MomentoChatMessageHistory class. See the Momento docs for more detail on how to get set up with Momento.Note that, by default we will create a cache if one with the given name doesn't already exist.You'll need to get a Momento API key to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter api_key to MomentoChatMessageHistory.from_client_params, or can just be set as an environment variable MOMENTO_API_KEY.from datetime import timedeltafrom langchain.memory import MomentoChatMessageHistorysession_id = "foo"cache_name = "langchain"ttl = timedelta(days=1)history = MomentoChatMessageHistory.from_client_params( session_id, cache_name, ttl,)history.add_user_message("hi!")history.add_ai_message("whats up?")history.messages [HumanMessage(content='hi!', additional_kwargs={}, example=False), AIMessage(content='whats up?', additional_kwargs={}, example=False)]PreviousElasticsearch Chat Message HistoryNextMongodDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
2,367
SQL (SQLAlchemy) | 🦜️🔗 Langchain
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables. ->: SQL (SQLAlchemy) | 🦜️🔗 Langchain
2,368
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorySQL (SQLAlchemy)On this pageSQL (SQLAlchemy)Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.SQLAlchemy is an open-source SQL toolkit and object-relational mapper (ORM) for the Python programming language released under the MIT License.This notebook goes over a SQLChatMessageHistory class that allows to store chat history in any database supported by SQLAlchemy.Please note that to use it with databases other than SQLite, you will need to install the corresponding database driver.Basic Usage​To use the storage you need to provide only 2 things:Session Id - a unique identifier of the session, like user name, email, chat id etc.Connection string - a string that specifies the database connection. It will be passed to SQLAlchemy create_engine function.Install SQLAlchemy python package.pip install SQLAlchemyfrom langchain.memory.chat_message_histories import SQLChatMessageHistorychat_message_history = SQLChatMessageHistory( session_id='test_session', connection_string='sqlite:///sqlite.db')chat_message_history.add_user_message('Hello')chat_message_history.add_ai_message('Hi')chat_message_history.messages [HumanMessage(content='Hello',
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorySQL (SQLAlchemy)On this pageSQL (SQLAlchemy)Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.SQLAlchemy is an open-source SQL toolkit and object-relational mapper (ORM) for the Python programming language released under the MIT License.This notebook goes over a SQLChatMessageHistory class that allows you to store chat history in any database supported by SQLAlchemy.Please note that to use it with databases other than SQLite, you will need to install the corresponding database driver.Basic Usage​To use the storage you need to provide only 2 things:Session Id - a unique identifier of the session, like user name, email, chat id etc.Connection string - a string that specifies the database connection. It will be passed to the SQLAlchemy create_engine function.Install the SQLAlchemy Python package.pip install SQLAlchemyfrom langchain.memory.chat_message_histories import SQLChatMessageHistorychat_message_history = SQLChatMessageHistory(    session_id='test_session',    connection_string='sqlite:///sqlite.db')chat_message_history.add_user_message('Hello')chat_message_history.add_ai_message('Hi')chat_message_history.messages    [HumanMessage(content='Hello',
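As a sketch of the driver note above, the same history can point at a non-SQLite database simply by changing the SQLAlchemy connection string; the PostgreSQL host, credentials, and database name below are placeholders, and the driver would first need installing (for example `pip install psycopg2-binary`).

```python
from langchain.memory.chat_message_histories import SQLChatMessageHistory

# Sketch: identical usage, but backed by a hypothetical PostgreSQL database.
chat_message_history = SQLChatMessageHistory(
    session_id="test_session",
    connection_string="postgresql+psycopg2://user:password@localhost:5432/chat_db",
)
chat_message_history.add_user_message("Hello")
print(chat_message_history.messages)
```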
2,369
[HumanMessage(content='Hello', additional_kwargs={}, example=False), AIMessage(content='Hi', additional_kwargs={}, example=False)]Custom Storage Format​By default, only the session id and message dictionary are stored in the table.However, sometimes you might want to store some additional information, like message date, author, language, etc.To do that, you can create a custom message converter by implementing the BaseMessageConverter interface.from datetime import datetimefrom langchain.schema import BaseMessage, HumanMessage, AIMessage, SystemMessagefrom typing import Anyfrom sqlalchemy import Column, Integer, Text, DateTimefrom sqlalchemy.orm import declarative_basefrom langchain.memory.chat_message_histories.sql import BaseMessageConverterBase = declarative_base()class CustomMessage(Base):    __tablename__ = 'custom_message_store'    id = Column(Integer, primary_key=True)    session_id = Column(Text)    type = Column(Text)    content = Column(Text)    created_at = Column(DateTime)    author_email = Column(Text)class CustomMessageConverter(BaseMessageConverter):    def __init__(self, author_email: str):        self.author_email = author_email    def from_sql_model(self, sql_message: Any) -> BaseMessage:        if sql_message.type == 'human':            return HumanMessage(                content=sql_message.content,            )        elif sql_message.type == 'ai':            return AIMessage(                content=sql_message.content,            )        elif sql_message.type == 'system':            return SystemMessage(                content=sql_message.content,            )        else:            raise ValueError(f'Unknown message type: {sql_message.type}')    def to_sql_model(self, message: BaseMessage, session_id: str) -> Any:        now = datetime.now()        return CustomMessage(            session_id=session_id,            type=message.type,            content=message.content,            created_at=now,
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables. ->: [HumanMessage(content='Hello', additional_kwargs={}, example=False), AIMessage(content='Hi', additional_kwargs={}, example=False)]Custom Storage Format​By default, only the session id and message dictionary are stored in the table.However, sometimes you might want to store some additional information, like message date, author, language, etc.To do that, you can create a custom message converter by implementing the BaseMessageConverter interface.from datetime import datetimefrom langchain.schema import BaseMessage, HumanMessage, AIMessage, SystemMessagefrom typing import Anyfrom sqlalchemy import Column, Integer, Text, DateTimefrom sqlalchemy.orm import declarative_basefrom langchain.memory.chat_message_histories.sql import BaseMessageConverterBase = declarative_base()class CustomMessage(Base):    __tablename__ = 'custom_message_store'    id = Column(Integer, primary_key=True)    session_id = Column(Text)    type = Column(Text)    content = Column(Text)    created_at = Column(DateTime)    author_email = Column(Text)class CustomMessageConverter(BaseMessageConverter):    def __init__(self, author_email: str):        self.author_email = author_email    def from_sql_model(self, sql_message: Any) -> BaseMessage:        if sql_message.type == 'human':            return HumanMessage(                content=sql_message.content,            )        elif sql_message.type == 'ai':            return AIMessage(                content=sql_message.content,            )        elif sql_message.type == 'system':            return SystemMessage(                content=sql_message.content,            )        else:            raise ValueError(f'Unknown message type: {sql_message.type}')    def to_sql_model(self, message: BaseMessage, session_id: str) -> Any:        now = datetime.now()        return CustomMessage(            session_id=session_id,            type=message.type,            content=message.content,            created_at=now,
2,370
created_at=now,            author_email=self.author_email        )    def get_sql_model_class(self) -> Any:        return CustomMessagechat_message_history = SQLChatMessageHistory(    session_id='test_session',    connection_string='sqlite:///sqlite.db',    custom_message_converter=CustomMessageConverter(        author_email='[email protected]'    ))chat_message_history.add_user_message('Hello')chat_message_history.add_ai_message('Hi')chat_message_history.messages    [HumanMessage(content='Hello', additional_kwargs={}, example=False),     AIMessage(content='Hi', additional_kwargs={}, example=False)]You also might want to change the name of the session_id column. In this case you'll need to specify the session_id_field_name parameter.PreviousSingleStoreDBNextSQLiteBasic UsageCustom Storage FormatCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.
Structured Query Language (SQL) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables. ->: created_at=now,            author_email=self.author_email        )    def get_sql_model_class(self) -> Any:        return CustomMessagechat_message_history = SQLChatMessageHistory(    session_id='test_session',    connection_string='sqlite:///sqlite.db',    custom_message_converter=CustomMessageConverter(        author_email='[email protected]'    ))chat_message_history.add_user_message('Hello')chat_message_history.add_ai_message('Hi')chat_message_history.messages    [HumanMessage(content='Hello', additional_kwargs={}, example=False),     AIMessage(content='Hi', additional_kwargs={}, example=False)]You also might want to change the name of the session_id column. In this case you'll need to specify the session_id_field_name parameter.PreviousSingleStoreDBNextSQLiteBasic UsageCustom Storage FormatCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
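The session_id_field_name parameter mentioned above is only hinted at on the page, so here is a hedged sketch of how it might be used; it assumes the custom SQLAlchemy model is adjusted to declare the column as conversation_id instead of session_id, and reuses the CustomMessageConverter defined above.

```python
# Sketch only: point SQLChatMessageHistory at a differently named session column.
# Assumes the custom model's column is renamed to `conversation_id` to match.
chat_message_history = SQLChatMessageHistory(
    session_id="test_session",
    connection_string="sqlite:///sqlite.db",
    custom_message_converter=CustomMessageConverter(author_email="[email protected]"),
    session_id_field_name="conversation_id",
)
```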
2,371
Upstash Redis Chat Message History | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryUpstash Redis Chat Message HistoryUpstash Redis Chat Message HistoryThis notebook goes over how to use Upstash Redis to store chat message history.from langchain.memory.chat_message_histories.upstash_redis import UpstashRedisChatMessageHistoryURL = "<UPSTASH_REDIS_REST_URL>"TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"history = UpstashRedisChatMessageHistory(url=URL, token=TOKEN, ttl=10, session_id="my-test-session")history.add_user_message("hello llm!")history.add_ai_message("hello user!")history.messagesPreviousStreamlitNextXataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over how to use Upstash Redis to store chat message history.
This notebook goes over how to use Upstash Redis to store chat message history. ->: Upstash Redis Chat Message History | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryUpstash Redis Chat Message HistoryUpstash Redis Chat Message HistoryThis notebook goes over how to use Upstash Redis to store chat message history.from langchain.memory.chat_message_histories.upstash_redis import UpstashRedisChatMessageHistoryURL = "<UPSTASH_REDIS_REST_URL>"TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"history = UpstashRedisChatMessageHistory(url=URL, token=TOKEN, ttl=10, session_id="my-test-session")history.add_user_message("hello llm!")history.add_ai_message("hello user!")history.messagesPreviousStreamlitNextXataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
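The Upstash-backed history above can be plugged into standard LangChain memory in the same way as the other backends in this section; a minimal sketch, reusing the `history` object and placeholder URL/TOKEN from the snippet above:

```python
from langchain.memory import ConversationBufferMemory

# Sketch: use the Upstash Redis history as the persistence layer for a
# ConversationBufferMemory, so chains/agents read and write the same messages.
memory = ConversationBufferMemory(
    memory_key="history",
    chat_memory=history,
    return_messages=True,
)
print(memory.load_memory_variables({}))
```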
2,372
Elasticsearch Chat Message History | 🦜️🔗 Langchain
Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.
Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. ->: Elasticsearch Chat Message History | 🦜️🔗 Langchain
2,373
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryElasticsearch Chat Message HistoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorymemoryElasticsearch Chat Message HistoryOn this pageElasticsearch Chat Message HistoryElasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.This notebook shows how to use chat message history functionality with Elasticsearch.Set up Elasticsearch​There are two main ways to set up an Elasticsearch instance:Elastic Cloud. Elastic Cloud is a managed Elasticsearch service. Sign up for a free trial.Local Elasticsearch installation. Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the Elasticsearch Docker documentation for more information.Install dependencies​%pip install elasticsearch langchainInitialize Elasticsearch client and chat message history​import osfrom langchain.memory import ElasticsearchChatMessageHistoryes_url = os.environ.get("ES_URL", "http://localhost:9200")# If using Elastic Cloud:# es_cloud_id = os.environ.get("ES_CLOUD_ID")# Note: see Authentication section for various authentication methodshistory = ElasticsearchChatMessageHistory( es_url=es_url, index="test-history", session_id="test-session")Use the chat message history​history.add_user_message("hi!")history.add_ai_message("whats up?") indexing message content='hi!' additional_kwargs={} example=False indexing message content='whats up?'
Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.
Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryElasticsearch Chat Message HistoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemorymemoryElasticsearch Chat Message HistoryOn this pageElasticsearch Chat Message HistoryElasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.This notebook shows how to use chat message history functionality with Elasticsearch.Set up Elasticsearch​There are two main ways to set up an Elasticsearch instance:Elastic Cloud. Elastic Cloud is a managed Elasticsearch service. Sign up for a free trial.Local Elasticsearch installation. Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the Elasticsearch Docker documentation for more information.Install dependencies​%pip install elasticsearch langchainInitialize Elasticsearch client and chat message history​import osfrom langchain.memory import ElasticsearchChatMessageHistoryes_url = os.environ.get("ES_URL", "http://localhost:9200")# If using Elastic Cloud:# es_cloud_id = os.environ.get("ES_CLOUD_ID")# Note: see Authentication section for various authentication methodshistory = ElasticsearchChatMessageHistory( es_url=es_url, index="test-history", session_id="test-session")Use the chat message history​history.add_user_message("hi!")history.add_ai_message("whats up?") indexing message content='hi!' additional_kwargs={} example=False indexing message content='whats up?'
2,374
indexing message content='whats up?' additional_kwargs={} example=FalseAuthenticationUsername/password​es_username = os.environ.get("ES_USERNAME", "elastic")es_password = os.environ.get("ES_PASSWORD", "changeme")history = ElasticsearchChatMessageHistory( es_url=es_url, es_user=es_username, es_password=es_password, index="test-history", session_id="test-session")How to obtain a password for the default "elastic" user​To obtain your Elastic Cloud password for the default "elastic" user:Log in to the Elastic Cloud console at https://cloud.elastic.coGo to "Security" > "Users"Locate the "elastic" user and click "Edit"Click "Reset password"Follow the prompts to reset the passwordAPI key​es_api_key = os.environ.get("ES_API_KEY")history = ElasticsearchChatMessageHistory( es_api_key=es_api_key, index="test-history", session_id="test-session")How to obtain an API key​To obtain an API key:Log in to the Elastic Cloud console at https://cloud.elastic.coOpen Kibana and go to Stack Management > API KeysClick "Create API key"Enter a name for the API key and click "Create"PreviousCassandraNextMomento CacheSet up ElasticsearchInstall dependenciesInitialize Elasticsearch client and chat message historyUse the chat message historyUsername/passwordHow to obtain a password for the default "elastic" userAPI keyHow to obtain an API keyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.
Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. ->: indexing message content='whats up?' additional_kwargs={} example=FalseAuthenticationUsername/password​es_username = os.environ.get("ES_USERNAME", "elastic")es_password = os.environ.get("ES_PASSWORD", "changeme")history = ElasticsearchChatMessageHistory( es_url=es_url, es_user=es_username, es_password=es_password, index="test-history", session_id="test-session")How to obtain a password for the default "elastic" user​To obtain your Elastic Cloud password for the default "elastic" user:Log in to the Elastic Cloud console at https://cloud.elastic.coGo to "Security" > "Users"Locate the "elastic" user and click "Edit"Click "Reset password"Follow the prompts to reset the passwordAPI key​es_api_key = os.environ.get("ES_API_KEY")history = ElasticsearchChatMessageHistory( es_api_key=es_api_key, index="test-history", session_id="test-session")How to obtain an API key​To obtain an API key:Log in to the Elastic Cloud console at https://cloud.elastic.coOpen Kibana and go to Stack Management > API KeysClick "Create API key"Enter a name for the API key and click "Create"PreviousCassandraNextMomento CacheSet up ElasticsearchInstall dependenciesInitialize Elasticsearch client and chat message historyUse the chat message historyUsername/passwordHow to obtain a password for the default "elastic" userAPI keyHow to obtain an API keyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
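The commented-out lines in the Elasticsearch snippet above hint at an Elastic Cloud setup; a hedged sketch of that variant follows. It assumes ES_CLOUD_ID and ES_API_KEY are set in the environment and that your installed version accepts an es_cloud_id argument in place of es_url (check the API reference before relying on it).

```python
import os

from langchain.memory import ElasticsearchChatMessageHistory

# Sketch: Elastic Cloud instead of a local URL; both env vars are assumptions.
history = ElasticsearchChatMessageHistory(
    es_cloud_id=os.environ["ES_CLOUD_ID"],
    es_api_key=os.environ["ES_API_KEY"],
    index="test-history",
    session_id="test-session",
)
history.add_user_message("hi!")
```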
2,375
Streamlit | 🦜️🔗 Langchain
Streamlit is an open-source Python library that makes it easy to create and share beautiful,
Streamlit is an open-source Python library that makes it easy to create and share beautiful, ->: Streamlit | 🦜️🔗 Langchain
2,376
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryStreamlitStreamlitStreamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science.This notebook goes over how to store and use chat message history in a Streamlit app. StreamlitChatMessageHistory will store messages in Streamlit session state at the specified key=. The default key is "langchain_messages".Note, StreamlitChatMessageHistory only works when run in a Streamlit app.You may also be interested in StreamlitCallbackHandler for LangChain.For more on Streamlit check out their getting started documentation.You can see the full app example running here, and more examples in
Streamlit is an open-source Python library that makes it easy to create and share beautiful,
Streamlit is an open-source Python library that makes it easy to create and share beautiful, ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryAWS DynamoDBCassandramemoryMomento CacheMongodDBMotörheadPostgresRedisRemembrallRocksetSingleStoreDBSQL (SQLAlchemy)SQLiteStreamlitUpstash Redis Chat Message HistoryXataZepCallbacksChat loadersComponentsMemoryStreamlitStreamlitStreamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science.This notebook goes over how to store and use chat message history in a Streamlit app. StreamlitChatMessageHistory will store messages in Streamlit session state at the specified key=. The default key is "langchain_messages".Note, StreamlitChatMessageHistory only works when run in a Streamlit app.You may also be interested in StreamlitCallbackHandler for LangChain.For more on Streamlit check out their getting started documentation.You can see the full app example running here, and more examples in
2,377
github.com/langchain-ai/streamlit-agent.from langchain.memory import StreamlitChatMessageHistoryhistory = StreamlitChatMessageHistory(key="chat_messages")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messagesYou can integrate StreamlitChatMessageHistory into ConversationBufferMemory and chains or agents as usual. The history will be persisted across re-runs of the Streamlit app within a given user session. A given StreamlitChatMessageHistory will NOT be persisted or shared across user sessions.from langchain.memory import ConversationBufferMemoryfrom langchain.memory.chat_message_histories import StreamlitChatMessageHistory# Optionally, specify your own session_state key for storing messagesmsgs = StreamlitChatMessageHistory(key="special_app_key")memory = ConversationBufferMemory(memory_key="history", chat_memory=msgs)if len(msgs.messages) == 0: msgs.add_ai_message("How can I help you?")from langchain.chains import LLMChainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatetemplate = """You are an AI chatbot having a conversation with a human.{history}Human: {human_input}AI: """prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)# Add the memory to an LLMChain as usualllm_chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)Conversational Streamlit apps will often re-draw each previous chat message on every re-run. This is easy to do by iterating through StreamlitChatMessageHistory.messages:import streamlit as stfor msg in msgs.messages: st.chat_message(msg.type).write(msg.content)if prompt := st.chat_input(): st.chat_message("human").write(prompt) # As usual, new messages are added to StreamlitChatMessageHistory when the Chain is called. response = llm_chain.run(prompt) st.chat_message("ai").write(response)View the final app.PreviousSQLiteNextUpstash Redis Chat Message HistoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023
Streamlit is an open-source Python library that makes it easy to create and share beautiful,
Streamlit is an open-source Python library that makes it easy to create and share beautiful, ->: github.com/langchain-ai/streamlit-agent.from langchain.memory import StreamlitChatMessageHistoryhistory = StreamlitChatMessageHistory(key="chat_messages")history.add_user_message("hi!")history.add_ai_message("whats up?")history.messagesYou can integrate StreamlitChatMessageHistory into ConversationBufferMemory and chains or agents as usual. The history will be persisted across re-runs of the Streamlit app within a given user session. A given StreamlitChatMessageHistory will NOT be persisted or shared across user sessions.from langchain.memory import ConversationBufferMemoryfrom langchain.memory.chat_message_histories import StreamlitChatMessageHistory# Optionally, specify your own session_state key for storing messagesmsgs = StreamlitChatMessageHistory(key="special_app_key")memory = ConversationBufferMemory(memory_key="history", chat_memory=msgs)if len(msgs.messages) == 0: msgs.add_ai_message("How can I help you?")from langchain.chains import LLMChainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatetemplate = """You are an AI chatbot having a conversation with a human.{history}Human: {human_input}AI: """prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)# Add the memory to an LLMChain as usualllm_chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)Conversational Streamlit apps will often re-draw each previous chat message on every re-run. This is easy to do by iterating through StreamlitChatMessageHistory.messages:import streamlit as stfor msg in msgs.messages: st.chat_message(msg.type).write(msg.content)if prompt := st.chat_input(): st.chat_message("human").write(prompt) # As usual, new messages are added to StreamlitChatMessageHistory when the Chain is called. response = llm_chain.run(prompt) st.chat_message("ai").write(response)View the final app.PreviousSQLiteNextUpstash Redis Chat Message HistoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023
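Because StreamlitChatMessageHistory lives in session state, a common companion to the app above is a control that resets the conversation; a small sketch, assuming the same `special_app_key` key used earlier (the sidebar button is illustrative, not part of the original example):

```python
import streamlit as st
from langchain.memory import StreamlitChatMessageHistory

msgs = StreamlitChatMessageHistory(key="special_app_key")

# Sketch: wipe the stored messages for the current user session and re-seed
# the greeting; clear() removes all messages held under this key.
if st.sidebar.button("Reset chat history"):
    msgs.clear()
    msgs.add_ai_message("How can I help you?")
```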
2,378
© 2023 LangChain, Inc.
Streamlit is an open-source Python library that makes it easy to create and share beautiful,
Streamlit is an open-source Python library that makes it easy to create and share beautiful, ->: © 2023 LangChain, Inc.
2,379
Streamlit | 🦜️🔗 Langchain
Streamlit is a faster way to build and share data apps.
Streamlit is a faster way to build and share data apps. ->: Streamlit | 🦜️🔗 Langchain
2,380
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksArgillaConfidentContextInfinoLabel StudioLLMonitorPromptLayerSageMaker TrackingStreamlitTrubricsChat loadersComponentsCallbacksStreamlitOn this pageStreamlitStreamlit is a faster way to build and share data apps. Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required. See more examples at streamlit.io/generative-ai.In this guide we will demonstrate how to use StreamlitCallbackHandler to display the thoughts and actions of an agent in an interactive Streamlit app. Try it out with the running app below using the MRKL agent:Installation and Setup​pip install langchain streamlitYou can run streamlit hello to load a sample app and validate your install succeeded. See full instructions in Streamlit's Getting started documentation.Display thoughts and actions​To create a StreamlitCallbackHandler, you just need to provide a parent container to render the output.from langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stst_callback = StreamlitCallbackHandler(st.container())Additional keyword arguments to customize the display behavior are described in the API reference.Scenario 1: Using an Agent with Tools​The primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the StreamlitCallbackHandler to agent.run() in order to visualize the
Streamlit is a faster way to build and share data apps.
Streamlit is a faster way to build and share data apps. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksArgillaConfidentContextInfinoLabel StudioLLMonitorPromptLayerSageMaker TrackingStreamlitTrubricsChat loadersComponentsCallbacksStreamlitOn this pageStreamlitStreamlit is a faster way to build and share data apps. Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required. See more examples at streamlit.io/generative-ai.In this guide we will demonstrate how to use StreamlitCallbackHandler to display the thoughts and actions of an agent in an interactive Streamlit app. Try it out with the running app below using the MRKL agent:Installation and Setup​pip install langchain streamlitYou can run streamlit hello to load a sample app and validate your install succeeded. See full instructions in Streamlit's Getting started documentation.Display thoughts and actions​To create a StreamlitCallbackHandler, you just need to provide a parent container to render the output.from langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stst_callback = StreamlitCallbackHandler(st.container())Additional keyword arguments to customize the display behavior are described in the API reference.Scenario 1: Using an Agent with Tools​The primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the StreamlitCallbackHandler to agent.run() in order to visualize the
2,381
thoughts and actions live in your app.from langchain.llms import OpenAIfrom langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stllm = OpenAI(temperature=0, streaming=True)tools = load_tools(["ddg-search"])agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)if prompt := st.chat_input(): st.chat_message("user").write(prompt) with st.chat_message("assistant"): st_callback = StreamlitCallbackHandler(st.container()) response = agent.run(prompt, callbacks=[st_callback]) st.write(response)Note: You will need to set OPENAI_API_KEY for the above app code to run successfully. The easiest way to do this is via Streamlit secrets.toml, or any other local ENV management tool.Additional scenarios​Currently StreamlitCallbackHandler is geared towards use with a LangChain Agent Executor. Support for additional agent types, use directly with Chains, etc will be added in the future.You may also be interested in using StreamlitChatMessageHistory for LangChain.PreviousSageMaker TrackingNextTrubricsInstallation and SetupDisplay thoughts and actionsScenario 1: Using an Agent with ToolsAdditional scenariosCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Streamlit is a faster way to build and share data apps.
Streamlit is a faster way to build and share data apps. ->: thoughts and actions live in your app.from langchain.llms import OpenAIfrom langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stllm = OpenAI(temperature=0, streaming=True)tools = load_tools(["ddg-search"])agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)if prompt := st.chat_input(): st.chat_message("user").write(prompt) with st.chat_message("assistant"): st_callback = StreamlitCallbackHandler(st.container()) response = agent.run(prompt, callbacks=[st_callback]) st.write(response)Note: You will need to set OPENAI_API_KEY for the above app code to run successfully. The easiest way to do this is via Streamlit secrets.toml, or any other local ENV management tool.Additional scenarios​Currently StreamlitCallbackHandler is geared towards use with a LangChain Agent Executor. Support for additional agent types, use directly with Chains, etc will be added in the future.You may also be interested in using StreamlitChatMessageHistory for LangChain.PreviousSageMaker TrackingNextTrubricsInstallation and SetupDisplay thoughts and actionsScenario 1: Using an Agent with ToolsAdditional scenariosCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
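The page notes that StreamlitCallbackHandler accepts additional keyword arguments to customize how thoughts are rendered; the specific names below (max_thought_containers, expand_new_thoughts, collapse_completed_thoughts) are examples to verify against the API reference for your installed version, not guaranteed parameters.

```python
import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler

# Sketch: tune how many thought expanders stay visible and whether new ones
# start expanded / collapse once finished (parameter names are assumptions).
st_callback = StreamlitCallbackHandler(
    st.container(),
    max_thought_containers=4,
    expand_new_thoughts=True,
    collapse_completed_thoughts=True,
)
```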
2,382
Replicating MRKL | 🦜️🔗 Langchain
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: Replicating MRKL | 🦜️🔗 Langchain
2,383
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toReplicating MRKLOn this pageReplicating MRKLThis walkthrough demonstrates how to replicate the MRKL system using agents.This uses the example Chinook database.
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toReplicating MRKLOn this pageReplicating MRKLThis walkthrough demonstrates how to replicate the MRKL system using agents.This uses the example Chinook database.
2,384
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.from langchain.chains import LLMMathChainfrom langchain.llms import OpenAIfrom langchain.utilities import SerpAPIWrapperfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypellm = OpenAI(temperature=0)search = SerpAPIWrapper()llm_math_chain = LLMMathChain(llm=llm, verbose=True)db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events. You should ask targeted questions" ), Tool( name="Calculator", func=llm_math_chain.run, description="useful for when you need to answer questions about math" ), Tool( name="FooBar DB", func=db_chain.run, description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context" )]mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?") > Entering new AgentExecutor chain... I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power. Action: Search Action Input: "Who is Leo DiCaprio's girlfriend?" Observation: DiCaprio met actor Camila Morrone in December 2017, when she was 20 and he was 43. They were spotted at Coachella and went on multiple vacations together. Some reports suggested that DiCaprio was ready to ask Morrone to marry him. The couple made their red carpet debut at the 2020 Academy Awards. Thought:
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.from langchain.chains import LLMMathChainfrom langchain.llms import OpenAIfrom langchain.utilities import SerpAPIWrapperfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypellm = OpenAI(temperature=0)search = SerpAPIWrapper()llm_math_chain = LLMMathChain(llm=llm, verbose=True)db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events. You should ask targeted questions" ), Tool( name="Calculator", func=llm_math_chain.run, description="useful for when you need to answer questions about math" ), Tool( name="FooBar DB", func=db_chain.run, description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context" )]mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?") > Entering new AgentExecutor chain... I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power. Action: Search Action Input: "Who is Leo DiCaprio's girlfriend?" Observation: DiCaprio met actor Camila Morrone in December 2017, when she was 20 and he was 43. They were spotted at Coachella and went on multiple vacations together. Some reports suggested that DiCaprio was ready to ask Morrone to marry him. The couple made their red carpet debut at the 2020 Academy Awards. Thought:
2,385
debut at the 2020 Academy Awards. Thought: I need to calculate Camila Morrone's age raised to the 0.43 power. Action: Calculator Action Input: 21^0.43 > Entering new LLMMathChain chain... 21^0.43 ```text 21**0.43 ``` ...numexpr.evaluate("21**0.43")... Answer: 3.7030049853137306 > Finished chain. Observation: Answer: 3.7030049853137306 Thought: I now know the final answer. Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306. > Finished chain. "Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306."mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?") > Entering new AgentExecutor chain... I need to find out the artist's full name and then search the FooBar database for their albums. Action: Search Action Input: "The Storm Before the Calm" artist Observation: The Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis Morissette, released June 17, 2022, via Epiphany Music and Thirty Tigers, as well as by RCA Records in Europe. Thought: I now need to search the FooBar database for Alanis Morissette's albums. Action: FooBar DB Action Input: What albums by Alanis Morissette are in the FooBar database? > Entering new SQLDatabaseChain chain... What albums by Alanis Morissette are in the FooBar database? SQLQuery: /Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: debut at the 2020 Academy Awards. Thought: I need to calculate Camila Morrone's age raised to the 0.43 power. Action: Calculator Action Input: 21^0.43 > Entering new LLMMathChain chain... 21^0.43 ```text 21**0.43 ``` ...numexpr.evaluate("21**0.43")... Answer: 3.7030049853137306 > Finished chain. Observation: Answer: 3.7030049853137306 Thought: I now know the final answer. Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306. > Finished chain. "Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306."mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?") > Entering new AgentExecutor chain... I need to find out the artist's full name and then search the FooBar database for their albums. Action: Search Action Input: "The Storm Before the Calm" artist Observation: The Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis Morissette, released June 17, 2022, via Epiphany Music and Thirty Tigers, as well as by RCA Records in Europe. Thought: I now need to search the FooBar database for Alanis Morissette's albums. Action: FooBar DB Action Input: What albums by Alanis Morissette are in the FooBar database? > Entering new SQLDatabaseChain chain... What albums by Alanis Morissette are in the FooBar database? SQLQuery: /Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on
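The Calculator trace above shows LLMMathChain delegating the arithmetic to numexpr; as a quick sanity check (outside the agent), the same expression reproduces the observed answer of roughly 3.703:

```python
import numexpr

# Evaluate the exact expression the LLMMathChain trace used.
print(numexpr.evaluate("21**0.43"))  # ~3.7030049853137306
```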
2,386
storing Decimal numbers as strings or integers on this platform for lossless storage.   sample_rows = connection.execute(command)   SELECT "Title" FROM "Album" INNER JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId" WHERE "Name" = 'Alanis Morissette' LIMIT 5; SQLResult: [('Jagged Little Pill',)] Answer: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill. > Finished chain.  Observation: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill. Thought: I now know the final answer. Final Answer: The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill. > Finished chain. "The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill."With a chat model​from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)llm1 = OpenAI(temperature=0)search = SerpAPIWrapper()llm_math_chain = LLMMathChain(llm=llm1, verbose=True)db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)tools = [    Tool(        name="Search",        func=search.run,        description="useful for when you need to answer questions about current events. You should ask targeted questions"    ),    Tool(        name="Calculator",        func=llm_math_chain.run,        description="useful for when you need to answer questions about math"    ),    Tool(        name="FooBar DB",        func=db_chain.run,        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context"    )]mrkl = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?") >
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: storing Decimal numbers as strings or integers on this platform for lossless storage.   sample_rows = connection.execute(command)   SELECT "Title" FROM "Album" INNER JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId" WHERE "Name" = 'Alanis Morissette' LIMIT 5; SQLResult: [('Jagged Little Pill',)] Answer: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill. > Finished chain.  Observation: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill. Thought: I now know the final answer. Final Answer: The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill. > Finished chain. "The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill."With a chat model​from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)llm1 = OpenAI(temperature=0)search = SerpAPIWrapper()llm_math_chain = LLMMathChain(llm=llm1, verbose=True)db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)tools = [    Tool(        name="Search",        func=search.run,        description="useful for when you need to answer questions about current events. You should ask targeted questions"    ),    Tool(        name="Calculator",        func=llm_math_chain.run,        description="useful for when you need to answer questions about math"    ),    Tool(        name="FooBar DB",        func=db_chain.run,        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context"    )]mrkl = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?") >
2,387
her current age raised to the 0.43 power?") > Entering new AgentExecutor chain... Thought: The first question requires a search, while the second question requires a calculator. Action: ``` { "action": "Search", "action_input": "Leo DiCaprio girlfriend" } ``` Observation: Gigi Hadid: 2022 Leo and Gigi were first linked back in September 2022, when a source told Us Weekly that Leo had his “sights set" on her (alarming way to put it, but okay). Thought:For the second question, I need to calculate the age raised to the 0.43 power. I will use the calculator tool. Action: ``` { "action": "Calculator", "action_input": "((2022-1995)^0.43)" } ``` > Entering new LLMMathChain chain... ((2022-1995)^0.43) ```text (2022-1995)**0.43 ``` ...numexpr.evaluate("(2022-1995)**0.43")... Answer: 4.125593352125936 > Finished chain. Observation: Answer: 4.125593352125936 Thought:I now know the final answer. Final Answer: Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13. > Finished chain. "Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13."mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?") > Entering new AgentExecutor chain... Question: What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database? Thought: I should use the Search tool to find the answer to the first part of the question and then use the FooBar DB tool to find the answer to the second part. Action: ``` { "action": "Search", "action_input": "Who recently released an album called 'The Storm Before
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: her current age raised to the 0.43 power?") > Entering new AgentExecutor chain... Thought: The first question requires a search, while the second question requires a calculator. Action: ``` { "action": "Search", "action_input": "Leo DiCaprio girlfriend" } ``` Observation: Gigi Hadid: 2022 Leo and Gigi were first linked back in September 2022, when a source told Us Weekly that Leo had his “sights set" on her (alarming way to put it, but okay). Thought:For the second question, I need to calculate the age raised to the 0.43 power. I will use the calculator tool. Action: ``` { "action": "Calculator", "action_input": "((2022-1995)^0.43)" } ``` > Entering new LLMMathChain chain... ((2022-1995)^0.43) ```text (2022-1995)**0.43 ``` ...numexpr.evaluate("(2022-1995)**0.43")... Answer: 4.125593352125936 > Finished chain. Observation: Answer: 4.125593352125936 Thought:I now know the final answer. Final Answer: Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13. > Finished chain. "Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13."mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?") > Entering new AgentExecutor chain... Question: What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database? Thought: I should use the Search tool to find the answer to the first part of the question and then use the FooBar DB tool to find the answer to the second part. Action: ``` { "action": "Search", "action_input": "Who recently released an album called 'The Storm Before
2,388
released an album called 'The Storm Before the Calm'" } ``` Observation: Alanis Morissette Thought:Now that I know the artist's name, I can use the FooBar DB tool to find out if they are in the database and what albums of theirs are in it. Action: ``` { "action": "FooBar DB", "action_input": "What albums does Alanis Morissette have in the database?" } ``` > Entering new SQLDatabaseChain chain... What albums does Alanis Morissette have in the database? SQLQuery: /Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) SELECT "Title" FROM "Album" WHERE "ArtistId" IN (SELECT "ArtistId" FROM "Artist" WHERE "Name" = 'Alanis Morissette') LIMIT 5; SQLResult: [('Jagged Little Pill',)] Answer: Alanis Morissette has the album Jagged Little Pill in the database. > Finished chain. Observation: Alanis Morissette has the album Jagged Little Pill in the database. Thought:The artist Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it. Final Answer: Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it. > Finished chain. 'Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.'PreviousTimeouts for agentsNextShared memory across agents and toolsWith a chat modelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This walkthrough demonstrates how to replicate the MRKL system using agents.
This walkthrough demonstrates how to replicate the MRKL system using agents. ->: released an album called 'The Storm Before the Calm'" } ``` Observation: Alanis Morissette Thought:Now that I know the artist's name, I can use the FooBar DB tool to find out if they are in the database and what albums of theirs are in it. Action: ``` { "action": "FooBar DB", "action_input": "What albums does Alanis Morissette have in the database?" } ``` > Entering new SQLDatabaseChain chain... What albums does Alanis Morissette have in the database? SQLQuery: /Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) SELECT "Title" FROM "Album" WHERE "ArtistId" IN (SELECT "ArtistId" FROM "Artist" WHERE "Name" = 'Alanis Morissette') LIMIT 5; SQLResult: [('Jagged Little Pill',)] Answer: Alanis Morissette has the album Jagged Little Pill in the database. > Finished chain. Observation: Alanis Morissette has the album Jagged Little Pill in the database. Thought:The artist Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it. Final Answer: Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it. > Finished chain. 'Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.'PreviousTimeouts for agentsNextShared memory across agents and toolsWith a chat modelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
2,389
Handle parsing errors | 🦜️🔗 Langchain
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how. ->: Handle parsing errors | 🦜️🔗 Langchain
2,390
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toHandle parsing errorsOn this pageHandle parsing errorsOccasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.Setup​from langchain.llms import OpenAIfrom langchain.chains import LLMMathChainfrom langchain.utilities import SerpAPIWrapperfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypefrom langchain.chat_models import ChatOpenAIfrom langchain.agents.types import AGENT_TO_CLASSsearch = SerpAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events. You should ask targeted questions", ),]Error​In this scenario, the agent will error (because it fails to output an Action string)mrkl =
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handleparsingerrors! Let's explore how. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toHandle parsing errorsOn this pageHandle parsing errorsOccasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.Setup​from langchain.llms import OpenAIfrom langchain.chains import LLMMathChainfrom langchain.utilities import SerpAPIWrapperfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypefrom langchain.chat_models import ChatOpenAIfrom langchain.agents.types import AGENT_TO_CLASSsearch = SerpAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events. You should ask targeted questions", ),]Error​In this scenario, the agent will error (because it fails to output an Action string)mrkl =
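In this scenario the OutputParserException propagates out of mrkl.run (the full traceback follows in the next rows). If you want to keep the default raising behaviour but still fail gracefully at the call site, one option, not part of the original page, is to catch the exception yourself:

```python
from langchain.schema import OutputParserException

try:
    result = mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action")
except OutputParserException as e:
    # The exception message contains the raw LLM output that failed to parse.
    print(f"Could not parse the agent's output: {e}")
    result = None
```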
2,391
it fails to output an Action string)mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... --------------------------------------------------------------------------- IndexError Traceback (most recent call last) File ~/workplace/langchain/langchain/agents/chat/output_parser.py:21, in ChatOutputParser.parse(self, text) 20 try: ---> 21 action = text.split("```")[1] 22 response = json.loads(action.strip()) IndexError: list index out of range During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[4], line 1 ----> 1 mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") File ~/workplace/langchain/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs) 234 if len(args) != 1: 235 raise ValueError("`run` supports only one positional argument.") --> 236 return self(args[0], callbacks=callbacks)[self.output_keys[0]] 238 if kwargs and not args: 239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]] File ~/workplace/langchain/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File ~/workplace/langchain/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handleparsingerrors! Let's explore how. ->: it fails to output an Action string)mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... --------------------------------------------------------------------------- IndexError Traceback (most recent call last) File ~/workplace/langchain/langchain/agents/chat/output_parser.py:21, in ChatOutputParser.parse(self, text) 20 try: ---> 21 action = text.split("```")[1] 22 response = json.loads(action.strip()) IndexError: list index out of range During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[4], line 1 ----> 1 mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") File ~/workplace/langchain/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs) 234 if len(args) != 1: 235 raise ValueError("`run` supports only one positional argument.") --> 236 return self(args[0], callbacks=callbacks)[self.output_keys[0]] 238 if kwargs and not args: 239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]] File ~/workplace/langchain/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File ~/workplace/langchain/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130
2,392
{"name": self.__class__.__name__}, 130 inputs, 131 ) 132 try: 133 outputs = ( --> 134 self._call(inputs, run_manager=run_manager) 135 if new_arg_supported 136 else self._call(inputs) 137 ) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) File ~/workplace/langchain/langchain/agents/agent.py:947, in AgentExecutor._call(self, inputs, run_manager) 945 # We now enter the agent loop (until it returns something). 946 while self._should_continue(iterations, time_elapsed): --> 947 next_step_output = self._take_next_step( 948 name_to_tool_map, 949 color_mapping, 950 inputs, 951 intermediate_steps, 952 run_manager=run_manager, 953 ) 954 if isinstance(next_step_output, AgentFinish): 955 return self._return( 956 next_step_output, intermediate_steps, run_manager=run_manager 957 ) File ~/workplace/langchain/langchain/agents/agent.py:773, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 771 raise_error = False 772 if raise_error: --> 773 raise e 774 text = str(e) 775 if isinstance(self.handle_parsing_errors, bool): File ~/workplace/langchain/langchain/agents/agent.py:762, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 756 """Take a single step in the thought-action-observation loop. 757 758 Override this to take control of how the agent makes and acts on choices. 759 """ 760 try: 761 # Call the LLM to see what to do. --> 762 output = self.agent.plan( 763 intermediate_steps, 764 callbacks=run_manager.get_child() if
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handleparsingerrors! Let's explore how. ->: {"name": self.__class__.__name__}, 130 inputs, 131 ) 132 try: 133 outputs = ( --> 134 self._call(inputs, run_manager=run_manager) 135 if new_arg_supported 136 else self._call(inputs) 137 ) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) File ~/workplace/langchain/langchain/agents/agent.py:947, in AgentExecutor._call(self, inputs, run_manager) 945 # We now enter the agent loop (until it returns something). 946 while self._should_continue(iterations, time_elapsed): --> 947 next_step_output = self._take_next_step( 948 name_to_tool_map, 949 color_mapping, 950 inputs, 951 intermediate_steps, 952 run_manager=run_manager, 953 ) 954 if isinstance(next_step_output, AgentFinish): 955 return self._return( 956 next_step_output, intermediate_steps, run_manager=run_manager 957 ) File ~/workplace/langchain/langchain/agents/agent.py:773, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 771 raise_error = False 772 if raise_error: --> 773 raise e 774 text = str(e) 775 if isinstance(self.handle_parsing_errors, bool): File ~/workplace/langchain/langchain/agents/agent.py:762, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 756 """Take a single step in the thought-action-observation loop. 757 758 Override this to take control of how the agent makes and acts on choices. 759 """ 760 try: 761 # Call the LLM to see what to do. --> 762 output = self.agent.plan( 763 intermediate_steps, 764 callbacks=run_manager.get_child() if
2,393
764 callbacks=run_manager.get_child() if run_manager else None, 765 **inputs, 766 ) 767 except OutputParserException as e: 768 if isinstance(self.handle_parsing_errors, bool): File ~/workplace/langchain/langchain/agents/agent.py:444, in Agent.plan(self, intermediate_steps, callbacks, **kwargs) 442 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs) 443 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs) --> 444 return self.output_parser.parse(full_output) File ~/workplace/langchain/langchain/agents/chat/output_parser.py:26, in ChatOutputParser.parse(self, text) 23 return AgentAction(response["action"], response["action_input"], text) 25 except Exception: ---> 26 raise OutputParserException(f"Could not parse LLM output: {text}") OutputParserException: Could not parse LLM output: I'm sorry, but I cannot provide an answer without an Action. Please provide a valid Action in the format specified above.Default error handling​Handle errors with Invalid or incomplete response:mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True,)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... Observation: Invalid or incomplete response Thought: Observation: Invalid or incomplete response Thought:Search for Leo DiCaprio's current girlfriend Action: ``` { "action": "Search", "action_input": "Leo DiCaprio current girlfriend" } ``` Observation: Just Jared on Instagram: “Leonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date! Thought:Camila Morrone is currently Leo DiCaprio's girlfriend Final Answer: Camila Morrone > Finished chain. 'Camila Morrone'Custom error message​You can easily customize
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handleparsingerrors! Let's explore how. ->: 764 callbacks=run_manager.get_child() if run_manager else None, 765 **inputs, 766 ) 767 except OutputParserException as e: 768 if isinstance(self.handle_parsing_errors, bool): File ~/workplace/langchain/langchain/agents/agent.py:444, in Agent.plan(self, intermediate_steps, callbacks, **kwargs) 442 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs) 443 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs) --> 444 return self.output_parser.parse(full_output) File ~/workplace/langchain/langchain/agents/chat/output_parser.py:26, in ChatOutputParser.parse(self, text) 23 return AgentAction(response["action"], response["action_input"], text) 25 except Exception: ---> 26 raise OutputParserException(f"Could not parse LLM output: {text}") OutputParserException: Could not parse LLM output: I'm sorry, but I cannot provide an answer without an Action. Please provide a valid Action in the format specified above.Default error handling​Handle errors with Invalid or incomplete response:mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True,)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... Observation: Invalid or incomplete response Thought: Observation: Invalid or incomplete response Thought:Search for Leo DiCaprio's current girlfriend Action: ``` { "action": "Search", "action_input": "Leo DiCaprio current girlfriend" } ``` Observation: Just Jared on Instagram: “Leonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date! Thought:Camila Morrone is currently Leo DiCaprio's girlfriend Final Answer: Camila Morrone > Finished chain. 'Camila Morrone'Custom error message​You can easily customize
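With handle_parsing_errors=True, a malformed step is fed back to the model as an "Invalid or incomplete response" observation and the loop continues, as the run above shows. A model that keeps producing unparseable output would keep retrying, so it can be worth pairing this with an explicit iteration cap; max_iterations is a standard initialize_agent keyword, though the value below is only an illustrative choice (this reuses the imports and tools from the Setup cell):

```python
mrkl = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=5,  # give up after a handful of failed parse/retry cycles
)
```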
2,394
error message​You can easily customize the message to use when there are parsing errors.mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors="Check your output and make sure it conforms!",)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... Observation: Could not parse LLM output: I'm sorry, but I canno Thought:I need to use the Search tool to find the answer to the question. Action: ``` { "action": "Search", "action_input": "Who is Leo DiCaprio's girlfriend?" } ``` Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week. Thought:The answer to the question is that Leo DiCaprio's current girlfriend is Gigi Hadid. Final Answer: Gigi Hadid. > Finished chain. 'Gigi Hadid.'Custom Error Function​You can also customize the error to be a function that takes the error in and outputs a string.def _handle_error(error) -> str: return str(error)[:50]mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=_handle_error,)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... Observation: Could not parse LLM output: I'm sorry, but I canno Thought:I need to use the Search tool to find the answer to the question. Action: ``` { "action": "Search", "action_input": "Who is Leo DiCaprio's girlfriend?" } ``` Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handleparsingerrors! Let's explore how. ->: error message​You can easily customize the message to use when there are parsing errors.mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors="Check your output and make sure it conforms!",)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... Observation: Could not parse LLM output: I'm sorry, but I canno Thought:I need to use the Search tool to find the answer to the question. Action: ``` { "action": "Search", "action_input": "Who is Leo DiCaprio's girlfriend?" } ``` Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week. Thought:The answer to the question is that Leo DiCaprio's current girlfriend is Gigi Hadid. Final Answer: Gigi Hadid. > Finished chain. 'Gigi Hadid.'Custom Error Function​You can also customize the error to be a function that takes the error in and outputs a string.def _handle_error(error) -> str: return str(error)[:50]mrkl = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=_handle_error,)mrkl.run("Who is Leo DiCaprio's girlfriend? No need to add Action") > Entering new AgentExecutor chain... Observation: Could not parse LLM output: I'm sorry, but I canno Thought:I need to use the Search tool to find the answer to the question. Action: ``` { "action": "Search", "action_input": "Who is Leo DiCaprio's girlfriend?" } ``` Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of
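Because the callable form receives the OutputParserException itself, a handler can do more than truncate the message: it can quote the failing output back to the model along with corrective instructions. A sketch of such a handler; the wording and the 200-character cut-off are assumptions, not taken from the page:

```python
def verbose_parse_error_handler(error) -> str:
    # Quote a shortened version of the failure back to the model,
    # together with a reminder of the required format.
    snippet = str(error)[:200]
    return (
        f"Your previous reply could not be parsed: {snippet}\n"
        "Reply again with a single Action block in the required JSON format."
    )

mrkl = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=verbose_parse_error_handler,
)
```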
2,395
girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week. Thought:The current girlfriend of Leonardo DiCaprio is Gigi Hadid. Final Answer: Gigi Hadid. > Finished chain. 'Gigi Hadid.'PreviousCustom multi-action agentNextAccess intermediate stepsSetupErrorDefault error handlingCustom error messageCustom Error FunctionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handle_parsing_errors! Let's explore how.
Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with handleparsingerrors! Let's explore how. ->: girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week. Thought:The current girlfriend of Leonardo DiCaprio is Gigi Hadid. Final Answer: Gigi Hadid. > Finished chain. 'Gigi Hadid.'PreviousCustom multi-action agentNextAccess intermediate stepsSetupErrorDefault error handlingCustom error messageCustom Error FunctionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
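Taken together, the page demonstrates three accepted forms for handle_parsing_errors: a bool, a fixed string, and a callable. A compact recap of the options, assuming the same tools list defined in the Setup cell:

```python
from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# 1. Built-in behaviour: feed "Invalid or incomplete response" back to the model.
agent = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         handle_parsing_errors=True)

# 2. Fixed string: always send this message back on a parse failure.
agent = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         handle_parsing_errors="Check your output and make sure it conforms!")

# 3. Callable: compute the message from the OutputParserException.
agent = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         handle_parsing_errors=lambda e: str(e)[:50])
```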
2,396
Custom agent with tool retrieval | 🦜️🔗 Langchain
This notebook builds off of this notebook and assumes familiarity with how agents work.
This notebook builds off of this notebook and assumes familiarity with how agents work. ->: Custom agent with tool retrieval | 🦜️🔗 Langchain
2,397
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toCustom agent with tool retrievalOn this pageCustom agent with tool retrievalThis notebook builds off of this notebook and assumes familiarity with how agents work.The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.In this notebook we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves tool relevant to the query.Set up environment​Do necessary imports, etc.from langchain.agents import ( Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser,)from langchain.prompts import StringPromptTemplatefrom langchain.llms import OpenAIfrom
This notebook builds off of this notebook and assumes familiarity with how agents work.
This notebook builds off of this notebook and assumes familiarity with how agents work. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toAdd Memory to OpenAI Functions AgentRunning Agent as an IteratorReturning Structured OutputCombine agents and vector storesAsync APICreate ChatGPT cloneCustom functions with OpenAI Functions AgentCustom agentCustom agent with tool retrievalCustom LLM agentCustom LLM Agent (with a ChatModel)Custom MRKL agentCustom multi-action agentHandle parsing errorsAccess intermediate stepsCap the max number of iterationsTimeouts for agentsReplicating MRKLShared memory across agents and toolsStreaming final agent outputUse ToolKits with OpenAI FunctionsToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsHow-toCustom agent with tool retrievalOn this pageCustom agent with tool retrievalThis notebook builds off of this notebook and assumes familiarity with how agents work.The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.In this notebook we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves tool relevant to the query.Set up environment​Do necessary imports, etc.from langchain.agents import ( Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser,)from langchain.prompts import StringPromptTemplatefrom langchain.llms import OpenAIfrom
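Because the goal is to hand the prompt only a small number of relevant tools, it can help to bound how many documents the retriever returns. The sketch below assumes the vector_store and ALL_TOOLS objects constructed in the next cells; the k value of 4 is an arbitrary choice:

```python
# Return at most 4 tool descriptions per query instead of the retriever default.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})

def get_tools(query: str):
    docs = retriever.get_relevant_documents(query)
    return [ALL_TOOLS[d.metadata["index"]] for d in docs]
```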
2,398
langchain.llms import OpenAIfrom langchain.utilities import SerpAPIWrapperfrom langchain.chains import LLMChainfrom typing import List, Unionfrom langchain.schema import AgentAction, AgentFinishimport reSet up tools​We will create one legitimate tool (search) and then 99 fake tools.# Define which tools the agent can use to answer user queriessearch = SerpAPIWrapper()search_tool = Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events",)def fake_func(inp: str) -> str: return "foo"fake_tools = [ Tool( name=f"foo-{i}", func=fake_func, description=f"a silly function that you can use to get more information about the number {i}", ) for i in range(99)]ALL_TOOLS = [search_tool] + fake_toolsTool Retriever​We will use a vector store to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools.from langchain.vectorstores import FAISSfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.schema import Documentdocs = [ Document(page_content=t.description, metadata={"index": i}) for i, t in enumerate(ALL_TOOLS)]vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())retriever = vector_store.as_retriever()def get_tools(query): docs = retriever.get_relevant_documents(query) return [ALL_TOOLS[d.metadata["index"]] for d in docs]We can now test this retriever to see if it seems to work.get_tools("whats the weather?") [Tool(name='Search', description='useful for when you need to answer questions about current events', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<bound method SerpAPIWrapper.run of SerpAPIWrapper(search_engine=<class 'serpapi.google_search.GoogleSearch'>, params={'engine': 'google', 'google_domain': 'google.com', 'gl': 'us', 'hl': 'en'},
This notebook builds off of this notebook and assumes familiarity with how agents work.
This notebook builds off of this notebook and assumes familiarity with how agents work. ->: langchain.llms import OpenAIfrom langchain.utilities import SerpAPIWrapperfrom langchain.chains import LLMChainfrom typing import List, Unionfrom langchain.schema import AgentAction, AgentFinishimport reSet up tools​We will create one legitimate tool (search) and then 99 fake tools.# Define which tools the agent can use to answer user queriessearch = SerpAPIWrapper()search_tool = Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events",)def fake_func(inp: str) -> str: return "foo"fake_tools = [ Tool( name=f"foo-{i}", func=fake_func, description=f"a silly function that you can use to get more information about the number {i}", ) for i in range(99)]ALL_TOOLS = [search_tool] + fake_toolsTool Retriever​We will use a vector store to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools.from langchain.vectorstores import FAISSfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.schema import Documentdocs = [ Document(page_content=t.description, metadata={"index": i}) for i, t in enumerate(ALL_TOOLS)]vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())retriever = vector_store.as_retriever()def get_tools(query): docs = retriever.get_relevant_documents(query) return [ALL_TOOLS[d.metadata["index"]] for d in docs]We can now test this retriever to see if it seems to work.get_tools("whats the weather?") [Tool(name='Search', description='useful for when you need to answer questions about current events', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<bound method SerpAPIWrapper.run of SerpAPIWrapper(search_engine=<class 'serpapi.google_search.GoogleSearch'>, params={'engine': 'google', 'google_domain': 'google.com', 'gl': 'us', 'hl': 'en'},
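The retrieved tools then have to be injected into the agent's prompt at query time. One way to do that, sketched under the assumption that the prompt is a StringPromptTemplate subclass holding a tools_getter callable (the class and attribute names here are illustrative):

```python
from typing import Callable, List
from langchain.agents import Tool
from langchain.prompts import StringPromptTemplate

class ToolRetrievingPromptTemplate(StringPromptTemplate):
    template: str
    tools_getter: Callable[[str], List[Tool]]

    def format(self, **kwargs) -> str:
        # Pick the tools relevant to this particular input.
        tools = self.tools_getter(kwargs["input"])
        kwargs["tools"] = "\n".join(
            f"{tool.name}: {tool.description}" for tool in tools
        )
        kwargs["tool_names"] = ", ".join(tool.name for tool in tools)
        # Render prior (action, observation) pairs into the scratchpad slot.
        intermediate_steps = kwargs.pop("intermediate_steps", [])
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log + f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        return self.template.format(**kwargs)
```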
2,399
'google.com', 'gl': 'us', 'hl': 'en'}, serpapi_api_key='', aiosession=None)>, coroutine=None), Tool(name='foo-95', description='a silly function that you can use to get more information about the number 95', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-15', description='a silly function that you can use to get more information about the number 15', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)]get_tools("whats the number 13?") [Tool(name='foo-13', description='a silly function that you can use to get more information about the number 13', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-14', description='a silly function that you can use to get more information about the number 14', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-11', description='a silly function
This notebook builds off of this notebook and assumes familiarity with how agents work.
This notebook builds off of this notebook and assumes familiarity with how agents work. ->: 'google.com', 'gl': 'us', 'hl': 'en'}, serpapi_api_key='', aiosession=None)>, coroutine=None), Tool(name='foo-95', description='a silly function that you can use to get more information about the number 95', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-15', description='a silly function that you can use to get more information about the number 15', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)]get_tools("whats the number 13?") [Tool(name='foo-13', description='a silly function that you can use to get more information about the number 13', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-14', description='a silly function that you can use to get more information about the number 14', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None), Tool(name='foo-11', description='a silly function