# Vectara
Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation (aka retrieval-augmented generation, or RAG) applications.

In this notebook, we'll demo the `SelfQueryRetriever` wrapped around a Vectara vector store.

## Setup

You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps (see our quickstart guide):

1. Sign up for a Vectara account if you don't already have one. Once you have completed your sign-up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, at the top-right of the Vectara console window.
2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data ingested from input documents. To create a corpus, use the "Create Corpus" button. You then provide a name for your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right at the top.
3. Next you'll need to create API keys to access the corpus. Click on the "Authorization" tab in the corpus view and then the "Create API Key" button. Give your key a name, and choose whether you want query-only or query+index for your key. Click "Create" and you now have an active API key. Keep this key confidential.

To use LangChain with Vectara, you'll need these three values: customer ID, corpus ID, and API key.
You can provide those to LangChain in two ways:

1. Include these three variables in your environment: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`. For example, you can set these variables using `os.environ` and `getpass` as follows:

```python
import os
import getpass

os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
```

2. Provide them as arguments when creating the Vectara vectorstore object:

```python
vectorstore = Vectara(
    vectara_customer_id=vectara_customer_id,
    vectara_corpus_id=vectara_corpus_id,
    vectara_api_key=vectara_api_key,
)
```

Note: the self-query retriever requires you to have `lark` installed (`pip install lark`).

## Connecting to Vectara from LangChain

In this example, we assume that you've created an account and a corpus, and added your `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY` (created with permissions for both indexing and query) as environment variables.

The corpus has 4 fields defined as metadata for filtering: `year`, `director`, `rating`, and `genre`.

```python
from langchain.embeddings import FakeEmbeddings
from langchain.schema import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Vectara
from langchain.document_loaders import TextLoader
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "rating": 9.9,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
        },
    ),
]

vectara = Vectara()
for doc in docs:
    vectara.add_texts(
        [doc.page_content],
        embedding=FakeEmbeddings(size=768),
        doc_metadata=doc.metadata,
    )
```
## Creating our self-querying retriever

Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support, and a short description of the document contents.

```python
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectara, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out

And now we can try actually using our retriever!

```python
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```

```
/Users/ofer/dev/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(

query='dinosaur' filter=None limit=None

[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),
 Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'}),
 Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'lang': 'eng', 'offset': '0', 'len': '76', 'year': '2010', 'director': 'Christopher Nolan', 'rating': '8.2', 'source': 'langchain'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'})]
```

```python
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None

[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'})]
```

```python
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```

```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None

[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'})]
```

```python
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
```

```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None

[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
```

```python
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None

[Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]
```

## Filter k

We can also use the self-query retriever to specify `k`, the number of documents to fetch, by passing `enable_limit=True` to the constructor.

```python
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectara,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)

# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```

```
query='dinosaur' filter=None limit=2

[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]
```
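The import block at the top of this page also pulls in `ConversationalRetrievalChain`, which the notebook never exercises. As a minimal sketch of where the self-query retriever could slot in (the chain call and question text are illustrative, and an `OPENAI_API_KEY` is assumed to be set), it plugs into a chain like any other retriever:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

# Hypothetical follow-on: use the self-query retriever inside a conversational RAG chain.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), retriever=retriever)
result = qa(
    {"question": "Which movies about dreams are rated above 8?", "chat_history": []}
)
print(result["answer"])
```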
# DocArray
DocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better: you can utilize your DocArray document index to create a DocArrayRetriever and build awesome LangChain apps!

This notebook is split into two sections. The first section offers an introduction to all five supported document index backends. It provides guidance on setting up and indexing each backend, and also instructs you on how to build a DocArrayRetriever for finding relevant documents.
In the second section, we'll select one of these backends and illustrate how to use it through a basic example.

## Document Index Backends

```python
import random

import numpy as np
from docarray import BaseDoc
from docarray.typing import NdArray
from langchain.embeddings import FakeEmbeddings
from langchain.retrievers import DocArrayRetriever

embeddings = FakeEmbeddings(size=32)
```

Before you start building the index, it's important to define your document schema. This determines what fields your documents will have and what type of data each field will hold.

For this demonstration, we'll create a somewhat random schema containing `title` (str), `title_embedding` (numpy array), `year` (int), and `color` (str):

```python
class MyDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32]
    year: int
    color: str
```

### InMemoryExactNNIndex

`InMemoryExactNNIndex` stores all documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.

Learn more here: https://docs.docarray.org/user_guide/storing/index_in_memory/

```python
from docarray.index import InMemoryExactNNIndex

# initialize the index
db = InMemoryExactNNIndex[MyDoc]()
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

```
[Document(page_content='My document 56', metadata={'id': '1f33e58b6468ab722f3786b96b20afe6', 'year': 56, 'color': 'red'})]
```
### HnswDocumentIndex

`HnswDocumentIndex` is a lightweight document index implementation that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.

Learn more here: https://docs.docarray.org/user_guide/storing/index_hnswlib/

```python
from docarray.index import HnswDocumentIndex

# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="hnsw_index")
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

```
[Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})]
```

### WeaviateDocumentIndex

`WeaviateDocumentIndex` is a document index that is built upon the Weaviate vector database.

Learn more here: https://docs.docarray.org/user_guide/storing/index_weaviate/

```python
# There's a small difference with the Weaviate backend compared to the others.
# Here, you need to 'mark' the field used for vector search with 'is_embedding=True'.
# So, let's create a new schema for Weaviate that takes care of this requirement.
from pydantic import Field


class WeaviateDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32] = Field(is_embedding=True)
    year: int
    color: str
```
```python
from docarray.index import WeaviateDocumentIndex

# initialize the index
dbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080")
db = WeaviateDocumentIndex[WeaviateDoc](db_config=dbconfig)
# index data (WeaviateDoc here, since that's the schema this index was created with)
db.index(
    [
        WeaviateDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"path": ["year"], "operator": "LessThanEqual", "valueInt": "90"}

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

```
[Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})]
```

### ElasticDocIndex

`ElasticDocIndex` is a document index that is built upon Elasticsearch.

Learn more here.

```python
from docarray.index import ElasticDocIndex

# initialize the index
db = ElasticDocIndex[MyDoc](
    hosts="http://localhost:9200", index_name="docarray_retriever"
)
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"range": {"year": {"lte": 90}}}

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

```
[Document(page_content='My document 46', metadata={'id': 'edbc721bac1c2ad323414ad1301528a4', 'year': 46, 'color': 'green'})]
```

### QdrantDocumentIndex

`QdrantDocumentIndex` is a document index that is built upon the Qdrant vector database.

Learn more here.
```python
from docarray.index import QdrantDocumentIndex
from qdrant_client.http import models as rest

# initialize the index
qdrant_config = QdrantDocumentIndex.DBConfig(path=":memory:")
db = QdrantDocumentIndex[MyDoc](qdrant_config)
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = rest.Filter(
    must=[
        rest.FieldCondition(
            key="year",
            range=rest.Range(
                gte=10,
                lt=90,
            ),
        )
    ]
)
```

```
WARNING:root:Payload indexes have no effect in the local Qdrant. Please use server Qdrant if you need payload indexes.
```

```python
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```

```
[Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})]
```

## Movie Retrieval using HnswDocumentIndex

```python
movies = [
    {
        "title": "Inception",
        "description": "A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.",
        "director": "Christopher Nolan",
        "rating": 8.8,
    },
    {
        "title": "The Dark Knight",
        "description": "When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.",
        "director": "Christopher Nolan",
        "rating": 9.0,
    },
    {
        "title": "Interstellar",
        "description": "Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.",
        "director": "Christopher Nolan",
        "rating": 8.6,
    },
    {
        "title": "Pulp Fiction",
        "description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.",
        "director": "Quentin Tarantino",
        "rating": 8.9,
    },
    {
        "title": "Reservoir Dogs",
        "description": "When a simple jewelry heist goes horribly wrong, the surviving criminals begin to suspect that one of them is a police informant.",
        "director": "Quentin Tarantino",
        "rating": 8.3,
    },
    {
        "title": "The Godfather",
        "description": "An aging patriarch of an organized crime dynasty transfers control of his empire to his reluctant son.",
        "director": "Francis Ford Coppola",
        "rating": 9.2,
    },
]
```
```python
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```

```
OpenAI API Key: ········
```

```python
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings


# define schema for your movie documents
class MyDoc(BaseDoc):
    title: str
    description: str
    description_embedding: NdArray[1536]
    rating: float
    director: str


embeddings = OpenAIEmbeddings()

# get "description" embeddings, and create documents
docs = DocList[MyDoc](
    [
        MyDoc(
            description_embedding=embeddings.embed_query(movie["description"]),
            **movie,
        )
        for movie in movies
    ]
)

from docarray.index import HnswDocumentIndex

# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="movie_search")

# add data
db.index(docs)
```

### Normal Retriever

```python
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
)

# find the relevant document
doc = retriever.get_relevant_documents("movie about dreams")
print(doc)
```

```
[Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]
```
### Retriever with Filters

```python
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
    filters={"director": {"$eq": "Christopher Nolan"}},
    top_k=2,
)

# find relevant documents
docs = retriever.get_relevant_documents("space travel")
print(docs)
```

```
[Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.', metadata={'id': 'ab704cc7ae8573dc617f9a5e25df022a', 'title': 'Interstellar', 'rating': 8.6, 'director': 'Christopher Nolan'}),
 Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]
```

### Retriever with MMR search

```python
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
    filters={"rating": {"$gte": 8.7}},
    search_type="mmr",
    top_k=3,
)

# find relevant documents
docs = retriever.get_relevant_documents("action movies")
print(docs)
```
```
[Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': 'Pulp Fiction', 'rating': 8.9, 'director': 'Quentin Tarantino'}),
 Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'}),
 Document(page_content='When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.', metadata={'id': '91dec17d4272041b669fd113333a65f7', 'title': 'The Dark Knight', 'rating': 9.0, 'director': 'Christopher Nolan'})]
```
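The page uses MMR without defining it. For background, a standard formulation (not from this notebook): maximal marginal relevance re-ranks candidates to balance relevance to the query against similarity to documents already selected,

$$
\mathrm{MMR} = \arg\max_{d_i \in R \setminus S} \Big[ \lambda \, \mathrm{sim}(d_i, q) - (1 - \lambda) \max_{d_j \in S} \mathrm{sim}(d_i, d_j) \Big]
$$

where $R$ is the candidate set retrieved for query $q$, $S$ is the set already selected, and $\lambda \in [0, 1]$ trades off relevance against diversity. This is why the results above favor varied films over near-duplicate matches.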
2,217 | ElasticSearch BM25 | 🦜️🔗 Langchain | Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
2,218 | ElasticSearch BM25 Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others. The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval. This notebook shows how to use a retriever that uses ElasticSearch and BM25. For more information on the details of BM25 see this blog post. # !pip install elasticsearch from langchain.retrievers import ElasticSearchBM25Retriever Create New Retriever elasticsearch_url = | Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
2,219 | New Retriever elasticsearch_url = "http://localhost:9200" retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4") # Alternatively, you can load an existing index # import elasticsearch # elasticsearch_url="http://localhost:9200" # retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index") Add texts (if necessary) We can optionally add texts to the retriever (if they aren't already in there) retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"]) ['cbd4cb47-8d9f-4f34-b80e-ea871bc49856', 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365', '8631bfc8-7c12-48ee-ab56-8ad5f373676e', '8be8374c-3253-4d87-928d-d73550a2ecf0', 'd79f457b-2842-4eab-ae10-77aa420b53d7'] Use Retriever We can now use the retriever! result = retriever.get_relevant_documents("foo") result [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})] | Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
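The Okapi BM25 function described above can be written down in a few lines. The following is a minimal, illustrative Python implementation of the classic BM25 score with the usual k1 and b free parameters; it is a sketch of the ranking formula itself, not part of the ElasticSearchBM25Retriever API (Elasticsearch computes BM25 server-side):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with the classic BM25 formula.

    corpus: list of tokenized documents, used for document frequencies
    and the average document length.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)   # Lucene-style smoothed idf
        freq = tf[term]
        denom = freq + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * freq * (k1 + 1) / denom
    return score

corpus = [doc.split() for doc in ["foo bar", "foo", "hello world"]]
print(bm25_score(["foo"], corpus[0], corpus))
```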
2,220 | ChatGPT Plugin | 🦜️🔗 Langchain | OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.
2,221 | ChatGPT Plugin OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions. Plugins can allow ChatGPT to do things like: Retrieve real-time information, e.g., sports scores, stock prices, the latest news, etc. Retrieve knowledge-base information, e.g., company docs, personal notes, etc. Perform actions on behalf of the user, e.g., booking a flight, ordering food, etc. This notebook shows how to use the ChatGPT Retriever Plugin within LangChain. # STEP 1: Load # Load documents using LangChain's DocumentLoaders # This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html from langchain.document_loaders.csv_loader import CSVLoader loader = CSVLoader( file_path="../../document_loaders/examples/example_data/mlb_teams_2012.csv") data = loader.load() # STEP 2: Convert # Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin from typing import List from langchain.docstore.document import Document import json def write_json(path: str, documents: List[Document]) -> None: results = [{"text": doc.page_content} for doc in | OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.
2,222 | results = [{"text": doc.page_content} for doc in documents] with open(path, "w") as f: json.dump(results, f, indent=2) write_json("foo.json", data) # STEP 3: Use # Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json Using the ChatGPT Retriever Plugin Okay, so we've created the ChatGPT Retriever Plugin, but how do we actually use it? The code below walks through how to do that. We want to use ChatGPTPluginRetriever, so we have to get the OpenAI API key. import os import getpass os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········ from langchain.retrievers import ChatGPTPluginRetriever retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo") retriever.get_relevant_documents("alice's phone number") [Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0), Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0), Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)] | OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.
2,224 | Self-querying retriever | 🦜️🔗 Langchain | Learn about how the self-querying retriever works here.
2,225 | Self-querying retriever Learn about how the self-querying retriever works here. 📄 Deep Lake: Deep Lake is a multimodal database for building AI applications. 📄 Chroma: Chroma is a database for building AI applications with embeddings. 📄 DashVector: DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. 📄 Elasticsearch: Elasticsearch is a distributed, RESTful search and analytics engine. 📄 Milvus: Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models. 📄 MyScale: MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. 📄 OpenSearch: OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on | Learn about how the self-querying retriever works here.
2,226 | distributed search and analytics engine based on Apache Lucene. 📄 Pinecone: Pinecone is a vector database with broad functionality. 📄 Qdrant: Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. 📄 Redis: Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. 📄 Supabase: Supabase is an open-source Firebase alternative. 📄 Timescale Vector (Postgres) self-querying: Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. 📄 Vectara: Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation. 📄 Weaviate: Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from | Learn about how the self-querying retriever works here.
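Beyond the backend list, the mechanics are worth a sketch: a self-querying retriever uses an LLM to turn a natural-language question into a structured query, a metadata filter plus a search string, and runs it against the vector store. A minimal sketch using Chroma follows; the movie documents and metadata fields are illustrative assumptions, not taken from any one integration page:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Chroma

# Illustrative documents with filterable metadata.
docs = [
    Document(page_content="A thief steals secrets through dream-sharing technology.",
             metadata={"year": 2010, "rating": 8.8}),
    Document(page_content="Astronauts travel through a wormhole to save humanity.",
             metadata={"year": 2014, "rating": 8.6}),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

# Describe the metadata so the LLM knows what it may filter on.
metadata_field_info = [
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="rating", description="A 1-10 rating for the movie", type="float"),
]

retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",  # description of the document contents
    metadata_field_info,
)

# The LLM turns this into a semantic query plus the structured filter rating > 8.7.
print(retriever.get_relevant_documents("a movie rated higher than 8.7"))
```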
2,227 | Google Cloud Enterprise Search | 🦜️🔗 Langchain | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,228 | Google Cloud Enterprise Search Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud. Gen AI App Builder lets developers, even those with limited machine learning skills, quickly and easily tap into the power of Google's foundation models, search expertise, and conversational AI technologies to create enterprise-grade generative AI applications. Enterprise Search lets organizations quickly build generative AI powered search engines for customers and employees. Enterprise Search is underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user's query input. Enterprise Search also benefits from Google's expertise in understanding how users search and factors in content relevance to order displayed results. Google Cloud offers Enterprise Search via Gen App Builder in Google Cloud Console and via an API for enterprise workflow integration. This notebook demonstrates how to configure Enterprise Search and use the Enterprise Search | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,229 | Enterprise Search and use the Enterprise Search retriever. The Enterprise Search retriever encapsulates the Generative AI App Builder Python client library and uses it to access the Enterprise Search Search Service API. Install pre-requisites: You need to install the google-cloud-discoveryengine package to use the Enterprise Search retriever. pip install google-cloud-discoveryengine Configure access to Google Cloud and Google Cloud Enterprise Search: Enterprise Search is generally available on an allowlist basis (which means customers need to be approved for access) as of June 6, 2023. Contact your Google Cloud sales team for access and pricing details. We are previewing additional features that are coming soon to the generally available offering as part of our Trusted Tester program. Sign up for Trusted Tester and contact your Google Cloud sales team for an expedited trial. Before you can run this notebook you need to: Set or create a Google Cloud project and turn on Gen App Builder; Create and populate an unstructured data store; Set credentials to access the Enterprise Search API. Set or create a Google Cloud project and turn on Gen App Builder: Follow the instructions in the Enterprise Search Getting Started guide to set/create a GCP project and enable Gen App Builder. Create and populate an unstructured data store: Use Google Cloud Console to create an unstructured data store and populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder. Make sure to use the Cloud Storage (without metadata) option. Set credentials to access the Enterprise Search API: The Gen App Builder client libraries used by the Enterprise Search retriever provide high-level language support for authenticating to Gen App Builder programmatically. Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,230 | use those credentials to authenticate requests to the API. With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code. If running in Google Colab, authenticate with google.colab.google.auth; otherwise follow one of the supported methods to make sure that your Application Default Credentials are properly set. import sys if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() Configure and use the Enterprise Search retriever: The Enterprise Search retriever is implemented in the langchain.retrievers.GoogleCloudEnterpriseSearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with the document content. | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,231 | Depending on the data type used in Enterprise Search (structured or unstructured), the page_content field is populated as follows: Structured data source: either an extractive segment or an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted. Unstructured data source: a JSON string containing all the fields returned from the structured data source. The metadata field is populated with metadata (if any) of the document. Only for unstructured data sources: An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search. An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search. For more information about extractive segments and extractive answers, refer to the product documentation. When creating an instance of the retriever you can specify a number of parameters that control which Enterprise data store to access and how a natural language query is processed, including configurations for extractive answers and segments. The mandatory parameters are: project_id - Your Google Cloud PROJECT_ID; search_engine_id - The ID of the data store you want to use. The project_id and search_engine_id parameters can be provided explicitly in the retriever's constructor or through the environment variables PROJECT_ID and SEARCH_ENGINE_ID. You can also configure a | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,232 | and SEARCH_ENGINE_ID. You can also configure a number of optional parameters, including: max_documents - The maximum number of documents used to provide extractive segments or extractive answers; get_extractive_answers - By default, the retriever is configured to return extractive segments. Set this field to True to return extractive answers. This is used only when engine_data_type is set to 0 (unstructured); max_extractive_answer_count - The maximum number of extractive answers returned in each search result. | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,233 | At most 5 answers will be returned. This is used only when engine_data_type is set to 0 (unstructured).
max_extractive_segment_count - The maximum number of extractive segments returned in each search result. Currently one segment will be returned. This is used only when engine_data_type is set to 0 (unstructured).
filter - The filter expression that allows you to filter the search results based on the metadata associated with the documents in the searched data store.
query_expansion_condition - Specification to determine under which conditions query expansion should occur:
0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled.
1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero.
2 - Automatic query expansion built by the Search API.
engine_data_type - Defines the enterprise search data type:
0 - Unstructured data | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
2,234 | 1 - Structured data. Configure and use the retriever for unstructured data with extractive segments: from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever PROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project ID SEARCH_ENGINE_ID = "<YOUR SEARCH ENGINE ID>" # Set to your data store ID retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3,) query = "What are Alphabet's Other Bets?" result = retriever.get_relevant_documents(query) for doc in result: print(doc) Configure and use the retriever for unstructured data with extractive answers: retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, max_extractive_answer_count=3, get_extractive_answers=True,) query = "What are Alphabet's Other Bets?" result = retriever.get_relevant_documents(query) for doc in result: print(doc) Configure and use the retriever for structured data with extractive answers: retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, engine_data_type=1) result = retriever.get_relevant_documents(query) for doc in result: print(doc) | Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.
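The optional parameters listed above slot into the same constructor. A hedged illustration follows; the filter expression shown is an assumed example, not taken from the page (consult the Enterprise Search documentation for the exact filter grammar for your data store's metadata):

```python
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever

# PROJECT_ID / SEARCH_ENGINE_ID remain placeholders, as in the examples above.
retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id="<YOUR PROJECT ID>",
    search_engine_id="<YOUR SEARCH ENGINE ID>",
    max_documents=3,
    max_extractive_segment_count=1,           # only one segment is currently returned
    filter='category: ANY("annual_report")',  # assumed metadata filter expression
    query_expansion_condition=2,              # 2 = automatic query expansion
)
docs = retriever.get_relevant_documents("What are Alphabet's Other Bets?")
for doc in docs:
    print(doc.metadata, doc.page_content[:120])
```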
2,235 | Amazon Kendra | 🦜️🔗 Langchain Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making. With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results. Using the Amazon Kendra Index Retriever: %pip install boto3 import boto3 from langchain.retrievers import AmazonKendraRetriever Create New Retriever retriever = AmazonKendraRetriever(index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03") Now you can use retrieved documents from the Kendra index retriever.get_relevant_documents("what is langchain") | Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
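A retriever on its own only fetches documents; a common next step is to wire it into a question-answering chain. A minimal sketch follows; the index_id is a placeholder, and the RetrievalQA wiring is standard LangChain rather than anything Kendra-specific:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.retrievers import AmazonKendraRetriever

retriever = AmazonKendraRetriever(index_id="<YOUR KENDRA INDEX ID>")

# "stuff" simply concatenates the retrieved passages into the LLM prompt.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
)
print(qa.run("What is LangChain?"))
```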
2,236 | kNN | 🦜️🔗 Langchain kNN In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. This notebook goes over how to use a retriever that under the hood uses a kNN. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html from langchain.retrievers import KNNRetriever from langchain.embeddings import OpenAIEmbeddings Create New Retriever with Texts retriever = KNNRetriever.from_texts( ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()) Use Retriever We can now use the retriever! result = retriever.get_relevant_documents("foo") result [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='bar', metadata={})] | In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.
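Under the hood, retrieval-style kNN reduces to cosine similarity between the query embedding and each document embedding, followed by a top-k sort. A toy NumPy sketch of that idea follows; it is an illustration of the technique, not the KNNRetriever source:

```python
import numpy as np

def knn_retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k most cosine-similar rows of doc_vecs."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                   # cosine similarity per document
    return np.argsort(-sims)[:k]   # indices of the top-k documents

docs = ["foo", "bar", "foo bar"]
doc_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # toy embeddings
print([docs[i] for i in knn_retrieve(np.array([1.0, 0.1]), doc_vecs)])
```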
2,237 | Metal | ü¶úÔ∏èüîó Langchain | Metal is a managed service for ML Embeddings. | Metal is a managed service for ML Embeddings. ->: Metal | ü¶úÔ∏èüîó Langchain |
2,238 | Metal: Metal is a managed service for ML Embeddings. This notebook shows how to use Metal's retriever. First, you will need to sign up for Metal and get an API key. You can do so here. # !pip install metal_sdk; from metal_sdk.metal import Metal; API_KEY = ""; CLIENT_ID = ""; INDEX_ID = ""; metal = Metal(API_KEY, CLIENT_ID, INDEX_ID). Ingest Documents: You only need to do this if you haven't already set up an index. metal.index({"text": "foo1"}); metal.index({"text": "foo"}) returns {'data': {'id': '642739aa7559b026b4430e42', 'text': 'foo', 'createdAt': '2023-03-31T19:51:06.748Z'}}. Query: Now that our index is set up, we can set up a retriever and start querying it. from langchain.retrievers import MetalRetriever; retriever = MetalRetriever(metal, params={"limit": 2}); retriever.get_relevant_documents("foo1") returns [Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}), Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})] | Metal is a managed service for ML Embeddings.
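The Metal example above as one runnable script (a sketch: the empty credential strings are placeholders for your own Metal API key, client ID, and index ID):

```python
# Minimal sketch of the MetalRetriever example above.
# Requires `pip install metal_sdk langchain`; fill in your own credentials.
from metal_sdk.metal import Metal
from langchain.retrievers import MetalRetriever

API_KEY = ""      # placeholder
CLIENT_ID = ""    # placeholder
INDEX_ID = ""     # placeholder
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)

# Ingest two toy documents (only needed if the index is not set up yet).
metal.index({"text": "foo1"})
metal.index({"text": "foo"})

# Wrap the Metal index in a LangChain retriever and query it.
retriever = MetalRetriever(metal, params={"limit": 2})
print(retriever.get_relevant_documents("foo1"))
```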
2,240 | TF-IDF | 🦜️🔗 Langchain | TF-IDF means term-frequency times inverse document-frequency. | TF-IDF means term-frequency times inverse document-frequency. ->: TF-IDF | 🦜️🔗 Langchain |
2,241 | TF-IDF: TF-IDF means term-frequency times inverse document-frequency. This notebook goes over how to use a retriever that under the hood uses TF-IDF, via the scikit-learn package. For more information on the details of TF-IDF see this blog post. # !pip install scikit-learn; from langchain.retrievers import TFIDFRetriever. Create New Retriever with Texts: retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"]). Create a New Retriever with Documents: You can now create a new retriever with the documents you created. from langchain.schema import Document; retriever = TFIDFRetriever.from_documents([Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar")]). Use Retriever: We can now use the retriever! result = retriever.get_relevant_documents("foo"); result returns [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})]. Save and load: You can easily save and load this retriever, making it handy for local development! retriever.save_local("testing.pkl"); retriever_copy | TF-IDF means term-frequency times inverse document-frequency.
2,242 | = TFIDFRetriever.load_local("testing.pkl"); retriever_copy.get_relevant_documents("foo") returns [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})] | TF-IDF means term-frequency times inverse document-frequency.
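The TF-IDF example above as one runnable script (no API key needed, since TF-IDF is computed locally with scikit-learn):

```python
# Minimal sketch of the TFIDFRetriever example above; runs fully locally.
# Requires `pip install scikit-learn langchain`.
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])

# Query the retriever.
print(retriever.get_relevant_documents("foo"))

# Save the fitted retriever and load it back, handy for local development.
retriever.save_local("testing.pkl")
retriever_copy = TFIDFRetriever.load_local("testing.pkl")
print(retriever_copy.get_relevant_documents("foo"))
```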
2,243 | Cohere Reranker | 🦜️🔗 Langchain | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: Cohere Reranker | 🦜️🔗 Langchain |
2,244 | Cohere Reranker: Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. This notebook shows how to use Cohere's rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever. #!pip install cohere; #!pip install faiss (or, depending on Python version, #!pip install faiss-cpu). # get a new token: https://dashboard.cohere.ai/ import os; import getpass; os.environ["COHERE_API_KEY"] = getpass.getpass("Cohere API Key:"); os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:"). # Helper function for printing docs: def pretty_print_docs(docs): print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)])). Set up the base vector store retriever: Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. from langchain.text_splitter import RecursiveCharacterTextSplitter; from langchain.embeddings import OpenAIEmbeddings; from langchain.document_loaders import TextLoader; from langchain.vectorstores import FAISS; documents = | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
2,245 | langchain.vectorstores import FAISSdocuments = TextLoader("../../modules/state_of_the_union.txt").load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever( search_kwargs={"k": 20})query = "What did the president say about Ketanji Brown Jackson"docs = retriever.get_relevant_documents(query)pretty_print_docs(docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: langchain.vectorstores import FAISSdocuments = TextLoader("../../modules/state_of_the_union.txt").load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever( search_kwargs={"k": 20})query = "What did the president say about Ketanji Brown Jackson"docs = retriever.get_relevant_documents(query)pretty_print_docs(docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. 
---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. |
2,246 | the Border and fix the immigration system. ---------------------------------------------------------------------------------------------------- Document 4: He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. ---------------------------------------------------------------------------------------------------- Document 5: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 6: Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down. Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. America used to have the best roads, bridges, and airports on Earth. Now our infrastructure is ranked 13th in the world. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: the Border and fix the immigration system. ---------------------------------------------------------------------------------------------------- Document 4: He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. ---------------------------------------------------------------------------------------------------- Document 5: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 6: Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. 
Build the economy from the bottom up and the middle out, not from the top down. Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. America used to have the best roads, bridges, and airports on Earth. Now our infrastructure is ranked 13th in the world. |
2,247 | infrastructure is ranked 13th in the world. ---------------------------------------------------------------------------------------------------- Document 7: And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices. ---------------------------------------------------------------------------------------------------- Document 8: For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. ---------------------------------------------------------------------------------------------------- Document 9: All told, we created 369,000 new manufacturing jobs in America just last year. Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. ---------------------------------------------------------------------------------------------------- Document 10: I’m also calling on | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: infrastructure is ranked 13th in the world. ---------------------------------------------------------------------------------------------------- Document 7: And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices. ---------------------------------------------------------------------------------------------------- Document 8: For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. ---------------------------------------------------------------------------------------------------- Document 9: All told, we created 369,000 new manufacturing jobs in America just last year. 
Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. ---------------------------------------------------------------------------------------------------- Document 10: I’m also calling on |
2,248 | Document 10: I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. ---------------------------------------------------------------------------------------------------- Document 11: He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand. ---------------------------------------------------------------------------------------------------- Document 12: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. ---------------------------------------------------------------------------------------------------- Document 13: I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: Document 10: I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. ---------------------------------------------------------------------------------------------------- Document 11: He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand. ---------------------------------------------------------------------------------------------------- Document 12: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. 
And with an unwavering resolve that freedom will always triumph over tyranny. ---------------------------------------------------------------------------------------------------- Document 13: I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. |
2,249 | committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. ---------------------------------------------------------------------------------------------------- Document 14: And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery. ---------------------------------------------------------------------------------------------------- Document 15: Third, support our veterans. Veterans are the best of us. I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. ---------------------------------------------------------------------------------------------------- Document 16: When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this. ---------------------------------------------------------------------------------------------------- | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. ---------------------------------------------------------------------------------------------------- Document 14: And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery. ---------------------------------------------------------------------------------------------------- Document 15: Third, support our veterans. Veterans are the best of us. I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. 
---------------------------------------------------------------------------------------------------- Document 16: When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this. ---------------------------------------------------------------------------------------------------- |
2,250 | Document 17: Now is the hour. Our moment of responsibility. Our test of resolve and conscience, of history itself. It is in this moment that our character is formed. Our purpose is found. Our future is forged. Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life. ---------------------------------------------------------------------------------------------------- Document 18: He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers. ---------------------------------------------------------------------------------------------------- Document 19: I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis. ---------------------------------------------------------------------------------------------------- Document 20: So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. | Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: Document 17: Now is the hour. Our moment of responsibility. Our test of resolve and conscience, of history itself. It is in this moment that our character is formed. Our purpose is found. Our future is forged. Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life. ---------------------------------------------------------------------------------------------------- Document 18: He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers. ---------------------------------------------------------------------------------------------------- Document 19: I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis. 
---------------------------------------------------------------------------------------------------- Document 20: So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the |
2,251 | enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. Doing reranking with CohereRerank: Now let's wrap our base retriever with a ContextualCompressionRetriever. We'll add a CohereRerank, which uses the Cohere rerank endpoint to rerank the returned results. from langchain.llms import OpenAI; from langchain.retrievers import ContextualCompressionRetriever; from langchain.retrievers.document_compressors import CohereRerank; llm = OpenAI(temperature=0); compressor = CohereRerank(); compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever); compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown"); pretty_print_docs(compressed_docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 
---------------------------------------------------------------------------------------------------- Document 2: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in |
2,252 | Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. You can of course use this retriever within a QA pipeline. from langchain.chains import RetrievalQA; chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=compression_retriever); chain({"query": query}) returns {'query': 'What did the president say about Ketanji Brown Jackson', 'result': " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."}
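The whole reranking pipeline above, consolidated into one runnable script (a sketch: it assumes COHERE_API_KEY and OPENAI_API_KEY are set and that the state_of_the_union.txt path from the notebook exists):

```python
# Consolidated sketch of the Cohere rerank example above.
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Base retriever: FAISS over the speech chunks, returning 20 candidates.
documents = TextLoader("../../modules/state_of_the_union.txt").load()
texts = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=100
).split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(
    search_kwargs={"k": 20}
)

# Cohere's rerank endpoint reorders the 20 candidates by relevance.
compression_retriever = ContextualCompressionRetriever(
    base_compressor=CohereRerank(), base_retriever=retriever
)

# The reranking retriever can be used inside a QA chain.
chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0), retriever=compression_retriever
)
print(chain({"query": "What did the president say about Ketanji Brown Jackson"}))
```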
2,253 | LLMs | 🦜️🔗 Langchain | Features (natively supported) | Features (natively supported) ->: LLMs | 🦜️🔗 Langchain |
2,254 | LLMs: Features (natively supported). All LLMs implement the Runnable interface, which comes with default implementations of all methods, i.e. ainvoke, batch, abatch, stream, astream. This gives all LLMs basic support for async, streaming and batch, implemented by default as follows: Async support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread. Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying LLM provider. This obviously doesn't give you token-by-token streaming, which requires native support from the LLM provider, but it ensures that any code expecting an iterator of tokens will work with any of our LLM integrations.
2,255 | Batch support defaults to calling the underlying LLM in parallel for each input, using a thread pool executor (in the sync batch case) or asyncio.gather (in the async batch case). The concurrency can be controlled with the max_concurrency key in RunnableConfig. Each LLM integration can optionally provide native implementations for async, streaming or batch, which, for providers that support them, can be more efficient. The table below shows, for each integration, which features have native support (columns, in order: Invoke, Async invoke, Stream, Async stream, Batch, Async batch).
2,256 | AI21 ✅❌❌❌❌❌; AlephAlpha ✅❌❌❌❌❌; AmazonAPIGateway ✅❌❌❌❌❌; Anthropic ✅✅✅✅❌❌; Anyscale ✅✅✅✅✅✅; Arcee ✅❌❌❌❌❌; Aviary ✅❌❌❌❌❌; AzureMLOnlineEndpoint ✅❌❌❌❌❌; AzureOpenAI ✅✅✅✅✅✅; Banana ✅❌❌❌❌❌; Baseten ✅❌❌❌❌❌; Beam ✅❌❌❌❌❌; Bedrock ✅❌✅❌❌❌; CTransformers ✅✅❌❌❌❌; CTranslate2 ✅❌❌❌✅❌; CerebriumAI ✅❌❌❌❌❌; ChatGLM ✅❌❌❌❌❌; Clarifai ✅❌❌❌❌❌; Cohere ✅✅❌❌❌❌; Databricks ✅❌❌❌❌❌; DeepInfra ✅❌❌❌❌❌; DeepSparse ✅✅✅✅❌❌; EdenAI ✅✅❌❌❌❌; Fireworks ✅✅✅✅❌❌; ForefrontAI ✅❌❌❌❌❌; GPT4All ✅❌❌❌❌❌; GooglePalm ✅❌❌❌✅❌; GooseAI ✅❌❌❌❌❌; GradientLLM ✅✅❌❌✅✅; HuggingFaceEndpoint ✅❌❌❌❌❌; HuggingFaceHub ✅❌❌❌❌❌; HuggingFacePipeline ✅❌❌❌✅❌; HuggingFaceTextGenInference ✅✅✅✅❌❌; HumanInputLLM ✅❌❌❌❌❌; JavelinAIGateway ✅✅❌❌❌❌; KoboldApiLLM ✅❌❌❌❌❌; LlamaCpp ✅❌✅❌❌❌; ManifestWrapper ✅❌❌❌❌❌; Minimax ✅❌❌❌❌❌; MlflowAIGateway ✅❌❌❌❌❌; Modal ✅❌❌❌❌❌; MosaicML ✅❌❌❌❌❌; NIBittensorLLM ✅❌❌❌❌❌; NLPCloud ✅❌❌❌❌❌; Nebula ✅❌❌❌❌❌; OctoAIEndpoint ✅❌❌❌❌❌; Ollama ✅❌❌❌❌❌; OpaquePrompts ✅❌❌❌❌❌; OpenAI ✅✅✅✅✅✅; OpenLLM ✅✅❌❌❌❌; OpenLM ✅✅✅✅✅✅; Petals ✅❌❌❌❌❌; PipelineAI ✅❌❌❌❌❌; Predibase ✅❌❌❌❌❌; PredictionGuard ✅❌❌❌❌❌; PromptLayerOpenAI ✅❌❌❌❌❌; QianfanLLMEndpoint ✅✅✅✅❌❌; RWKV ✅❌❌❌❌❌; Replicate ✅❌✅❌❌❌; SagemakerEndpoint ✅❌❌❌❌❌; SelfHostedHuggingFaceLLM ✅❌❌❌❌❌; SelfHostedPipeline ✅❌❌❌❌❌; StochasticAI ✅❌❌❌❌❌; TextGen ✅❌❌❌❌❌; TitanTakeoff ✅❌✅❌❌❌; Tongyi ✅❌❌❌❌❌; VLLM ✅❌❌❌✅❌; VLLMOpenAI ✅✅✅✅✅✅; VertexAI ✅✅✅❌✅✅; VertexAIModelGarden ✅✅❌❌✅✅; Writer ✅❌❌❌❌❌; Xinference ✅❌❌❌❌❌; YandexGPT ✅❌❌❌❌❌ | Features (natively supported)
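To make the default Runnable behavior concrete, here is a short sketch (assuming OPENAI_API_KEY is set; the prompts are arbitrary examples) of invoke, stream, and batch with max_concurrency:

```python
# Sketch of the Runnable methods that every LLM exposes.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# invoke: one prompt in, one completion out.
print(llm.invoke("Say hello in French."))

# stream: yields chunks as they arrive. OpenAI streams natively (see the
# table above); providers without native support yield one final chunk.
for chunk in llm.stream("Count from 1 to 5."):
    print(chunk, end="", flush=True)

# batch: runs prompts in parallel; max_concurrency caps the parallelism.
results = llm.batch(
    ["Name a color.", "Name an animal.", "Name a city."],
    config={"max_concurrency": 2},
)
print(results)
```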
2,258 | LLMs: Features (natively supported). 📄️ AI21: AI21 Studio provides API access to Jurassic-2 large language models. 📄️ Aleph Alpha: The Luminous series is a family of large language models. 📄️ Amazon API Gateway: Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. 📄️ Anyscale: Anyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications. 📄️ Arcee: This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs). 📄️ Azure ML: Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. 📄️ Azure OpenAI: This notebook goes over how to use Langchain with Azure OpenAI. 📄️ Baidu Qianfan: Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only models, including Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also various AI development tools and a complete development environment, making it easy for customers to use and develop large-model applications. 📄️ Banana: Banana is focused on building the machine learning infrastructure.
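As a concrete example of one entry above, a hedged sketch of the Azure OpenAI setup (every endpoint, key, deployment, and model value here is a placeholder for your own Azure resources):

```python
# Sketch: using Azure OpenAI via LangChain. All values below are placeholders.
import os
from langchain.llms import AzureOpenAI

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"  # example API version
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "<your-azure-openai-key>"

# deployment_name must match a deployment you created in the Azure portal.
llm = AzureOpenAI(deployment_name="<your-deployment>", model_name="<model>")
print(llm("Tell me a joke"))
```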
2,259 | 📄️ Baseten: Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently. 📄️ Beam: Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of a Beam Client ID and Client Secret. By calling the wrapper, an instance of the model is created and run, returning text relating to the prompt. Additional calls can then be made by directly calling the Beam API. 📄️ Bedrock: Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. 📄️ Bittensor: Bittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge. 📄️ CerebriumAI: Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models. 📄️ ChatGLM: ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). 📄️ Clarifai: Clarifai is an AI Platform that provides the full AI lifecycle, ranging from data exploration, data labeling, and model training to evaluation and inference. 📄️ Cohere: Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. 📄️ C Transformers: The C Transformers library provides Python bindings for GGML models. 📄️ CTranslate2: CTranslate2 is a C++ and Python library for efficient inference with Transformer models. 📄️ Databricks: The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
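Similarly, a sketch for the Bedrock entry above (assuming AWS credentials with Bedrock access are configured locally; the profile name and model_id are example values):

```python
# Sketch: calling an Amazon Bedrock model through LangChain.
# Assumes AWS credentials with Bedrock access; values are examples.
from langchain.llms import Bedrock

llm = Bedrock(
    credentials_profile_name="default",  # your AWS profile
    model_id="anthropic.claude-v2",      # an example Bedrock model
)
print(llm("Tell me a fun fact about parrots."))
```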
2,260 | DeepInfraDeepInfra provides several LLMs.📄️ DeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.📄️ Eden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one, comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)📄️ FireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.📄️ ForefrontAIThe Forefront platform gives you the ability to fine-tune and use open-source large language models.📄️ GCP Vertex AINote: this is separate from the Google PaLM integration; it exposes the Vertex AI PaLM API on Google Cloud.📄️ GooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.📄️ GPT4AllGitHub:nomic-ai/gpt4all is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue.📄️ GradientGradient allows you to fine-tune and get completions on LLMs with a simple web API.📄️ Hugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.📄️ Hugging Face Local PipelinesHugging Face models can be run locally through the HuggingFacePipeline class.📄️ Huggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power the LLM api-inference widgets.📄️ Javelin AI Gateway TutorialThis Jupyter Notebook will explore how to interact with the Javelin | Features (natively supported) | Features (natively supported) ->: DeepInfraDeepInfra provides several LLMs.📄️ DeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.📄️ Eden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one, comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)📄️ FireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.📄️ ForefrontAIThe Forefront platform gives you the ability to fine-tune and use open-source large language models.📄️ GCP Vertex AINote: this is separate from the Google PaLM integration; it exposes the Vertex AI PaLM API on Google Cloud.📄️ GooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API.
GooseAI provides access to these models.📄️ GPT4AllGitHub:nomic-ai/gpt4all is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue.📄️ GradientGradient allows you to fine-tune and get completions on LLMs with a simple web API.📄️ Hugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.📄️ Hugging Face Local PipelinesHugging Face models can be run locally through the HuggingFacePipeline class.📄️ Huggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power the LLM api-inference widgets.📄️ Javelin AI Gateway TutorialThis Jupyter Notebook will explore how to interact with the Javelin
2,261 | will explore how to interact with the Javelin AI Gateway using the Python SDK.📄️ JSONFormerJSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.📄️ KoboldAI APIKoboldAI is "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that can be used in LangChain.📄️ Llama.cppllama-cpp-python is a Python binding for llama.cpp.📄️ LLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.📄️ ManifestThis notebook goes over how to use Manifest and LangChain.📄️ MinimaxMinimax is a Chinese startup that provides natural language processing models for companies and individuals.📄️ ModalThe Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.📄️ MosaicMLMosaicML offers a managed inference service. You can either use a variety of open-source models, or deploy your own.📄️ NLP CloudThe NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.📄️ OctoAIOctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.📄️ OllamaOllama allows you to run open-source large language models, such as Llama 2, | Features (natively supported) | Features (natively supported) ->: will explore how to interact with the Javelin AI Gateway using the Python SDK.📄️ JSONFormerJSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.📄️ KoboldAI APIKoboldAI is "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that can be used in LangChain.📄️ Llama.cppllama-cpp-python is a Python binding for llama.cpp.📄️ LLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.📄️ ManifestThis notebook goes over how to use Manifest and LangChain.📄️ MinimaxMinimax is a Chinese startup that provides natural language processing models for companies and individuals.📄️ ModalThe Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.📄️ MosaicMLMosaicML offers a managed inference service.
You can either use a variety of open-source models, or deploy your own.📄️ NLP CloudThe NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.📄️ OctoAIOctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.📄️ OllamaOllama allows you to run open-source large language models, such as Llama 2,
2,262 | large language models, such as Llama 2, locally.📄️ OpaquePromptsOpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.📄️ OpenAIOpenAI offers a spectrum of models with different levels of power suitable for different tasks.📄️ OpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.📄️ OpenLMOpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.📄️ PetalsPetals runs 100B+ language models at home, BitTorrent-style.📄️ PipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLMs.📄️ PredibasePredibase allows you to train, fine-tune, and deploy any ML model, from linear regression to large language models.📄️ Prediction GuardBasic LLM usage📄️ PromptLayer OpenAIPromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI’s python library.📄️ RELLMRELLM is a library that wraps local Hugging Face pipeline models for structured decoding.📄️ ReplicateReplicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.📄️ | Features (natively supported) | Features (natively supported) ->: large language models, such as Llama 2, locally.📄️ OpaquePromptsOpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.📄️ OpenAIOpenAI offers a spectrum of models with different levels of power suitable for different tasks.📄️ OpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.📄️ OpenLMOpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.📄️ PetalsPetals runs 100B+ language models at home, BitTorrent-style.📄️ PipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLMs.📄️ PredibasePredibase allows you to train, fine-tune, and deploy any ML model, from linear regression to large language models.📄️ Prediction GuardBasic LLM usage📄️ PromptLayer OpenAIPromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering.
PromptLayer acts as middleware between your code and OpenAI’s python library.📄️ RELLMRELLM is a library that wraps local Hugging Face pipeline models for structured decoding.📄️ ReplicateReplicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.📄️
2,263 | makes it easy to deploy them at scale.📄️ RunhouseRunhouse allows remote compute and data across environments and users. See the Runhouse docs.📄️ SageMakerEndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.📄️ StochasticAIStochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model, from uploading and versioning the model, through training, compression and acceleration, to putting it into production.📄️ Nebula (Symbl.ai)Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.📄️ TextGenGitHub:oobabooga/text-generation-webui is a gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.📄️ Titan TakeoffTitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.📄️ Together AIThe Together API makes it easy to fine-tune or run leading open-source models with a couple of lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more at https://together.ai📄️ Tongyi QwenTongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.📄️ vLLMvLLM is a fast and easy-to-use library for LLM inference and serving, offering:📄️ WriterWriter | Features (natively supported) | Features (natively supported) ->: makes it easy to deploy them at scale.📄️ RunhouseRunhouse allows remote compute and data across environments and users. See the Runhouse docs.📄️ SageMakerEndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.📄️ StochasticAIStochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model, from uploading and versioning the model, through training, compression and acceleration, to putting it into production.📄️ Nebula (Symbl.ai)Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.📄️ TextGenGitHub:oobabooga/text-generation-webui is a gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.📄️ Titan TakeoffTitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.📄️ Together AIThe Together API makes it easy to fine-tune or run leading open-source models with a couple of lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more at https://together.ai📄️ Tongyi QwenTongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy.
It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.📄️ vLLMvLLM is a fast and easy-to-use library for LLM inference and serving, offering:📄️ WriterWriter
2,264 | and serving, offering:📄️ WriterWriter is a platform to generate different language content.📄️ Xorbits Inference (Xinference)Xinference is a powerful and versatile library designed to serve LLMs,📄️ YandexGPTThis notebook goes over how to use LangChain with YandexGPT. | Features (natively supported) | Features (natively supported) ->: and serving, offering:📄️ WriterWriter is a platform to generate different language content.📄️ Xorbits Inference (Xinference)Xinference is a powerful and versatile library designed to serve LLMs,📄️ YandexGPTThis notebook goes over how to use LangChain with YandexGPT.
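All of the providers indexed above implement the same base LLM interface, so code written against one is portable to another. A minimal sketch of that shared interface, using OpenAI purely as a stand-in (any class from langchain.llms with valid credentials works the same way):

```python
# Sketch of the common LLM interface shared by the integrations listed above.
# The provider class and prompts are illustrative; swap in any provider you
# have credentials for. Assumes OPENAI_API_KEY is set in the environment.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.2)

# Every LLM integration supports single-prompt calls...
print(llm("Tell me a joke about vector databases."))

# ...and batched generation, with usage metadata where the provider reports it.
result = llm.generate(["Say hello in French.", "Say hello in Spanish."])
for generation in result.generations:
    print(generation[0].text)
```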
2,265 | Baichuan Chat | 🦜️🔗 Langchain | Baichuan chat models API by Baichuan Intelligent Technology. For more information, see https://platform.baichuan-ai.com/docs/api | Baichuan chat models API by Baichuan Intelligent Technology. For more information, see https://platform.baichuan-ai.com/docs/api ->: Baichuan Chat | 🦜️🔗 Langchain
2,266 | Baichuan ChatBaichuan chat models API by Baichuan Intelligent Technology.
For more information, see https://platform.baichuan-ai.com/docs/apifrom langchain.chat_models import ChatBaichuanfrom langchain.schema import HumanMessagechat = ChatBaichuan( baichuan_api_key='YOUR_API_KEY', baichuan_secret_key='YOUR_SECRET_KEY')or you can set api_key and secret_key in your environment variablesexport BAICHUAN_API_KEY=YOUR_API_KEYexport BAICHUAN_SECRET_KEY=YOUR_SECRET_KEYchat([ HumanMessage(content='我日薪8块钱,请问在闰年的二月,我月薪多少')]) AIMessage(content='首先,我们需要确定闰年的二月有多少天。闰年的二月有29天。\n\n然后,我们可以计算你的月薪:\n\n日薪 = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。')For ChatBaichuan with Streamingchat = ChatBaichuan( baichuan_api_key='YOUR_API_KEY', baichuan_secret_key='YOUR_SECRET_KEY', streaming=True)chat([ HumanMessage(content='我日薪8块钱,请问在闰年的二月,我月薪多少')]) AIMessageChunk(content='首先,我们需要确定闰年的二月有多少天。闰年的二月有29天。\n\n然后,我们可以计算你的月薪:\n\n日薪 = 月薪 /
2,267 | = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。') | Baichuan chat models API by Baichuan Intelligent Technology. For more information, see https://platform.baichuan-ai.com/docs/api | Baichuan chat models API by Baichuan Intelligent Technology. For more information, see https://platform.baichuan-ai.com/docs/api ->: = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。')
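The example above asks, in Chinese, "my daily wage is 8 yuan; what is my monthly pay in February of a leap year?", and the model answers 8 × 29 = 232 yuan. As the page notes, the two keys can also come from environment variables instead of constructor arguments; a short sketch of that pattern (key values are placeholders):

```python
import os

from langchain.chat_models import ChatBaichuan
from langchain.schema import HumanMessage

# Equivalent to the `export` lines above: supply both keys via the
# environment so they never appear in code.
os.environ["BAICHUAN_API_KEY"] = "YOUR_API_KEY"
os.environ["BAICHUAN_SECRET_KEY"] = "YOUR_SECRET_KEY"

chat = ChatBaichuan()  # keys are picked up from the environment

# Same prompt as the docs example:
# "My daily wage is 8 yuan; what is my monthly pay in a leap-year February?"
response = chat([HumanMessage(content="我日薪8块钱,请问在闰年的二月,我月薪多少")])
print(response.content)
```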
2,268 | Bedrock Chat | 🦜️🔗 Langchain
Bedrock ChatAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.%pip install boto3from langchain.chat_models import BedrockChatfrom langchain.schema import HumanMessagechat = BedrockChat(model_id="anthropic.claude-v2", model_kwargs={"temperature":0.1})messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" Voici la traduction en français : J'adore programmer.", additional_kwargs={}, example=False)For BedrockChat with Streamingfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerchat = BedrockChat( model_id="anthropic.claude-v2", streaming=True, callbacks=[StreamingStdOutCallbackHandler()], model_kwargs={"temperature": 0.1},)messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) | Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case | Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case ->: Bedrock Chat | 🦜️🔗 Langchain
Bedrock ChatAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.%pip install boto3from langchain.chat_models import BedrockChatfrom langchain.schema import HumanMessagechat = BedrockChat(model_id="anthropic.claude-v2", model_kwargs={"temperature":0.1})messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" Voici la traduction en français : J'adore programmer.", additional_kwargs={}, example=False)For BedrockChat with Streamingfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerchat = BedrockChat( model_id="anthropic.claude-v2", streaming=True, callbacks=[StreamingStdOutCallbackHandler()], model_kwargs={"temperature": 0.1},)messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages)
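The examples above rely on the default AWS credential chain. A sketch of pointing BedrockChat at an explicit profile and region, assuming it accepts the same credentials_profile_name/region_name kwargs that BedrockEmbeddings demonstrably does (see the Bedrock embeddings page below):

```python
from langchain.chat_models import BedrockChat
from langchain.schema import HumanMessage

# Assumption: BedrockChat shares the credential kwargs used by the other
# Bedrock integrations in these docs. Profile and region are illustrative.
chat = BedrockChat(
    model_id="anthropic.claude-v2",
    credentials_profile_name="bedrock-admin",  # a profile from ~/.aws/credentials
    region_name="us-east-1",                   # a region where Bedrock is enabled
    model_kwargs={"temperature": 0.1},
)

print(chat([HumanMessage(content="Translate to French: I love programming.")]).content)
```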
2,269 | Azure | 🦜️🔗 Langchain | This notebook goes over how to connect to an Azure-hosted OpenAI endpoint | This notebook goes over how to connect to an Azure-hosted OpenAI endpoint ->: Azure | 🦜️🔗 Langchain
2,270 | AzureThis notebook goes over how to connect to an Azure-hosted OpenAI endpointfrom langchain.chat_models import AzureChatOpenAIfrom langchain.schema import HumanMessageBASE_URL = "https://${TODO}.openai.azure.com"API_KEY = "..."DEPLOYMENT_NAME = "chat"model = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure",)model( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ]) AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={})Model VersionAzure OpenAI responses contain the model property, which is the name of the model used to generate the response.
However, unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as a result can lead to e.g. a wrong total cost calculation with the OpenAICallbackHandler.To solve this problem, you can pass the model_version parameter to the AzureChatOpenAI class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model.from langchain.callbacks import get_openai_callbackBASE_URL =
2,271 | import get_openai_callbackBASE_URL = "https://{endpoint}.openai.azure.com"API_KEY = "..."DEPLOYMENT_NAME = "gpt-35-turbo" # in Azure, this deployment has version 0613 - input and output tokens are counted separatelymodel = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure",)with get_openai_callback() as cb: model( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}") # without specifying the model version, a flat rate of 0.002 USD per 1k input and output tokens is used Total Cost (USD): $0.000054We can provide the model version to the AzureChatOpenAI constructor. It will get appended to the model name returned by Azure OpenAI, and the cost will be counted correctly.model0613 = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure", model_version="0613")with get_openai_callback() as cb: model0613( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}") Total Cost (USD): $0.000044 | This notebook goes over how to connect to an Azure-hosted OpenAI endpoint | This notebook goes over how to connect to an Azure-hosted OpenAI endpoint ->: import get_openai_callbackBASE_URL = "https://{endpoint}.openai.azure.com"API_KEY = "..."DEPLOYMENT_NAME = "gpt-35-turbo"model = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure",)with get_openai_callback() as cb: model( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}") Total Cost (USD): $0.000054We can provide the model version to the AzureChatOpenAI constructor. It will get appended to the model name returned by Azure OpenAI, and the cost will be counted correctly.model0613 = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure", model_version="0613")with get_openai_callback() as cb: model0613( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}") Total Cost (USD): $0.000044
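The constructor arguments shown above can also be supplied through the standard OpenAI environment variables. A sketch under the assumption that this generation of AzureChatOpenAI honors those variables (all values are placeholders):

```python
import os

from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

# Assumption: the integration reads the standard OpenAI SDK variables,
# so no endpoint/key arguments are needed in code.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "..."
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

model = AzureChatOpenAI(deployment_name="chat")
print(model([HumanMessage(content="Say hello.")]).content)
```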
2,272 | AWS | 🦜️🔗 Langchain | All functionality related to Amazon AWS platform | All functionality related to Amazon AWS platform ->: AWS | 🦜️🔗 Langchain
2,273 | AWSAll functionality related to the Amazon AWS platformLLMsBedrockSee a usage example.from langchain.llms.bedrock import BedrockAmazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs.
You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.See a usage example.from langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"# These are sample parameters for Falcon 40B Instruct deployed from Amazon SageMaker JumpStartmodel_kwargs = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}llm = AmazonAPIGateway(api_url=api_url, model_kwargs=model_kwargs)SageMaker EndpointAmazon SageMaker is a system
2,274 | EndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.We use SageMaker to host our model and expose it as the SageMaker Endpoint.See a usage example.from langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerText Embedding ModelsBedrockSee a usage example.from langchain.embeddings import BedrockEmbeddingsSageMaker EndpointSee a usage example.from langchain.embeddings import SagemakerEndpointEmbeddingsfrom langchain.llms.sagemaker_endpoint import ContentHandlerBaseDocument loadersAWS S3 Directory and FileAmazon Simple Storage Service (Amazon S3) is an object storage service.
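The LLMContentHandler imported above is the piece that adapts a prompt to a specific endpoint's request and response format. A hedged sketch of one possible handler for a JSON-in/JSON-out text-generation endpoint; the endpoint name and payload shape are illustrative assumptions, not a fixed contract, so match them to what your deployed model expects:

```python
import json

from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler


class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the prompt plus generation parameters for the endpoint.
        # The {"inputs": ..., "parameters": ...} shape is an assumption.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Parse the endpoint's JSON response back into plain text.
        response = json.loads(output.read().decode("utf-8"))
        return response[0]["generated_text"]


llm = SagemakerEndpoint(
    endpoint_name="my-falcon-endpoint",  # hypothetical endpoint name
    region_name="us-east-1",
    content_handler=ContentHandler(),
    model_kwargs={"temperature": 0.2, "max_new_tokens": 100},
)
```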
2,275 | AWS S3 Directory
AWS S3 BucketsSee a usage example for S3DirectoryLoader.See a usage example for S3FileLoader.from langchain.document_loaders import S3DirectoryLoader, S3FileLoaderMemoryAWS DynamoDBAWS DynamoDB
is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.We have to configure the AWS CLI. We need to install the boto3 library.pip install boto3See a usage example.from langchain.memory import DynamoDBChatMessageHistory | All functionality related to Amazon AWS platform | All functionality related to Amazon AWS platform ->: AWS S3 Directory
AWS S3 BucketsSee a usage example for S3DirectoryLoader.See a usage example for S3FileLoader.from langchain.document_loaders import S3DirectoryLoader, S3FileLoaderMemoryAWS DynamoDBAWS DynamoDB
is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.We have to configure the AWS CLI. We need to install the boto3 library.pip install boto3See a usage example.from langchain.memory import DynamoDBChatMessageHistory
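A minimal sketch of the DynamoDB-backed message history in use, assuming a table named SessionTable with a SessionId partition key already exists (table and session names are illustrative):

```python
from langchain.memory import DynamoDBChatMessageHistory

# Each session_id maps to one persisted conversation in the table.
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")

history.add_user_message("Hi! I'd like to book a flight.")
history.add_ai_message("Sure, where would you like to fly to?")

# Messages are persisted in DynamoDB and can be reloaded across processes.
print(history.messages)
```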
2,276 | Bedrock | 🦜️🔗 Langchain
BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.%pip install boto3from langchain.embeddings import BedrockEmbeddingsembeddings = BedrockEmbeddings( credentials_profile_name="bedrock-admin", region_name="us-east-1")embeddings.embed_query("This is a content of the document")embeddings.embed_documents(["This is a content of the document", "This is another document"])# async embed queryawait embeddings.aembed_query("This is a content of the document")# async embed documentsawait embeddings.aembed_documents(["This is a content of the document", "This is another document"]) | Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. | Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. ->: Bedrock | 🦜️🔗 Langchain
BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.%pip install boto3from langchain.embeddings import BedrockEmbeddingsembeddings = BedrockEmbeddings( credentials_profile_name="bedrock-admin", region_name="us-east-1")embeddings.embed_query("This is a content of the document")embeddings.embed_documents(["This is a content of the document", "This is another document"])# async embed queryawait embeddings.aembed_query("This is a content of the document")# async embed documentsawait embeddings.aembed_documents(["This is a content of the document", "This is another document"])
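Because BedrockEmbeddings implements the standard embeddings interface, it can back any LangChain vector store. A sketch using FAISS as a convenient local store (assumes faiss-cpu is installed; the texts are illustrative):

```python
from langchain.embeddings import BedrockEmbeddings
from langchain.vectorstores import FAISS

embeddings = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin", region_name="us-east-1"
)

texts = [
    "Amazon Bedrock exposes foundation models through an API.",
    "FAISS performs fast approximate nearest-neighbor search.",
]
# Embed and index the texts in one step.
vectorstore = FAISS.from_texts(texts, embeddings)

# The query is embedded with the same model, then matched against the index.
print(vectorstore.similarity_search("Which service provides foundation models?", k=1))
```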
2,277 | LocalAI | 🦜️🔗 Langchain | Let's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https://localai.io/features/embeddings/index.html. | Let's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https://localai.io/features/embeddings/index.html. ->: LocalAI | 🦜️🔗 Langchain
2,278 | LocalAILet's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models.
See the documentation at https://localai.io/basics/getting_started/index.html and https://localai.io/features/embeddings/index.html.from langchain.embeddings import LocalAIEmbeddingsembeddings = LocalAIEmbeddings(openai_api_base="http://localhost:8080", model="embedding-model-name")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])Let's load the LocalAI Embedding class with first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: these are not recommended models - see herefrom langchain.embeddings import LocalAIEmbeddingsembeddings = LocalAIEmbeddings(openai_api_base="http://localhost:8080", model="embedding-model-name")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass throughos.environ["OPENAI_PROXY"] =
2,279 | to pass throughos.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080" | Let's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https://localai.io/features/embeddings/index.html. | Let's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https://localai.io/features/embeddings/index.html. ->: to pass throughos.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
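embed_query and embed_documents return plain lists of floats, so similarity can be computed directly. A sketch that scores documents against a query with cosine similarity via numpy (the endpoint and model name are the placeholders from above; documents are illustrative):

```python
import numpy as np

from langchain.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)

docs = ["The cat sat on the mat.", "GPUs accelerate matrix multiplication."]
doc_vectors = np.array(embeddings.embed_documents(docs))
query_vector = np.array(embeddings.embed_query("What hardware speeds up linear algebra?"))

# Cosine similarity = dot product divided by the product of the norms.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
for doc, score in zip(docs, scores):
    print(f"{score:.3f}  {doc}")
```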
2,280 | LLMRails | 🦜️🔗 Langchain
LLMRailsLet's load the LLMRails Embeddings class.To use LLMRails embeddings you need to pass an API key by argument or set it in the environment with the LLM_RAILS_API_KEY key.
To get an API key you need to sign up at https://console.llmrails.com/signup, then go to https://console.llmrails.com/api-keys and copy the key from there after creating one in the platform.from langchain.embeddings import LLMRailsEmbeddingsembeddings = LLMRailsEmbeddings(model='embedding-english-v1') # or embedding-multi-v1text = "This is a test document."To generate embeddings, you can either query an individual text, or you can query a list of texts.query_result = embeddings.embed_query(text)query_result[:5] [-0.09996652603149414, 0.015568195842206478, 0.17670190334320068, 0.16521021723747253, 0.21193109452724457]doc_result = embeddings.embed_documents([text])doc_result[0][:5] [-0.04242777079343796, 0.016536075621843338, 0.10052520781755447, 0.18272875249385834, 0.2079043835401535] | Let's load the LLMRails Embeddings class. | Let's load the LLMRails Embeddings class. ->: LLMRails | 🦜️🔗 Langchain
LLMRailsLet's load the LLMRails Embeddings class.To use LLMRails embeddings you need to pass an API key by argument or set it in the environment with the LLM_RAILS_API_KEY key.
To get an API key you need to sign up at https://console.llmrails.com/signup, then go to https://console.llmrails.com/api-keys and copy the key from there after creating one in the platform.from langchain.embeddings import LLMRailsEmbeddingsembeddings = LLMRailsEmbeddings(model='embedding-english-v1') # or embedding-multi-v1text = "This is a test document."To generate embeddings, you can either query an individual text, or you can query a list of texts.query_result = embeddings.embed_query(text)query_result[:5] [-0.09996652603149414, 0.015568195842206478, 0.17670190334320068, 0.16521021723747253, 0.21193109452724457]doc_result = embeddings.embed_documents([text])doc_result[0][:5] [-0.04242777079343796, 0.016536075621843338, 0.10052520781755447, 0.18272875249385834, 0.2079043835401535]
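Building on the two calls above, a sketch of a tiny nearest-document lookup: embed a handful of documents once, then pick the best match for a query by dot product. The example documents are illustrative, and the key is assumed to be set in LLM_RAILS_API_KEY as described above:

```python
import numpy as np

from langchain.embeddings import LLMRailsEmbeddings

# Assumes LLM_RAILS_API_KEY is set in the environment.
embeddings = LLMRailsEmbeddings(model="embedding-english-v1")

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first day of each month.",
]
doc_matrix = np.array(embeddings.embed_documents(docs))
query_vec = np.array(embeddings.embed_query("How do I change my password?"))

# Pick the document whose embedding has the highest dot product with the query.
best = int(np.argmax(doc_matrix @ query_vec))
print(docs[best])
```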
2,281 | Elasticsearch | 🦜️🔗 Langchain | Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch | Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch ->: Elasticsearch | 🦜️🔗 Langchain
2,282 | ElasticsearchWalkthrough of how to generate embeddings using a hosted embedding model in ElasticsearchThe easiest way to instantiate the ElasticsearchEmbeddings class is eitherusing the from_credentials constructor if you are using Elastic Cloudor using the from_es_connection constructor with any Elasticsearch clusterpip -q install elasticsearch langchainfrom elasticsearch import Elasticsearchfrom langchain.embeddings.elasticsearch import ElasticsearchEmbeddings# Define the model IDmodel_id = "your_model_id"Testing with from_credentialsThis requires an Elastic Cloud cloud_id# Instantiate ElasticsearchEmbeddings using credentialsembeddings =
ElasticsearchEmbeddings.from_credentials( model_id, es_cloud_id="your_cloud_id", es_user="your_user", es_password="your_password",)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding =
2,283 | = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")Testing with Existing Elasticsearch client connection​This can be used with any Elasticsearch deployment# Create Elasticsearch connectiones_connection = Elasticsearch( hosts=["https://es_cluster_url:port"], basic_auth=("user", "password"))# Instantiate ElasticsearchEmbeddings using es_connectionembeddings = ElasticsearchEmbeddings.from_es_connection( model_id, es_connection,)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")PreviousEDEN AINextEmbaasTesting with from_credentialsTesting with Existing Elasticsearch client connectionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch | Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch ->: = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")Testing with Existing Elasticsearch client connection​This can be used with any Elasticsearch deployment# Create Elasticsearch connectiones_connection = Elasticsearch( hosts=["https://es_cluster_url:port"], basic_auth=("user", "password"))# Instantiate ElasticsearchEmbeddings using es_connectionembeddings = ElasticsearchEmbeddings.from_es_connection( model_id, es_connection,)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")PreviousEDEN AINextEmbaasTesting with from_credentialsTesting with Existing Elasticsearch client connectionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
2,284 | Aleph Alpha

There are two possible ways to use Aleph Alpha's semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.

Asymmetric

from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding

document = "This is the content of the document"
query = "What is the content of the document?"
embeddings = AlephAlphaAsymmetricSemanticEmbedding(normalize=True, compress_to_size=128)
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)

Symmetric

from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding

text = "This is a test text"
embeddings = AlephAlphaSymmetricSemanticEmbedding(normalize=True, compress_to_size=128)
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text) | There are two possible ways to use Aleph Alpha's semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.
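Because the constructors above pass normalize=True, the returned vectors should be unit-length, so a plain dot product already yields cosine similarity. The sketch below is an editor's illustration, not part of the original page; it assumes numpy plus the doc_result and query_result variables from the asymmetric example.

# Editor's sketch: compare the asymmetric document and query embeddings.
# With normalize=True the vectors are unit-length, so the dot product
# equals cosine similarity.
import numpy as np

score = np.dot(np.array(doc_result[0]), np.array(query_result))
print(f"Similarity between document and query: {score:.3f}")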
2,285 | DashScope

Let's load the DashScope Embedding class.

from langchain.embeddings import DashScopeEmbeddings

embeddings = DashScopeEmbeddings(
    model="text-embedding-v1", dashscope_api_key="your-dashscope-api-key"
)

text = "This is a test document."
query_result = embeddings.embed_query(text)
print(query_result)

doc_results = embeddings.embed_documents(["foo"])
print(doc_results) | Let's load the DashScope Embedding class.
2,286 | Gradient | 🦜️🔗 Langchain | Gradient allows you to create embeddings, as well as fine-tune and get completions on LLMs, with a simple web API.
2,287 | Gradient

Gradient allows you to create embeddings, as well as fine-tune and get completions on LLMs, with a simple web API. This notebook goes over how to use LangChain with embeddings from Gradient.

Imports

from langchain.embeddings import GradientEmbeddings

Set the Environment API Key

Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models.

from getpass import getpass
import os

if not os.environ.get("GRADIENT_ACCESS_TOKEN", None):
    # Access token under https://auth.gradient.ai/select-workspace
    os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:")
if not os.environ.get("GRADIENT_WORKSPACE_ID", None):
    # `ID` listed in `$ gradient workspace list`
    # also displayed after login at https://auth.gradient.ai/select-workspace
    os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:")

Optional: Validate your environment variables GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID to get currently deployed models, using the gradientai Python package.

pip install gradientai

Create the Gradient instance

documents = [
    "Pizza is a dish.",
    "Paris is the capital of France",
    "numpy is a lib for linear algebra",
]
query = "Where is Paris?" | Gradient allows you to create embeddings, as well as fine-tune and get completions on LLMs, with a simple web API.
2,288 | embeddings = GradientEmbeddings(model="bge-large")

documents_embedded = embeddings.embed_documents(documents)
query_result = embeddings.embed_query(query)

# (demo) compute similarity
import numpy as np

scores = np.array(documents_embedded) @ np.array(query_result).T
dict(zip(documents, scores)) | Gradient allows you to create embeddings, as well as fine-tune and get completions on LLMs, with a simple web API.
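The demo above ranks documents by raw dot product, which tracks cosine similarity only if the embeddings are unit-length; the page does not say whether bge-large vectors come back normalized. A hedged variant that normalizes explicitly (editor's addition, assuming the same variables as above):

# Editor's sketch: cosine similarity instead of raw dot products, assuming
# the `documents`, `documents_embedded`, and `query_result` variables above.
import numpy as np

doc_matrix = np.array(documents_embedded)
query_vec = np.array(query_result)
cosine_scores = (doc_matrix @ query_vec) / (
    np.linalg.norm(doc_matrix, axis=1) * np.linalg.norm(query_vec)
)
print(dict(zip(documents, cosine_scores)))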
2,289 | Jina

Let's load the Jina Embedding class.

from langchain.embeddings import JinaEmbeddings

jina_auth_token = "your_jina_auth_token"  # placeholder; use your own token
embeddings = JinaEmbeddings(
    jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai"
)

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])

In the above example, ViT-B-32::openai, OpenAI's pretrained ViT-B-32 model, is used. For a full list of models, see here. | Let's load the Jina Embedding class.
2,290 | MosaicML

MosaicML offers a managed inference service. You can either use a variety of open-source models, or deploy your own. This example goes over how to use LangChain to interact with MosaicML Inference for text embedding.

# sign up for an account: https://forms.mosaicml.com/demo?utm_source=langchain
from getpass import getpass

MOSAICML_API_TOKEN = getpass()

import os

os.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKEN

from langchain.embeddings import MosaicMLInstructorEmbeddings

embeddings = MosaicMLInstructorEmbeddings(
    query_instruction="Represent the query for retrieval: "
)

query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)

document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])

import numpy as np

query_numpy = np.array(query_result)
document_numpy = np.array(document_result[0])
similarity = np.dot(query_numpy, document_numpy) / (
    np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)
)
print(f"Cosine similarity between document and query: {similarity}") | MosaicML offers a managed inference service. You can either use a variety of open-source models, or deploy your own.
2,291 | AzureOpenAI

Let's load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.

# set the environment variables needed for the openai package to know to reach out to Azure
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name")

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text]) | Let's load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.
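As a quick sanity check (an editor's sketch, not part of the original page): the query and document paths above embed the same text against the same deployment, so their vectors should be near-identical. Assuming numpy and the variables from the snippet above:

# Editor's sketch: verify that query and document embeddings of the same
# text agree. Assumes numpy plus `query_result`/`doc_result` from above.
import numpy as np

q = np.array(query_result)
d = np.array(doc_result[0])
cosine = np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))
print(f"Cosine similarity (same text): {cosine:.4f}")  # expect ~1.0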
2,292 | Google Vertex AI PaLM | 🦜️🔗 Langchain | Vertex AI PaLM API is a service on Google Cloud exposing the embedding models.
2,293 | Google Vertex AI PaLM

Vertex AI PaLM API is a service on Google Cloud exposing the embedding models. Note: This integration is separate from the Google PaLM integration.

By default, Google Cloud does not use Customer Data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in Google's Customer Data Processing Addendum (CDPA).

To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:

- Have credentials configured for your environment (gcloud, workload identity, etc...)
- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable

This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see:

- https://cloud.google.com/docs/authentication/application-default-credentials#GAC
- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth

#!pip install google-cloud-aiplatform

from langchain.embeddings import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()
text = "This is a test document." | Vertex AI PaLM API is a service on Google Cloud exposing the embedding models.
2,294 | = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousFake EmbeddingsNextGPT4AllCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Vertex AI PaLM API is a service on Google Cloud exposing the embedding models. | Vertex AI PaLM API is a service on Google Cloud exposing the embedding models. ->: = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousFake EmbeddingsNextGPT4AllCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
2,295 | NLP Cloud

NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.

The embeddings endpoint offers the following model:

- paraphrase-multilingual-mpnet-base-v2: Paraphrase Multilingual MPNet Base V2 is a very fast model based on Sentence Transformers that is perfectly suited for embeddings extraction in more than 50 languages (see the full list here).

pip install nlpcloud

from langchain.embeddings import NLPCloudEmbeddings
import os

os.environ["NLPCLOUD_API_KEY"] = "xxx"
nlpcloud_embd = NLPCloudEmbeddings()

text = "This is a test document."
query_result = nlpcloud_embd.embed_query(text)
doc_result = nlpcloud_embd.embed_documents([text]) | NLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.
2,296 | Embaas | 🦜️🔗 Langchain | embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.
2,297 | Embaas

embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models. In this tutorial, we will show you how to use the embaas Embeddings API to generate embeddings for a given text.

Prerequisites

Create your free embaas account at https://embaas.io/register and generate an API key.

import os

# Set API key
embaas_api_key = "YOUR_API_KEY"
# or set environment variable
os.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"

from langchain.embeddings import EmbaasEmbeddings

embeddings = EmbaasEmbeddings()

# Create embeddings for a single document
doc_text = "This is a test document."
doc_text_embedding = embeddings.embed_query(doc_text)

# Print created embedding
print(doc_text_embedding)

# Create embeddings for multiple documents
doc_texts = ["This is a test document.", "This is another test document."]
doc_texts_embeddings = embeddings.embed_documents(doc_texts)

# Print created embeddings
for i, doc_text_embedding in enumerate(doc_texts_embeddings):
    print(f"Embedding for document {i + 1}: {doc_text_embedding}") | embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.
2,298 | # Using a different model and/or custom instruction
embeddings = EmbaasEmbeddings(
    model="instructor-large",
    instruction="Represent the Wikipedia document for retrieval",
)

For more detailed information about the embaas Embeddings API, please refer to the official embaas API documentation. | embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.
2,299 | Self Hosted | 🦜️🔗 Langchain | Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.