The dataset preview lists four columns:

| Column | Type | Value / length range |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 4.66k |
| page content | string | 23 to 2k characters |
| description | string | 8 to 925 characters |
| output | string | 38 to 2.93k characters |
## SEC filing

The SEC filing is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular SEC filings. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes.

SEC filings data powered by Kay.ai and Cybersyn via Snowflake Marketplace.

### Setup

First, you will need to install the kay package. You will also need an API key: you can get one for free at https://kay.ai. Once you have an API key, you must set it as an environment variable KAY_API_KEY.

In this example, we're going to use the KayAiRetriever. Take a look at the Kay.ai notebook for more detailed information on the parameters that it accepts.

    # Set up API keys for Kay and OpenAI
    from getpass import getpass

    KAY_API_KEY = getpass()
    OPENAI_API_KEY = getpass()

    import os

    os.environ["KAY_API_KEY"] = KAY_API_KEY
    os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

### Example

    from langchain.chains import ConversationalRetrievalChain
    from langchain.chat_models import ChatOpenAI
    from langchain.retrievers import KayAiRetriever

    model = ChatOpenAI(model_name="gpt-3.5-turbo")
    retriever = KayAiRetriever.create(
        dataset_id="company", data_types=["10-K", "10-Q"], num_contexts=6
    )
    qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

    questions = [
        "What are patterns in Nvidia's spend over the past three quarters?",
        # "What are some recent challenges faced by the renewable energy sector?",
    ]
    chat_history = []

    for question in questions:
        result = qa({"question": question, "chat_history": chat_history})
        chat_history.append((question, result["answer"]))
        print(f"-> **Question**: {question} \n")
        print(f"**Answer**: {result['answer']} \n")

Output:

    -> **Question**: What are patterns in Nvidia's spend over the past three quarters?

    **Answer**: Based on the provided information, here are the patterns in NVIDIA's spend over the past three quarters:

    1. Research and Development Expenses:
       - Q3 2022: Increased by 34% compared to Q3 2021.
       - Q1 2023: Increased by 40% compared to Q1 2022.
       - Q2 2022: Increased by 25% compared to Q2 2021.
       Overall, research and development expenses have been consistently increasing over the past three quarters.

    2. Sales, General and Administrative Expenses:
       - Q3 2022: Increased by 8% compared to Q3 2021.
       - Q1 2023: Increased by 14% compared to Q1 2022.
       - Q2 2022: Decreased by 16% compared to Q2 2021.
       The pattern for sales, general and administrative expenses is not as consistent, with some quarters showing an increase and others showing a decrease.

    3. Total Operating Expenses:
       - Q3 2022: Increased by 25% compared to Q3 2021.
       - Q1 2023: Increased by 113% compared to Q1 2022.
       - Q2 2022: Increased by 9% compared to Q2 2021.
       Total operating expenses have generally been increasing over the past three quarters, with a significant increase in Q1 2023.

    Overall, the pattern indicates a consistent increase in research and development expenses and total operating expenses, while sales, general and administrative expenses show some fluctuations.
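Because ConversationalRetrievalChain accumulates chat_history, a second turn can refer back to the previous answer. Below is a minimal sketch of such a follow-up, reusing the objects defined above; the follow-up question is illustrative and not from the original run:

    # Hypothetical follow-up question; reuses the `qa` chain and `chat_history` from above.
    follow_up = "Which of those expense categories grew the fastest?"
    result = qa({"question": follow_up, "chat_history": chat_history})
    chat_history.append((follow_up, result["answer"]))
    print(result["answer"])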
## Chaindesk

The Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources). Your Datastores can then be connected to ChatGPT via Plugins or to any other Large Language Model (LLM) via the Chaindesk API.

This notebook shows how to use Chaindesk's retriever.

First, you will need to sign up for Chaindesk, create a datastore, add some data and get your datastore API endpoint URL. You need the API key.

### Query

Now that our index is set up, we can set up a retriever and start querying it.

    from langchain.retrievers import ChaindeskRetriever

    retriever = ChaindeskRetriever(
        datastore_url="https://clg1xg2h80000l708dymr0fxc.chaindesk.ai/query",
        # api_key="CHAINDESK_API_KEY",  # optional if datastore is public
        # top_k=10,  # optional
    )

    retriever.get_relevant_documents("What is Daftpage?")

Output:

    [Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
     Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops� Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
     Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops� Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
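Like any LangChain retriever, the Chaindesk retriever can be plugged into a question-answering chain. A minimal sketch, assuming an OpenAI API key is configured; this chain setup is illustrative and not part of the original notebook:

    from langchain.chains import RetrievalQA
    from langchain.chat_models import ChatOpenAI

    # Wrap the Chaindesk retriever in a simple retrieval-QA chain.
    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
        retriever=retriever,
    )
    print(qa.run("What is Daftpage?"))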
## Vespa

Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.

This notebook shows how to use Vespa.ai as a LangChain retriever.

In order to create a retriever, we use pyvespa to create a connection to a Vespa service.

    #!pip install pyvespa

    from vespa.application import Vespa

    vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")

This creates a connection to a Vespa service, here the Vespa documentation search service. Using the pyvespa package, you can also connect to a Vespa Cloud instance or a local Docker instance.

After connecting to the service, you can set up the retriever:

    from langchain.retrievers.vespa_retriever import VespaRetriever

    vespa_query_body = {
        "yql": "select content from paragraph where userQuery()",
        "hits": 5,
        "ranking": "documentation",
        "locale": "en-us",
    }
    vespa_content_field = "content"
    retriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field)

This sets up a LangChain retriever that fetches documents from the Vespa application. Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed from LangChain. Please refer to the pyvespa documentation for more information.

Now you can return the results and continue using them in LangChain.

    retriever.get_relevant_documents("what is vespa?")
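Each returned item is a LangChain Document whose page_content holds the Vespa content field. A short illustrative sketch for inspecting the hits (not part of the original notebook):

    # Inspect the retrieved documents.
    docs = retriever.get_relevant_documents("what is vespa?")
    for i, doc in enumerate(docs, start=1):
        print(f"--- result {i} ---")
        print(doc.page_content[:200])  # first 200 characters of each hit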
## LOTR (Merger Retriever)

Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.

The MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.

    import os

    import chromadb
    from langchain.retrievers.merger_retriever import MergerRetriever
    from langchain.vectorstores import Chroma
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.document_transformers import (
        EmbeddingsRedundantFilter,
        EmbeddingsClusteringFilter,
    )
    from langchain.retrievers.document_compressors import DocumentCompressorPipeline
    from langchain.retrievers import ContextualCompressionRetriever

    # Get 3 different embeddings.
    all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1")
    filter_embeddings = OpenAIEmbeddings()

    ABS_PATH = os.path.dirname(os.path.abspath(__file__))
    DB_DIR = os.path.join(ABS_PATH, "db")

    # Instantiate 2 different Chroma indexes, each one with a different embedding.
    client_settings = chromadb.config.Settings(
        is_persistent=True,
        persist_directory=DB_DIR,
        anonymized_telemetry=False,
    )
    db_all = Chroma(
        collection_name="project_store_all",
        persist_directory=DB_DIR,
        client_settings=client_settings,
        embedding_function=all_mini,
    )
    db_multi_qa = Chroma(
        collection_name="project_store_multi",
        persist_directory=DB_DIR,
        client_settings=client_settings,
        embedding_function=multi_qa_mini,
    )

    # Define 2 different retrievers with 2 different embeddings and different search types.
    retriever_all = db_all.as_retriever(
        search_type="similarity", search_kwargs={"k": 5, "include_metadata": True}
    )
    retriever_multi_qa = db_multi_qa.as_retriever(
        search_type="mmr", search_kwargs={"k": 5, "include_metadata": True}
    )

    # The Lord of the Retrievers will hold the output of both retrievers and can be used
    # as any other retriever on different types of chains.
    lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])

### Remove redundant results from the merged retrievers

    # We can remove redundant results from both retrievers using yet another embedding.
    # Using multiple embeddings in different steps could help reduce biases.
    filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
    pipeline = DocumentCompressorPipeline(transformers=[filter])
    compression_retriever = ContextualCompressionRetriever(
        base_compressor=pipeline, base_retriever=lotr
    )

### Pick a representative sample of documents from the merged retrievers

    # This filter will divide the document vectors into clusters or "centers" of meaning.
    # Then it will pick the closest document to that center for the final results.
    # By default the result documents will be ordered/grouped by clusters.
    filter_ordered_cluster = EmbeddingsClusteringFilter(
        embeddings=filter_embeddings,
        num_clusters=10,
        num_closest=1,
    )

    # If you want the final documents to be ordered by the original retriever scores,
    # you need to add the "sorted" parameter.
    filter_ordered_by_retriever = EmbeddingsClusteringFilter(
        embeddings=filter_embeddings,
        num_clusters=10,
        num_closest=1,
        sorted=True,
    )

    pipeline = DocumentCompressorPipeline(transformers=[filter_ordered_by_retriever])
    compression_retriever = ContextualCompressionRetriever(
        base_compressor=pipeline, base_retriever=lotr
    )

### Re-order results to avoid performance degradation

No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents. In brief: when models must access relevant information in the middle of long contexts, they tend to ignore the provided documents. See: https://arxiv.org/abs/2307.03172

    # You can use an additional document transformer to reorder documents after removing redundancy.
    from langchain.document_transformers import LongContextReorder

    filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
    reordering = LongContextReorder()
    pipeline = DocumentCompressorPipeline(transformers=[filter, reordering])
    compression_retriever_reordered = ContextualCompressionRetriever(
        base_compressor=pipeline, base_retriever=lotr
    )
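Once assembled, the reordered compression retriever is queried like any other retriever. A brief illustrative sketch, assuming the two Chroma collections above have already been populated with documents; the query string is hypothetical:

    # Query the merged, de-duplicated and re-ordered retriever.
    docs = compression_retriever_reordered.get_relevant_documents(
        "How does the project handle authentication?"
    )
    for doc in docs:
        print(doc.metadata, doc.page_content[:120])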
## SVM

Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.

This notebook goes over how to use a retriever that, under the hood, uses an SVM from the scikit-learn package.

Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html

    #!pip install scikit-learn
    #!pip install lark

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

    import getpass
    import os

    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

    from langchain.retrievers import SVMRetriever
    from langchain.embeddings import OpenAIEmbeddings

### Create New Retriever with Texts

    retriever = SVMRetriever.from_texts(
        ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
    )

### Use Retriever

We can now use the retriever!

    result = retriever.get_relevant_documents("foo")
    result

Output:

    [Document(page_content='foo', metadata={}),
     Document(page_content='foo bar', metadata={}),
     Document(page_content='hello', metadata={}),
     Document(page_content='world', metadata={})]
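The SVM retriever works with any LangChain Embeddings implementation, not just OpenAI. A minimal sketch using a local HuggingFace sentence-transformers model instead; this variant is an assumption for illustration and is not part of the original notebook:

    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.retrievers import SVMRetriever

    # Build the same kind of retriever with a locally-run embedding model.
    local_retriever = SVMRetriever.from_texts(
        ["foo", "bar", "world", "hello", "foo bar"],
        HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2"),
    )
    print(local_retriever.get_relevant_documents("foo"))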
## Weaviate (self-querying retriever)

Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects.

In this notebook, we'll demo the SelfQueryRetriever wrapped around a Weaviate vector store.

### Creating a Weaviate vector store

First we'll want to create a Weaviate vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.

Note: the self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.

    #!pip install lark weaviate-client

    import os

    from langchain.schema import Document
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Weaviate

    embeddings = OpenAIEmbeddings()

    docs = [
        Document(
            page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
            metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
        ),
        Document(
            page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
            metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
        ),
        Document(
            page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
            metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
        ),
        Document(
            page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
            metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
        ),
        Document(
            page_content="Toys come alive and have a blast doing so",
            metadata={"year": 1995, "genre": "animated"},
        ),
        Document(
            page_content="Three men walk into the Zone, three men walk out of the Zone",
            metadata={
                "year": 1979,
                "director": "Andrei Tarkovsky",
                "genre": "science fiction",
                "rating": 9.9,
            },
        ),
    ]

    vectorstore = Weaviate.from_documents(
        docs, embeddings, weaviate_url="http://127.0.0.1:8080"
    )

### Creating our self-querying retriever

Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.

    from langchain.llms import OpenAI
    from langchain.retrievers.self_query.base import SelfQueryRetriever
    from langchain.chains.query_constructor.base import AttributeInfo

    metadata_field_info = [
        AttributeInfo(
            name="genre",
            description="The genre of the movie",
            type="string or list[string]",
        ),
        AttributeInfo(
            name="year",
            description="The year the movie was released",
            type="integer",
        ),
        AttributeInfo(
            name="director",
            description="The name of the movie director",
            type="string",
        ),
        AttributeInfo(
            name="rating", description="A 1-10 rating for the movie", type="float"
        ),
    ]
    document_content_description = "Brief summary of a movie"
    llm = OpenAI(temperature=0)
    retriever = SelfQueryRetriever.from_llm(
        llm, vectorstore, document_content_description, metadata_field_info, verbose=True
    )

### Testing it out

And now we can try actually using our retriever!

    # This example only specifies a relevant query
    retriever.get_relevant_documents("What are some movies about dinosaurs")

Output:

    query='dinosaur' filter=None limit=None
    [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
     Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),
     Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),
     Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]

    # This example specifies a query and a filter
    retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")

Output:

    query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
    [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]

### Filter k

We can also use the self-query retriever to specify k, the number of documents to fetch. We can do this by passing enable_limit=True to the constructor.

    retriever = SelfQueryRetriever.from_llm(
        llm,
        vectorstore,
        document_content_description,
        metadata_field_info,
        enable_limit=True,
        verbose=True,
    )

    # This example only specifies a relevant query
    retriever.get_relevant_documents("what are two movies about dinosaurs")

Output:

    query='dinosaur' filter=None limit=2
    [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
     Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]
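A filter and a limit can also be combined in a single natural-language query. Since the LLM builds the structured query, the exact filter it produces may vary from run to run; the query below is illustrative and not from the original notebook:

    # With enable_limit=True, the retriever can honour both a metadata filter and a count.
    retriever.get_relevant_documents(
        "what are two science fiction movies rated higher than 8"
    )
    # Expect a structured query with limit=2 and filters on genre and rating
    # (the exact Comparison objects depend on what the LLM generates).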
2,122 | Supabase | 🦜️🔗 Langchain | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. ->: Supabase | 🦜️🔗 Langchain |
2,123 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverSupabaseOn this pageSupabaseSupabase is an open-source Firebase alternative.
Supabase is built on top of PostgreSQL, which offers strong SQL
querying capabilities and enables a simple interface with already-existing tools and frameworks.PostgreSQL, also known as Postgres,
is a free and open-source relational database management system (RDBMS)
emphasizing extensibility and SQL compliance.Supabase provides an open-source toolkit for developing AI applications | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverSupabaseOn this pageSupabaseSupabase is an open-source Firebase alternative.
Supabase is built on top of PostgreSQL, which offers strong SQL
querying capabilities and enables a simple interface with already-existing tools and frameworks.PostgreSQL, also known as Postgres,
is a free and open-source relational database management system (RDBMS)
emphasizing extensibility and SQL compliance.Supabase provides an open-source toolkit for developing AI applications |
2,124 | using Postgres and pgvector. Use the Supabase client libraries to store, index, and query your vector embeddings at scale.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Supabase vector store.Specifically, we will:Create a Supabase databaseEnable the pgvector extensionCreate a documents table and match_documents function that will be used by SupabaseVectorStoreLoad sample documents into the vector store (database table)Build and test a self-querying retrieverSetup Supabase Database‚ÄãHead over to https://database.new to provision your Supabase database.In the studio, jump to the SQL editor and run the following script to enable pgvector and setup your database as a vector store:-- Enable the pgvector extension to work with embedding vectorscreate extension if not exists vector;-- Create a table to store your documentscreate table documents ( id uuid primary key, content text, -- corresponds to Document.pageContent metadata jsonb, -- corresponds to Document.metadata embedding vector (1536) -- 1536 works for OpenAI embeddings, change if needed );-- Create a function to search for documentscreate function match_documents ( query_embedding vector (1536), filter jsonb default '{}') returns table ( id uuid, content text, metadata jsonb, similarity float) language plpgsql as $$#variable_conflict use_columnbegin return query select id, content, metadata, 1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding;end;$$;Creating a Supabase vector store‚ÄãNext we'll want to create a Supabase vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Be sure to install the latest version of langchain with openai support:%pip install langchain openai tiktokenThe self-query retriever requires you to have lark installed:%pip install larkWe also need the supabase package:%pip | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. ->: using Postgres and pgvector. 
Use the Supabase client libraries to store, index, and query your vector embeddings at scale.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Supabase vector store.Specifically, we will:Create a Supabase databaseEnable the pgvector extensionCreate a documents table and match_documents function that will be used by SupabaseVectorStoreLoad sample documents into the vector store (database table)Build and test a self-querying retrieverSetup Supabase Database‚ÄãHead over to https://database.new to provision your Supabase database.In the studio, jump to the SQL editor and run the following script to enable pgvector and setup your database as a vector store:-- Enable the pgvector extension to work with embedding vectorscreate extension if not exists vector;-- Create a table to store your documentscreate table documents ( id uuid primary key, content text, -- corresponds to Document.pageContent metadata jsonb, -- corresponds to Document.metadata embedding vector (1536) -- 1536 works for OpenAI embeddings, change if needed );-- Create a function to search for documentscreate function match_documents ( query_embedding vector (1536), filter jsonb default '{}') returns table ( id uuid, content text, metadata jsonb, similarity float) language plpgsql as $$#variable_conflict use_columnbegin return query select id, content, metadata, 1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding;end;$$;Creating a Supabase vector store‚ÄãNext we'll want to create a Supabase vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Be sure to install the latest version of langchain with openai support:%pip install langchain openai tiktokenThe self-query retriever requires you to have lark installed:%pip install larkWe also need the supabase package:%pip |
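Before wiring this into LangChain, it can be useful to sanity-check the match_documents function defined above. A hypothetical check (not part of the original notebook), assuming the `supabase` client and `embeddings` instance created in the next cells:

```python
# Hypothetical sanity check of the SQL setup above (not in the original docs).
# Assumes `supabase` (a supabase-py Client) and `embeddings` (OpenAIEmbeddings)
# have been created as shown in the following cells.
query_embedding = embeddings.embed_query("dinosaurs")  # 1536-dimensional list of floats
response = supabase.rpc(
    "match_documents",
    {"query_embedding": query_embedding, "filter": {}},
).execute()
for row in response.data:
    print(row["content"], row["similarity"])
```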
2,125 | larkWe also need the supabase package:%pip install supabaseSince we are using SupabaseVectorStore and OpenAIEmbeddings, we have to load their API keys.To find your SUPABASE_URL and SUPABASE_SERVICE_KEY, head to your Supabase project's API settings.SUPABASE_URL corresponds to the Project URLSUPABASE_SERVICE_KEY corresponds to the service_role API keyTo get your OPENAI_API_KEY, navigate to API keys on your OpenAI account and create a new secret key.import osimport getpassos.environ["SUPABASE_URL"] = getpass.getpass("Supabase URL:")os.environ["SUPABASE_SERVICE_KEY"] = getpass.getpass("Supabase Service Key:")os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Optional: If you're storing your Supabase and OpenAI API keys in a .env file, you can load them with dotenv.%pip install python-dotenvfrom dotenv import load_dotenvload_dotenv()First we'll create a Supabase client and instantiate a OpenAI embeddings class.import osfrom supabase.client import Client, create_clientfrom langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import SupabaseVectorStoresupabase_url = os.environ.get("SUPABASE_URL")supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")supabase: Client = create_client(supabase_url, supabase_key)embeddings = OpenAIEmbeddings()Next let's create our documents.docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. 
->: larkWe also need the supabase package:%pip install supabaseSince we are using SupabaseVectorStore and OpenAIEmbeddings, we have to load their API keys.To find your SUPABASE_URL and SUPABASE_SERVICE_KEY, head to your Supabase project's API settings.SUPABASE_URL corresponds to the Project URLSUPABASE_SERVICE_KEY corresponds to the service_role API keyTo get your OPENAI_API_KEY, navigate to API keys on your OpenAI account and create a new secret key.import osimport getpassos.environ["SUPABASE_URL"] = getpass.getpass("Supabase URL:")os.environ["SUPABASE_SERVICE_KEY"] = getpass.getpass("Supabase Service Key:")os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Optional: If you're storing your Supabase and OpenAI API keys in a .env file, you can load them with dotenv.%pip install python-dotenvfrom dotenv import load_dotenvload_dotenv()First we'll create a Supabase client and instantiate a OpenAI embeddings class.import osfrom supabase.client import Client, create_clientfrom langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import SupabaseVectorStoresupabase_url = os.environ.get("SUPABASE_URL")supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")supabase: Client = create_client(supabase_url, supabase_key)embeddings = OpenAIEmbeddings()Next let's create our documents.docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( |
2,126 | Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = SupabaseVectorStore.from_documents(docs, embeddings, client=supabase, table_name="documents", query_name="match_documents")Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. ->: Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = SupabaseVectorStore.from_documents(docs, embeddings, client=supabase, table_name="documents", query_name="match_documents")Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a |
2,127 | our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women?") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'})]# This example specifies a composite | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. 
->: our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women?") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'})]# This example specifies a composite |
2,128 | Gerwig'})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before (or on) 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LTE: 'lte'>, attribute='year', value=2005), Comparison(comparator=<Comparator.LIKE: 'like'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. 
->: Gerwig'})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before (or on) 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LTE: 'lte'>, attribute='year', value=2005), Comparison(comparator=<Comparator.LIKE: 'like'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': |
2,129 | blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousRedisNextTimescale Vector (Postgres) self-queryingSetup Supabase DatabaseCreating a Supabase vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Supabase is an open-source Firebase alternative. | Supabase is an open-source Firebase alternative. ->: blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousRedisNextTimescale Vector (Postgres) self-queryingSetup Supabase DatabaseCreating a Supabase vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
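Re-running the notebook keeps appending the same demo rows to the documents table. One way to reset it between runs (an assumption, not covered in the original notebook) is to delete the rows through the same Supabase client:

```python
# Hypothetical cleanup (not in the original notebook): clear the demo table so
# it can be re-seeded. A catch-all filter is used so that every row matches.
# Assumes the `supabase` client created earlier in the notebook.
supabase.table("documents").delete().neq(
    "id", "00000000-0000-0000-0000-000000000000"
).execute()
```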
2,130 | MyScale | 🦜️🔗 Langchain | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. ->: MyScale | 🦜️🔗 Langchain |
2,131 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverMyScaleOn this pageMyScaleMyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverMyScaleOn this pageMyScaleMyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. |
2,132 | MyScale can make use of various data types and functions for filters. It will boost up your LLM app no matter if you are scaling up your data or expand your system to broader application.In the notebook, we'll demo the SelfQueryRetriever wrapped around a MyScale vector store with some extra pieces we contributed to LangChain. In short, it can be condensed into 4 points:Add contain comparator to match the list of any if there is more than one element matchedAdd timestamp data type for datetime match (ISO-format, or YYYY-MM-DD)Add like comparator for string pattern searchAdd arbitrary function capabilityCreating a MyScale vector store‚ÄãMyScale has already been integrated to LangChain for a while. So you can follow this notebook to create your own vectorstore for a self-query retriever.Note: All self-query retrievers requires you to have lark installed (pip install lark). We use lark for grammar definition. Before you proceed to the next step, we also want to remind you that clickhouse-connect is also needed to interact with your MyScale backend.pip install lark clickhouse-connectIn this tutorial we follow other example's setting and use OpenAIEmbeddings. Remember to get an OpenAI API Key for valid access to LLMs.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")os.environ["MYSCALE_HOST"] = getpass.getpass("MyScale URL:")os.environ["MYSCALE_PORT"] = getpass.getpass("MyScale Port:")os.environ["MYSCALE_USERNAME"] = getpass.getpass("MyScale Username:")os.environ["MYSCALE_PASSWORD"] = getpass.getpass("MyScale Password:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import MyScaleembeddings = OpenAIEmbeddings()Create some sample data‚ÄãAs you can see, the data we created has some differences compared to other self-query retrievers. We replaced the keyword year with date which gives you finer control on timestamps. We also changed the type of the keyword gerne | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. ->: MyScale can make use of various data types and functions for filters. It will boost up your LLM app no matter if you are scaling up your data or expand your system to broader application.In the notebook, we'll demo the SelfQueryRetriever wrapped around a MyScale vector store with some extra pieces we contributed to LangChain. In short, it can be condensed into 4 points:Add contain comparator to match the list of any if there is more than one element matchedAdd timestamp data type for datetime match (ISO-format, or YYYY-MM-DD)Add like comparator for string pattern searchAdd arbitrary function capabilityCreating a MyScale vector store‚ÄãMyScale has already been integrated to LangChain for a while. So you can follow this notebook to create your own vectorstore for a self-query retriever.Note: All self-query retrievers requires you to have lark installed (pip install lark). We use lark for grammar definition. Before you proceed to the next step, we also want to remind you that clickhouse-connect is also needed to interact with your MyScale backend.pip install lark clickhouse-connectIn this tutorial we follow other example's setting and use OpenAIEmbeddings. 
Remember to get an OpenAI API Key for valid access to LLMs.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")os.environ["MYSCALE_HOST"] = getpass.getpass("MyScale URL:")os.environ["MYSCALE_PORT"] = getpass.getpass("MyScale Port:")os.environ["MYSCALE_USERNAME"] = getpass.getpass("MyScale Username:")os.environ["MYSCALE_PASSWORD"] = getpass.getpass("MyScale Password:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import MyScaleembeddings = OpenAIEmbeddings()Create some sample data‚ÄãAs you can see, the data we created has some differences compared to other self-query retrievers. We replaced the keyword year with date which gives you finer control on timestamps. We also changed the type of the keyword gerne |
2,133 | We also changed the type of the keyword genre to a list of strings, where an LLM can use a new contain comparator to construct filters.
We also provide the like comparator and arbitrary function support to filters, which will be introduced in next few cells.Now let's look at the data first.docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"date": "1993-07-02", "rating": 7.7, "genre": ["science fiction"]}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"date": "2010-12-30", "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"date": "2006-04-23", "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"date": "2019-08-22", "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"date": "1995-02-11", "genre": ["animated"]}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "date": "1979-09-10", "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "adventure"], "rating": 9.9, }, ),]vectorstore = MyScale.from_documents( docs, embeddings,)Creating our self-querying retriever‚ÄãJust like other retrievers... simple and nice.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", |
2,134 | = [ AttributeInfo( name="genre", description="The genres of the movie", type="list[string]", ), # If you want to include the length of a list, just define it as a new column # This will teach the LLM to use it as a column when constructing a filter. AttributeInfo( name="length(genre)", description="The length of genres of the movie", type="integer", ), # Now you can define a column as a timestamp. Simply set the type to timestamp.
AttributeInfo( name="date", description="The date the movie was released", type="timestamp", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out with self-query retriever's existing functionalities‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs")# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?")# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")Wait a second... what else?Self-query retriever with MyScale can do more! Let's find out.# You can use length(genres) to do anything you |
2,135 | You can use length(genres) to do anything you wantretriever.get_relevant_documents("What's a movie that have more than 1 genres?")# Fine-grained datetime? You got it already.retriever.get_relevant_documents("What's a movie that release after feb 1995?")# Don't know what your exact filter should be? Use string pattern match!retriever.get_relevant_documents("What's a movie whose name is like Andrei?")# Contain works for lists: so you can match a list with contain comparator!retriever.get_relevant_documents( "What's a movie who has genres science fiction and adventure?")Filter k​We can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs")PreviousMilvusNextOpenSearchCreating a MyScale vector storeCreate some sample dataCreating our self-querying retrieverTesting it out with self-query retriever's existing functionalitiesFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. | MyScale is an integrated vector database. You can access your database in SQL and also from here, LangChain. ->: You can use length(genres) to do anything you wantretriever.get_relevant_documents("What's a movie that have more than 1 genres?")# Fine-grained datetime? You got it already.retriever.get_relevant_documents("What's a movie that release after feb 1995?")# Don't know what your exact filter should be? Use string pattern match!retriever.get_relevant_documents("What's a movie whose name is like Andrei?")# Contain works for lists: so you can match a list with contain comparator!retriever.get_relevant_documents( "What's a movie who has genres science fiction and adventure?")Filter k​We can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs")PreviousMilvusNextOpenSearchCreating a MyScale vector storeCreate some sample dataCreating our self-querying retrieverTesting it out with self-query retriever's existing functionalitiesFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
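For readers who want to see what the LLM is actually producing in the examples above, the same structured queries can be built by hand from LangChain's query-constructor intermediate representation. This is a sketch for illustration only (the notebook never constructs queries manually), assuming the ir module's public classes:

```python
# Hedged sketch: hand-build the kind of structured query the self-query
# retriever's LLM emits, using the query-constructor IR directly. Handy for
# unit-testing custom translators or understanding the verbose=True output.
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)

manual_query = StructuredQuery(
    query="dream",
    filter=Operation(
        operator=Operator.AND,
        arguments=[
            Comparison(comparator=Comparator.GT, attribute="rating", value=8.0),
            Comparison(
                comparator=Comparator.CONTAIN, attribute="genre", value="adventure"
            ),
        ],
    ),
    limit=2,
)
print(manual_query)
```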
2,136 | Redis | 🦜️🔗 Langchain | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. ->: Redis | 🦜️🔗 Langchain |
2,137 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverRedisOn this pageRedisRedis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Redis vector store. Creating a Redis vector store‚ÄãFirst we'll want to create a Redis vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark) along with integration-specific requirements.# !pip install redis redisvl openai tiktoken larkWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Redisembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "director": "Steven Spielberg", | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverRedisOn this pageRedisRedis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Redis vector store. Creating a Redis vector store‚ÄãFirst we'll want to create a Redis vector store and seed it with some data. 
We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark) along with integration-specific requirements.# !pip install redis redisvl openai tiktoken larkWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Redisembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "director": "Steven Spielberg", |
2,138 | "rating": 7.7, "director": "Steven Spielberg", "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "genre": "science fiction", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "genre": "science fiction", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "genre": "drama", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "director": "John Lasseter", "genre": "animated", "rating": 9.1,}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", }, ),]index_schema = { "tag": [{"name": "genre"}], "text": [{"name": "director"}], "numeric": [{"name": "year"}, {"name": "rating"}],}vectorstore = Redis.from_documents( docs, embeddings, redis_url="redis://localhost:6379", index_name="movie_reviews", index_schema=index_schema,) `index_schema` does not match generated metadata schema. If you meant to manually override the schema, please ignore this message. index_schema: {'tag': [{'name': 'genre'}], 'text': [{'name': 'director'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}]} generated_schema: {'text': [{'name': 'director'}, {'name': 'genre'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}], 'tag': []} Creating our self-querying retriever‚ÄãNow we can instantiate | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. ->: "rating": 7.7, "director": "Steven Spielberg", "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "genre": "science fiction", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "genre": "science fiction", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "genre": "drama", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "director": "John Lasseter", "genre": "animated", "rating": 9.1,}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", }, ),]index_schema = { "tag": [{"name": "genre"}], "text": [{"name": "director"}], "numeric": [{"name": "year"}, {"name": "rating"}],}vectorstore = Redis.from_documents( docs, embeddings, redis_url="redis://localhost:6379", index_name="movie_reviews", index_schema=index_schema,) `index_schema` does not match generated metadata schema. If you meant to manually override the schema, please ignore this message. 
index_schema: {'tag': [{'name': 'genre'}], 'text': [{'name': 'director'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}]} generated_schema: {'text': [{'name': 'director'}, {'name': 'genre'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}], 'tag': []} Creating our self-querying retriever‚ÄãNow we can instantiate |
2,139 | self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") /Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. ->: self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") /Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. 
warnings.warn( query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', |
2,140 | 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.4") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.4) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. 
->: 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.4") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.4) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and |
2,141 | of normal-sized women are supremely wholesome and some men pine after them', metadata={'id': 'doc:movie_reviews:bb899807b93c442083fd45e75a4779d5', 'director': 'Greta Gerwig', 'genre': 'drama', 'year': '2019', 'rating': '8.3'})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]Filter k‚ÄãWe can also use the self query retriever to specify k: | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. 
->: of normal-sized women are supremely wholesome and some men pine after them', metadata={'id': 'doc:movie_reviews:bb899807b93c442083fd45e75a4779d5', 'director': 'Greta Gerwig', 'genre': 'drama', 'year': '2019', 'rating': '8.3'})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]Filter k‚ÄãWe can also use the self query retriever to specify k: |
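The `Comparison` and `Operation` objects printed above are LangChain's structured-query intermediate representation: the LLM produces them, and the Redis translator then converts them into a native filter. A rough, hand-built sketch of the "highly rated science fiction" query, assuming these classes are importable from langchain.chains.query_constructor.ir:
# Illustrative only: hand-built version of the structured query shown above
# (assumes the IR classes are importable from langchain.chains.query_constructor.ir).
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)

structured_query = StructuredQuery(
    query=" ",
    filter=Operation(
        operator=Operator.AND,
        arguments=[
            Comparison(comparator=Comparator.GTE, attribute="rating", value=8.5),
            Comparison(comparator=Comparator.CONTAIN, attribute="genre", value="science fiction"),
        ],
    ),
    limit=None,
)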
2,142 | also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]PreviousQdrantNextSupabaseCreating a Redis vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. | Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more. ->: also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]PreviousQdrantNextSupabaseCreating a Redis vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
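Because the self-querying retriever implements the standard retriever interface, it can be dropped into any chain that accepts one. A minimal sketch, assuming the `llm` and `retriever` objects built earlier in this walkthrough:
# Minimal sketch: the self-querying retriever inside a question-answering chain
# (assumes the `llm` and `retriever` objects defined earlier in this walkthrough).
from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
qa_chain.run("Recommend a highly rated science fiction movie and name its director")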
2,143 | Qdrant | 🦜️🔗 Langchain | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. ->: Qdrant | 🦜️🔗 Langchain |
2,144 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverQdrantOn this pageQdrantQdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Qdrant vector store. Creating a Qdrant vector store‚ÄãFirst we'll want to create a Qdrant vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the qdrant-client package.#!pip install lark qdrant-clientWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.# import os# import getpass# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Qdrantembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverQdrantOn this pageQdrantQdrant (read: quadrant) is a vector similarity search engine. 
It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Qdrant vector store. Creating a Qdrant vector store‚ÄãFirst we'll want to create a Qdrant vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the qdrant-client package.#!pip install lark qdrant-clientWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.# import os# import getpass# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Qdrantembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back |
2,145 | page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", }, ),]vectorstore = Qdrant.from_documents( docs, embeddings, location=":memory:", # Local mode with in-memory storage only collection_name="my_documents",)Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. ->: page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", }, ),]vectorstore = Qdrant.from_documents( docs, embeddings, location=":memory:", # Local mode with in-memory storage only collection_name="my_documents",)Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was |
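The example above uses location=":memory:", which keeps the collection inside the Python process. If a Qdrant server is already running, the same seeding call can point at it instead; a sketch, assuming a locally deployed instance (the URL below is illustrative, not from the original page):
# Sketch: seeding a running Qdrant server instead of the in-memory mode
# (assumes a Qdrant instance is reachable at the URL below; adjust as needed).
vectorstore = Qdrant.from_documents(
    docs,
    embeddings,
    url="http://localhost:6333",  # hypothetical local deployment
    collection_name="my_documents",
)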
2,146 | description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. 
->: description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', |
2,147 | within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. 
->: within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, |
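Under the hood, the retriever's Qdrant translator turns the `Operation`/`Comparison` objects above into a native qdrant_client filter over the stored payload. A rough sketch of the equivalent hand-written filter, assuming LangChain's default payload layout in which metadata fields are nested under a "metadata" key:
# Rough sketch of the native filter the structured query corresponds to
# (assumption: metadata is stored under the "metadata" payload key, the default
# for LangChain's Qdrant vector store).
from qdrant_client.http import models

qdrant_filter = models.Filter(
    must=[
        models.FieldCondition(key="metadata.rating", range=models.Range(gt=8.5)),
        models.FieldCondition(
            key="metadata.genre", match=models.MatchValue(value="science fiction")
        ),
    ]
)
results = vectorstore.similarity_search(" ", filter=qdrant_filter)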
2,148 | llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousPineconeNextRedisCreating a Qdrant vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. | Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. ->: llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousPineconeNextRedisCreating a Qdrant vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
2,149 | Deep Lake | 🦜️🔗 Langchain | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: Deep Lake | 🦜️🔗 Langchain |
2,150 | Deep Lake is a multimodal database for building AI applications
Deep Lake is a database for AI.
Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: Deep Lake is a multimodal database for building AI applications
Deep Lake is a database for AI.
Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, |
2,151 | & visualize any AI data. Stream data in real time to PyTorch/TensorFlow.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Deep Lake vector store. Creating a Deep Lake vector store‚ÄãFirst we'll want to create a Deep Lake vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the deeplake package.# !pip install lark# in case if some queries fail consider installing libdeeplake manually# !pip install libdeeplakeWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Activeloop token:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import DeepLakeembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: & visualize any AI data. Stream data in real time to PyTorch/TensorFlow.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Deep Lake vector store. Creating a Deep Lake vector store‚ÄãFirst we'll want to create a Deep Lake vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). 
We also need the deeplake package.# !pip install lark# in case if some queries fail consider installing libdeeplake manually# !pip install libdeeplakeWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Activeloop token:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import DeepLakeembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk |
2,152 | Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]username_or_org = "<USERNAME_OR_ORG>"vectorstore = DeepLake.from_documents( docs, embeddings, dataset_path=f"hub://{username_or_org}/self_queery", overwrite=True,) Your Deep Lake dataset has been successfully created! / Dataset(path='hub://adilkhan/self_queery', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (6, 1536) float32 None id text (6, 1) str None metadata json (6, 1) str None text text (6, 1) str None Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]username_or_org = "<USERNAME_OR_ORG>"vectorstore = DeepLake.from_documents( docs, embeddings, dataset_path=f"hub://{username_or_org}/self_queery", overwrite=True,) Your Deep Lake dataset has been successfully created! / Dataset(path='hub://adilkhan/self_queery', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (6, 1536) float32 None id text (6, 1) str None metadata json (6, 1) str None text text (6, 1) str None Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, |
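The dataset above is written to the Activeloop hub (hub://...). For quick local experiments the same call also accepts a plain directory path; a sketch, assuming the `docs` and `embeddings` objects defined earlier (the path below is arbitrary, not from the original page):
# Sketch: keeping the demo dataset on local disk instead of the Activeloop hub
# (assumes `docs` and `embeddings` from earlier; the path below is illustrative).
vectorstore = DeepLake.from_documents(
    docs,
    embeddings,
    dataset_path="./my_deeplake_self_query",
    overwrite=True,
)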
2,153 | SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") /home/ubuntu/langchain_activeloop/langchain/libs/langchain/langchain/chains/llm.py:279: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")# in case if this example errored out, consider installing libdeeplake manually: `pip install libdeeplake`, and then restart notebook. query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") /home/ubuntu/langchain_activeloop/langchain/libs/langchain/langchain/chains/llm.py:279: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. 
warnings.warn( query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")# in case if this example errored out, consider installing libdeeplake manually: `pip install libdeeplake`, and then restart notebook. query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': |
2,154 | 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, 
attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, |
2,155 | metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousSelf-querying retrieverNextChromaCreating a Deep Lake vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Deep Lake is a multimodal database for building AI applications | Deep Lake is a multimodal database for building AI applications ->: metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousSelf-querying retrieverNextChromaCreating a Deep Lake vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
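As with the other vector stores, the finished retriever behaves like any LangChain retriever and can back a conversational chain. A minimal sketch, assuming the `retriever` built in this walkthrough (the chat model choice is illustrative):
# Minimal sketch: the self-querying retriever behind a conversational chain
# (assumes the `retriever` from this walkthrough; the chat model is illustrative).
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

chat_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa = ConversationalRetrievalChain.from_llm(chat_model, retriever=retriever)
qa({"question": "What animated movies from the 1990s do you have?", "chat_history": []})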
2,156 | Pinecone | 🦜️🔗 Langchain | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: Pinecone | 🦜️🔗 Langchain |
2,157 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverPineconeOn this pagePineconePinecone is a vector database with broad functionality.In the walkthrough, we'll demo the SelfQueryRetriever with a Pinecone vector store.Creating a Pinecone index‚ÄãFirst we'll want to create a Pinecone vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.To use Pinecone, you have to have pinecone package installed and you must have an API key and an environment. Here are the installation instructions.Note: The self-query retriever requires you to have lark package installed.# !pip install lark#!pip install pinecone-clientimport osimport pineconepinecone.init( api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"]) /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) from tqdm.autonotebook import tqdmfrom langchain.schema import Documentfrom langchain.embeddings.openai | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverPineconeOn this pagePineconePinecone is a vector database with broad functionality.In the walkthrough, we'll demo the SelfQueryRetriever with a Pinecone vector store.Creating a Pinecone index‚ÄãFirst we'll want to create a Pinecone vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.To use Pinecone, you have to have pinecone package installed and you must have an API key and an environment. 
Here are the installation instructions.Note: The self-query retriever requires you to have lark package installed.# !pip install lark#!pip install pinecone-clientimport osimport pineconepinecone.init( api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"]) /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) from tqdm.autonotebook import tqdmfrom langchain.schema import Documentfrom langchain.embeddings.openai |
2,158 | import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Pineconeembeddings = OpenAIEmbeddings()# create new indexpinecone.create_index("langchain-self-retriever-demo", dimension=1536)docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"], "rating": 9.9, }, ),]vectorstore = Pinecone.from_documents( docs, embeddings, index_name="langchain-self-retriever-demo")Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Pineconeembeddings = OpenAIEmbeddings()# create new indexpinecone.create_index("langchain-self-retriever-demo", dimension=1536)docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"], "rating": 9.9, }, ),]vectorstore = Pinecone.from_documents( docs, embeddings, index_name="langchain-self-retriever-demo")Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base |
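If the index has already been created and populated, there is no need to re-upload the documents on every run. A sketch of reconnecting to it, assuming the index name used above and the same embedding model:
# Sketch: reconnecting to an already-populated Pinecone index
# (assumes the index created above and the same `embeddings` object).
vectorstore = Pinecone.from_existing_index(
    index_name="langchain-self-retriever-demo", embedding=embeddings
)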
2,159 | langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. 
->: langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams |
2,160 | gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) [Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': | Pinecone is a vector database with broad functionality.
2,161 | doing so', metadata={'genre': 'animated', 'year': 1995.0})]Filter k. We can also use the self-query retriever to specify k: the number of documents to fetch. We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("What are two movies about dinosaurs") | Pinecone is a vector database with broad functionality.
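For reference, the filter the retriever infers for "rated higher than 8.5" corresponds to a plain Pinecone metadata filter. A hand-written equivalent (a sketch against the same vectorstore, using Pinecone's MongoDB-style filter syntax) would be:

# Manually supplying the metadata filter the self-query retriever produced.
docs = vectorstore.similarity_search(
    "movies",                          # free-text part of the search
    k=4,
    filter={"rating": {"$gt": 8.5}},   # same constraint as Comparator.GT above
)
for doc in docs:
    print(doc.metadata.get("rating"), doc.page_content)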
2,162 | OpenSearch | 🦜️🔗 Langchain | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
2,163 | OpenSearch. OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.In this notebook, we'll demo the SelfQueryRetriever with an OpenSearch vector store.Creating an OpenSearch vector store. First, we'll want to create an OpenSearch vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the opensearch-py package.pip install lark opensearch-pyfrom langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import OpenSearchVectorSearchimport osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")embeddings = OpenAIEmbeddings() OpenAI API Key: ········docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
2,164 | of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", }, ),]vectorstore = OpenSearchVectorSearch.from_documents( docs, embeddings, index_name="opensearch-self-query-demo", opensearch_url="http://localhost:9200")Creating our self-querying retriever. Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
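The filter values printed in the rows below (Comparison, Operation, Comparator, Operator) are instances of LangChain's query-constructor IR, which the LLM emits as part of a StructuredQuery. A small sketch of building the same kind of filter by hand (assuming the IR classes live in langchain.chains.query_constructor.ir, as in recent releases):

from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
)

# "highly rated science fiction" expressed directly in the IR, mirroring the
# structure the self-query retriever generates before translating it for OpenSearch.
manual_filter = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.GTE, attribute="rating", value=8.5),
        Comparison(comparator=Comparator.CONTAIN, attribute="genre", value="science fiction"),
    ],
)
print(manual_filter)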
2,165 | the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out. And now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
2,166 | 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]Filter k. We can also use the self-query retriever to specify k: the number of documents to fetch. We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Complex queries in Action! We've tried out some simple queries, but what about more complex ones? Let's try out a few more complex queries that utilize the full | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
2,167 | a few more complex queries that utilize the full power of OpenSearch.retriever.get_relevant_documents("what animated or comedy movies have been released in the last 30 years about animated toys?") query='animated toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.OR: 'or'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='comedy')]), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990)]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]vectorstore.client.indices.delete(index="opensearch-self-query-demo") {'acknowledged': True} | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
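The same OpenSearch-backed retriever can also drive a conversational chain. A sketch (assuming ChatOpenAI access, run before the demo index is deleted above):

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever=retriever)

# chat_history is a list of (question, answer) tuples that grows turn by turn.
chat_history = []
result = qa({"question": "Any science fiction rated above 9?", "chat_history": chat_history})
print(result["answer"])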
2,168 | Milvus | 🦜️🔗 Langchain | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
2,169 | Milvus. Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.In the walkthrough, we'll demo the SelfQueryRetriever with a Milvus vector store.Creating a Milvus vectorstore. First we'll want to create a Milvus VectorStore and seed it with some data. We've created a small demo set of documents that contain summaries of movies.I have used the cloud version of Milvus, so I need the uri and token as well.NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the pymilvus package.#!pip install lark#!pip install pymilvusWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osOPENAI_API_KEY = "Use your OpenAI key:)"os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYfrom langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Milvusembeddings = OpenAIEmbeddings()docs = [ Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
2,170 | bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "action"}), Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010,"genre": "thriller", "rating": 8.2}), Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "rating": 8.3, "genre": "drama"}), Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "genre": "science fiction"}), Document( page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={"year": 2006, "genre": "thriller", 'rating': 9.0}, ), Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated", "rating": 9.3 }),]vector_store = Milvus.from_documents( docs, embedding=embeddings, connection_args={"uri": 'Use your uri:)', "token":'Use your token:)'})Creating our self-querying retriever. Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
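The connection placeholders above ('Use your uri:)' and 'Use your token:)') are stand-ins; in practice the Milvus/Zilliz Cloud endpoint and token would usually be read from the environment. A sketch (the MILVUS_URI and MILVUS_TOKEN variable names are illustrative, not a LangChain convention):

import os

from langchain.vectorstores import Milvus

# Hypothetical environment variable names -- substitute whatever your deployment uses.
vector_store = Milvus.from_documents(
    docs,
    embedding=embeddings,
    connection_args={
        "uri": os.environ["MILVUS_URI"],
        "token": os.environ["MILVUS_TOKEN"],
    },
)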
2,171 | = SelfQueryRetriever.from_llm( llm, vector_store, document_content_description, metadata_field_info, verbose=True)Testing it out. And now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 9.0, 'genre': 'thriller'})]# This example specifies a filterretriever.get_relevant_documents("What are some highly rated movies (above 9)?") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=9) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'})]# This example only specifies a query and a filterretriever.get_relevant_documents("I want to watch a movie about toys rated higher than 9") query='toys' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=9) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
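To see exactly what the LLM is asked to produce for test queries like the ones above, the underlying query-construction prompt can be inspected. A sketch (assuming the get_query_constructor_prompt helper in langchain.chains.query_constructor.base, available in recent versions):

from langchain.chains.query_constructor.base import get_query_constructor_prompt

# Renders the few-shot prompt that turns a natural-language question into a
# StructuredQuery over the movie metadata defined above.
prompt = get_query_constructor_prompt(document_content_description, metadata_field_info)
print(prompt.format(query="What are some highly rated movies (above 9)?"))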
2,172 | men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'})]# This example specifies a composite filterretriever.get_relevant_documents("What's a highly rated (above or equal 9) thriller film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='thriller'), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=9)]) limit=None [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 9.0, 'genre': 'thriller'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about dinosaurs, \ and preferably has a lot of action") query='dinosaur' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='action')]) limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'})]Filter k. We can also use the self-query retriever to specify k: the number of documents to fetch. We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vector_store, document_content_description, metadata_field_info, verbose=True, enable_limit=True)# This example only specifies a relevant queryretriever.get_relevant_documents("What are two movies about dinosaurs?") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
2,173 | metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'})] | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
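For comparison, the "rated above 9" constraint the retriever generates is translated into a Milvus boolean expression; a hand-written equivalent (a sketch, assuming the expr parameter exposed by LangChain's Milvus wrapper) would be:

# Same constraint expressed as a native Milvus filter expression.
docs = vector_store.similarity_search(
    "movies",
    k=4,
    expr="rating > 9",
)
for doc in docs:
    print(doc.metadata["rating"], doc.page_content)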
2,174 | Chroma | 🦜️🔗 Langchain | Chroma is a database for building AI applications with embeddings.
2,175 | Chroma. Chroma is a database for building AI applications with embeddings.In the notebook, we'll demo the SelfQueryRetriever wrapped around a Chroma vector store. Creating a Chroma vector store. First we'll want to create a Chroma vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the chromadb package.#!pip install lark#!pip install chromadbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromaembeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( | Chroma is a database for building AI applications with embeddings.
2,176 | "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = Chroma.from_documents(docs, embeddings) Using embedded DuckDB without persistence: data will be transientCreating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", | Chroma is a database for building AI applications with embeddings. | Chroma is a database for building AI applications with embeddings. ->: "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = Chroma.from_documents(docs, embeddings) Using embedded DuckDB without persistence: data will be transientCreating our self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", |
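The "data will be transient" warning above comes from running Chroma purely in memory; to keep the demo index between runs, the store can be given a persist directory. A sketch (the ./chroma_movies path is illustrative):

from langchain.vectorstores import Chroma

# Persist the collection to disk so the seeded movie summaries survive restarts.
vectorstore = Chroma.from_documents(
    docs,
    embeddings,
    persist_directory="./chroma_movies",  # illustrative path
)
vectorstore.persist()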
2,177 | of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out. And now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about | Chroma is a database for building AI applications with embeddings.
2,178 | Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k. We can also use the self-query retriever to specify k: the number of documents to fetch. We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None | Chroma is a database for building AI applications with embeddings.
->: Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None |
2,179 | dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]PreviousDeep LakeNextDashVectorCreating a Chroma vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Chroma is a database for building AI applications with embeddings. | Chroma is a database for building AI applications with embeddings. ->: dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]PreviousDeep LakeNextDashVectorCreating a Chroma vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
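For readers who want to go one step further than raw get_relevant_documents calls, here is a hedged sketch (not part of the notebook above) of wiring the Chroma self-querying retriever into a question-answering chain; it assumes the `retriever` variable and OpenAI key from the preceding cells, and the question is illustrative.

```python
# Hedged sketch: feed the self-querying retriever into a RetrievalQA chain.
# Assumes `retriever` from the cells above and an OPENAI_API_KEY in the environment.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),  # LLM that writes the final answer
    chain_type="stuff",         # stuff the retrieved movie summaries into one prompt
    retriever=retriever,        # SelfQueryRetriever defined earlier
)

# The retriever first converts the question into a semantic query plus a metadata
# filter, fetches matching documents, and the LLM answers from them.
qa_chain.run("Which movies rated above 8 are about dreams?")
```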
2,180 | DashVector | 🦜️🔗 Langchain | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. ->: DashVector | 🦜️🔗 Langchain
2,181 | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements.
The vector retrieval service DashVector is based on the Proxima core of the efficient vector engine independently developed by DAMO Academy,
and provides a cloud-native, fully managed vector retrieval service with horizontal expansion capabilities.
DashVector exposes its powerful vector management, vector query and other diversified capabilities through a simple and
easy-to-use SDK/API interface, which can be quickly integrated by upper-layer AI applications to serve
a variety of application scenarios, including large model ecology, multi-modal AI search, and molecular structure analysis, and to | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. ->: DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements.
The vector retrieval service DashVector is based on the Proxima core of the efficient vector engine independently developed by DAMO Academy,
and provides a cloud-native, fully managed vector retrieval service with horizontal expansion capabilities.
DashVector exposes its powerful vector management, vector query and other diversified capabilities through a simple and
easy-to-use SDK/API interface, which can be quickly integrated by upper-layer AI applications to serve
a variety of application scenarios, including large model ecology, multi-modal AI search, and molecular structure analysis, and to
2,182 | provide the required efficient vector retrieval capabilities.In this notebook, we'll demo the SelfQueryRetriever with a DashVector vector store.Create DashVector vectorstore‚ÄãFirst we'll want to create a DashVector VectorStore and seed it with some data. We've created a small demo set of documents that contain summaries of movies.To use DashVector, you have to have dashvector package installed, and you must have an API key and an Environment. Here are the installation instructions.NOTE: The self-query retriever requires you to have lark package installed.# !pip install lark dashvectorimport osimport dashvectorclient = dashvector.Client(api_key=os.environ["DASHVECTOR_API_KEY"])from langchain.schema import Documentfrom langchain.embeddings import DashScopeEmbeddingsfrom langchain.vectorstores import DashVectorembeddings = DashScopeEmbeddings()# create DashVector collectionclient.create("langchain-self-retriever-demo", dimension=1536)docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "action"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. ->: provide the required efficient vector retrieval capabilities.In this notebook, we'll demo the SelfQueryRetriever with a DashVector vector store.Create DashVector vectorstore‚ÄãFirst we'll want to create a DashVector VectorStore and seed it with some data. We've created a small demo set of documents that contain summaries of movies.To use DashVector, you have to have dashvector package installed, and you must have an API key and an Environment. 
Here are the installation instructions.NOTE: The self-query retriever requires you to have lark package installed.# !pip install lark dashvectorimport osimport dashvectorclient = dashvector.Client(api_key=os.environ["DASHVECTOR_API_KEY"])from langchain.schema import Documentfrom langchain.embeddings import DashScopeEmbeddingsfrom langchain.vectorstores import DashVectorembeddings = DashScopeEmbeddings()# create DashVector collectionclient.create("langchain-self-retriever-demo", dimension=1536)docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "action"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", |
2,183 | the Zone, three men walk out of the Zone", metadata={ "year": 1979, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = DashVector.from_documents( docs, embeddings, collection_name="langchain-self-retriever-demo")Create your self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import Tongyifrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = Tongyi(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaurs' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.699999809265137, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Leo DiCaprio gets lost in a | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. ->: the Zone, three men walk out of the Zone", metadata={ "year": 1979, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = DashVector.from_documents( docs, embeddings, collection_name="langchain-self-retriever-demo")Create your self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import Tongyifrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = Tongyi(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaurs' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.699999809265137, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Leo DiCaprio gets lost in a |
2,184 | DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.199999809265137}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.600000381469727})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'rating': 9.899999618530273, 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.600000381469727})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='Greta Gerwig' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.300000190734863})]# This example specifies a composite filterretriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?") query='science fiction' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. 
->: DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.199999809265137}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.600000381469727})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'rating': 9.899999618530273, 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.600000381469727})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='Greta Gerwig' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.300000190734863})]# This example specifies a composite filterretriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?") query='science fiction' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', |
2,185 | into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'rating': 9.899999618530273, 'genre': 'science fiction'})]Filter k​We can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaurs' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.699999809265137, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousChromaNextElasticsearchCreate DashVector vectorstoreCreate your self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. | DashVector is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. ->: into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'rating': 9.899999618530273, 'genre': 'science fiction'})]Filter k​We can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaurs' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.699999809265137, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]PreviousChromaNextElasticsearchCreate DashVector vectorstoreCreate your self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
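The DashVector example above exercises metadata filters and the result limit separately; as a hedged sketch (assuming the limit-enabled `retriever` from the last cell), the two can also be combined in a single natural-language request. The structured query in the trailing comment is only the expected shape, not captured output.

```python
# Hedged sketch: one request that carries a semantic query, a filter, and a limit.
docs = retriever.get_relevant_documents(
    "what are two movies about dreams rated higher than 8"
)
for doc in docs:
    print(doc.metadata.get("rating"), doc.page_content)
# Expected shape of the translated query:
#   query='dreams' filter=Comparison(gt, 'rating', 8) limit=2
```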
2,186 | Elasticsearch | 🦜️🔗 Langchain | Elasticsearch is a distributed, RESTful search and analytics engine. | Elasticsearch is a distributed, RESTful search and analytics engine. ->: Elasticsearch | 🦜️🔗 Langchain
2,187 | Elasticsearch is a distributed, RESTful search and analytics engine.
It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free | Elasticsearch is a distributed, RESTful search and analytics engine. | Elasticsearch is a distributed, RESTful search and analytics engine. ->: Elasticsearch is a distributed, RESTful search and analytics engine.
It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free |
2,188 | JSON documents.In this notebook, we'll demo the SelfQueryRetriever with an Elasticsearch vector store.Creating an Elasticsearch vector store First, we'll want to create an Elasticsearch vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark installed (pip install lark). 
We also need the elasticsearch package.#!pip install lark elasticsearchfrom langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import ElasticsearchStoreimport osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")embeddings = OpenAIEmbeddings()docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = |
2,189 | "rating": 9.9, }, ),]vectorstore = ElasticsearchStore.from_documents( docs, embeddings, index_name="elasticsearch-self-query-demo", es_url="http://localhost:9200")Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), | Elasticsearch is a distributed, RESTful search and analytics engine. | Elasticsearch is a distributed, RESTful search and analytics engine. ->: "rating": 9.9, }, ),]vectorstore = ElasticsearchStore.from_documents( docs, embeddings, index_name="elasticsearch-self-query-demo", es_url="http://localhost:9200")Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out‚ÄãAnd now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), |
2,190 | Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Complex queries in Action!‚ÄãWe've tried out some simple queries, but what about more complex ones? Let's try out a few more complex queries that utilize the full power of Elasticsearch.retriever.get_relevant_documents("what animated or comedy movies have been released in the last 30 years about animated toys?") query='animated toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.OR: 'or'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated'), | Elasticsearch is a distributed, RESTful search and analytics engine. | Elasticsearch is a distributed, RESTful search and analytics engine. 
->: Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]Filter k‚ÄãWe can also use the self query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None limit=2 [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Complex queries in Action!‚ÄãWe've tried out some simple queries, but what about more complex ones? Let's try out a few more complex queries that utilize the full power of Elasticsearch.retriever.get_relevant_documents("what animated or comedy movies have been released in the last 30 years about animated toys?") query='animated toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.OR: 'or'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated'), |
2,191 | 'eq'>, attribute='genre', value='animated'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='comedy')]), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990)]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]vectorstore.client.indices.delete(index="elasticsearch-self-query-demo") ObjectApiResponse({'acknowledged': True})PreviousDashVectorNextMilvusCreating an Elasticsearch vector storeCreating our self-querying retrieverTesting it outFilter kComplex queries in Action!CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Elasticsearch is a distributed, RESTful search and analytics engine. | Elasticsearch is a distributed, RESTful search and analytics engine. ->: 'eq'>, attribute='genre', value='animated'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='comedy')]), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990)]) limit=None [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]vectorstore.client.indices.delete(index="elasticsearch-self-query-demo") ObjectApiResponse({'acknowledged': True})PreviousDashVectorNextMilvusCreating an Elasticsearch vector storeCreating our self-querying retrieverTesting it outFilter kComplex queries in Action!CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
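To see what the composite filters buy you, here is a hedged sketch for contrast; it assumes the same `vectorstore` and should be run before the index-deletion cell above. A plain similarity-search retriever cannot enforce the genre or year constraints at all and only ranks by embedding similarity, which is exactly the gap the self-querying retriever fills.

```python
# Hedged sketch: the same question through a plain (non-self-querying) retriever.
plain_retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
plain_retriever.get_relevant_documents(
    "what animated or comedy movies have been released in the last 30 years about animated toys?"
)
```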
2,192 | Timescale Vector (Postgres) self-querying | 🦜️🔗 Langchain | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. ->: Timescale Vector (Postgres) self-querying | 🦜️🔗 Langchain
2,193 | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL.This notebook shows how to use the Postgres vector database (TimescaleVector) to perform self-querying. 
In the notebook we'll demo the SelfQueryRetriever wrapped around a TimescaleVector vector store. What is Timescale Vector?‚ÄãTimescale Vector is PostgreSQL++ for AI applications.Timescale Vector enables you to efficiently store and query millions of vector embeddings in PostgreSQL.Enhances pgvector with faster and more accurate similarity search on 1B+ vectors via DiskANN inspired indexing algorithm.Enables fast time-based vector search via automatic time-based partitioning and indexing.Provides a familiar SQL interface for querying vector embeddings and relational data.Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series |
2,194 | metadata, vector embeddings, and time-series data in a single database.Benefits from rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high-availability and row-level security.Enables a worry-free experience with enterprise-grade security and compliance.How to access Timescale Vector Timescale Vector is available on Timescale, the cloud PostgreSQL platform. (There is no self-hosted version at this time.)LangChain users get a 90-day free trial for Timescale Vector.To get started, sign up for Timescale, create a new database and follow this notebook!See the Timescale Vector explainer blog for more details and performance benchmarks.See the installation instructions for more details on using Timescale Vector in Python.Creating a TimescaleVector vectorstore First we'll want to create a Timescale Vector vectorstore and seed it with some data. We've created a small demo set of documents that contain summaries of movies.NOTE: The self-query retriever requires you to have lark installed (pip install lark). 
We also need the timescale-vector package.#!pip install lark#!pip install timescale-vectorIn this example, we'll use OpenAIEmbeddings, so let's load your OpenAI API key.# Get openAI api key by reading local .env file# The .env file should contain a line starting with `OPENAI_API_KEY=sk-`import osfrom dotenv import load_dotenv, find_dotenv_ = load_dotenv(find_dotenv())OPENAI_API_KEY = os.environ['OPENAI_API_KEY']# Alternatively, use getpass to enter the key in a prompt#import os#import getpass#os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")To connect to your PostgreSQL database, you'll need your service URI, which can be found in the cheatsheet or .env file you downloaded after creating a new database. If you haven't already, signup for Timescale, and create a new database.The URI will look something like this: |
2,195 | database.The URI will look something like this: postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require# Get the service url by reading local .env file# The .env file should contain a line starting with `TIMESCALE_SERVICE_URL=postgresql://`_ = load_dotenv(find_dotenv())TIMESCALE_SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]# Alternatively, use getpass to enter the key in a prompt#import os#import getpass#TIMESCALE_SERVICE_URL = getpass.getpass("Timescale Service URL:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores.timescalevector import TimescaleVectorembeddings = OpenAIEmbeddings()Here's the sample documents we'll use for this demo. The data is about movies, and has both content and metadata fields with information about particular movie.docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. ->: database.The URI will look something like this: postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require# Get the service url by reading local .env file# The .env file should contain a line starting with `TIMESCALE_SERVICE_URL=postgresql://`_ = load_dotenv(find_dotenv())TIMESCALE_SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]# Alternatively, use getpass to enter the key in a prompt#import os#import getpass#TIMESCALE_SERVICE_URL = getpass.getpass("Timescale Service URL:")from langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores.timescalevector import TimescaleVectorembeddings = OpenAIEmbeddings()Here's the sample documents we'll use for this demo. 
The data is about movies, and has both content and metadata fields with information about particular movie.docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, |
2,196 | 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]Finally, we'll create our Timescale Vector vectorstore. Note that the collection name will be the name of the PostgreSQL table in which the documents are stored in.COLLECTION_NAME = "langchain_self_query_demo"vectorstore = TimescaleVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME, service_url=TIMESCALE_SERVICE_URL,)Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfo# Give LLM info about the metadata fieldsmetadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"# Instantiate the self-query retriever from an LLMllm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Self Querying Retrieval with Timescale Vector‚ÄãAnd now we can try actually using our retriever!Run the queries below and note how you can specify a query, filter, composite filter (filters with AND, OR) in natural language and the self-query retriever will translate | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. ->: 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]Finally, we'll create our Timescale Vector vectorstore. Note that the collection name will be the name of the PostgreSQL table in which the documents are stored in.COLLECTION_NAME = "langchain_self_query_demo"vectorstore = TimescaleVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME, service_url=TIMESCALE_SERVICE_URL,)Creating our self-querying retriever‚ÄãNow we can instantiate our retriever. 
To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfo# Give LLM info about the metadata fieldsmetadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"# Instantiate the self-query retriever from an LLMllm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Self Querying Retrieval with Timescale Vector‚ÄãAnd now we can try actually using our retriever!Run the queries below and note how you can specify a query, filter, composite filter (filters with AND, OR) in natural language and the self-query retriever will translate |
2,197 | and the self-query retriever will translate that query into SQL and perform the search on the Timescale Vector (Postgres) vectorstore.This illustrates the power of the self-query retriever. You can use it to perform complex searches over your vectorstore without you or your users having to write any SQL directly!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") /Users/avtharsewrathan/sideprojects2023/timescaleai/tsv-langchain/langchain/libs/langchain/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( query='dinosaur' filter=None limit=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. | Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL. ->: and the self-query retriever will translate that query into SQL and perform the search on the Timescale Vector (Postgres) vectorstore.This illustrates the power of the self-query retriever. You can use it to perform complex searches over your vectorstore without you or your users having to write any SQL directly!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") /Users/avtharsewrathan/sideprojects2023/timescaleai/tsv-langchain/langchain/libs/langchain/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. 
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")

    query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None

    [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}),
     Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}),
     Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'}),
     Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]
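The filter=Comparison(...) in the output above is LangChain's structured-query intermediate representation, which the Timescale Vector translator then turns into SQL. As a rough illustration of that data structure, assuming the query-constructor IR classes in langchain.chains.query_constructor.ir, an equivalent query could be assembled by hand:

```python
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)

# The same kind of object the LLM produces for
# "I want to watch a movie rated higher than 8.5".
rating_filter = Comparison(comparator=Comparator.GT, attribute="rating", value=8.5)

# Comparisons can be combined into composite filters with AND / OR.
composite_filter = Operation(
    operator=Operator.AND,
    arguments=[
        rating_filter,
        Comparison(comparator=Comparator.EQ, attribute="genre", value="science fiction"),
    ],
)

# A full structured query: semantic query string, filter, and optional limit.
structured_query = StructuredQuery(query=" ", filter=composite_filter, limit=None)
```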
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")

    query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None

    [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'}),
     Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'})]

# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")

    query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None

    [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}),
     Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'})]

# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
    query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None

    [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]

Filter k

We can also use the self-query retriever to specify k: the number of documents to fetch. We can do this by passing enable_limit=True to the constructor.

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)

# This example specifies a query with a LIMIT value
retriever.get_relevant_documents("what are two movies about dinosaurs")

    query='dinosaur' filter=None limit=2

    [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}),
     Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7})]
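Finally, the self-query retriever can be dropped into any chain that accepts a retriever. The snippet below is an illustrative sketch rather than part of the original guide; it assumes a ChatOpenAI model and the standard RetrievalQA chain:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# Build a question-answering chain on top of the self-query retriever.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
)

qa.run("Recommend a highly rated science fiction film and say who directed it.")
```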