# Using a Retriever

This example showcases question answering over an index.

```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
```

```
" The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
```
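The chain wraps a retriever built from the vector store, and you can also query that retriever directly to see which chunks would back an answer. A minimal sketch reusing the `docsearch` and `query` objects from above (`get_relevant_documents` is the standard LangChain retriever call):

```python
# Sketch: inspect what the retriever returns before it reaches the LLM.
retriever = docsearch.as_retriever()
relevant_docs = retriever.get_relevant_documents(query)
for doc in relevant_docs:
    # Each result is a Document carrying the chunk text and its source metadata.
    print(doc.metadata.get("source"), doc.page_content[:80])
```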
## Chain Type

You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see this notebook.

There are two ways to load different chain types. First, you can specify the chain type argument in the `from_chain_type` method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to `map_reduce`.

```python
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=docsearch.as_retriever())

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
```

```
" The president said that Judge Ketanji Brown Jackson is one of our nation's top legal minds, a former top litigator in private practice and a former federal public defender, from a family of public school educators and police officers, a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
```

This approach makes it simple to change the `chain_type`, but it doesn't provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:

```python
from langchain.chains.question_answering import load_qa_chain

qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
```

```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
```
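Loading the chain directly is also how you reach chain-type-specific parameters. A sketch of what that can look like for `map_reduce` — the `question_prompt` keyword is taken from `load_qa_chain`'s signature, but verify it against your LangChain version, and the prompt text itself is made up for illustration:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate

# Hypothetical custom "map" prompt applied to each retrieved chunk.
map_prompt = PromptTemplate(
    template=(
        "Use this excerpt to answer the question.\n"
        "{context}\nQuestion: {question}\nAnswer:"
    ),
    input_variables=["context", "question"],
)

# question_prompt is an assumed kwarg of load_qa_chain's map_reduce variant.
qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", question_prompt=map_prompt)
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
```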
## Custom Prompts

You can pass in custom prompts to do question answering. These prompts are the same prompts that you can pass into the base question answering chain.

```python
from langchain.prompts import PromptTemplate

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

chain_type_kwargs = {"prompt": PROMPT}
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
```

```
" Il presidente ha detto che Ketanji Brown Jackson è una delle menti legali più importanti del paese, che continuerà l'eccellenza di Justice Breyer e che ha ricevuto un ampio sostegno, da Fraternal Order of Police a ex giudici nominati da democratici e repubblicani."
```

## Vectorstore Retriever Options

You can adjust how documents are retrieved from your vectorstore depending on the specific task.

There are two main ways to retrieve documents relevant to a query: Similarity Search and Max Marginal Relevance Search (MMR Search). Similarity Search is the default, but you can use MMR by adding the `search_type` parameter:

```python
docsearch.as_retriever(search_type="mmr")
```

You can also modify the search by passing specific search arguments through the retriever to the search function, using the `search_kwargs` keyword argument.

- `k` defines how many documents are returned; defaults to 4.
- `score_threshold` allows you to set a minimum relevance for documents returned by the retriever, if you are using the `"similarity_score_threshold"` search type.
- `fetch_k` determines the number of documents to pass to the MMR algorithm; defaults to 20.
- `lambda_mult` controls the diversity of results returned by the MMR algorithm, with 1 being minimum diversity and 0 being maximum; defaults to 0.5.
- `filter` allows you to define a filter on which documents should be retrieved, based on the documents' metadata. This has no effect if the vectorstore doesn't store any metadata.

Some examples of how these parameters can be used:

```python
# Retrieve more documents with higher diversity —
# useful if your dataset has many similar documents
docsearch.as_retriever(search_type="mmr", search_kwargs={'k': 6, 'lambda_mult': 0.25})

# Fetch more documents for the MMR algorithm to consider, but only return the top 5
docsearch.as_retriever(search_type="mmr", search_kwargs={'k': 5, 'fetch_k': 50})

# Only retrieve documents that have a relevance score above a certain threshold
docsearch.as_retriever(search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8})

# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})

# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(search_kwargs={'filter': {'paper_title': 'GPT-4 Technical Report'}})
```

## Return Source Documents

Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain.

```python
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(search_type="mmr", search_kwargs={'fetch_k': 30}), return_source_documents=True)

query = "What did the president say about Ketanji Brown Jackson"
result = qa({"query": query})
result["result"]
```

```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and a former federal public defender from a family of public school educators and police officers, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
```
```python
result["source_documents"]
```

```
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]
```

Alternatively, if our documents have a "source" metadata key, we can use the `RetrievalQAWithSourcesChain` to cite our sources:

```python
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())

chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
```

```
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n', 'sources': '31-pl'}
```
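The `sources` value is just the documents' `source` metadata key, so you can join it back to the retrieved chunks yourself. A hedged sketch under that assumption, reusing the objects above; passing `return_source_documents=True` through `from_chain_type` mirrors how it works on `RetrievalQA`, but verify against your LangChain version:

```python
# Sketch: map the cited source ids back to the retrieved chunks.
chain_with_docs = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,  # assumed to be forwarded to the chain
)
output = chain_with_docs({"question": "What did the president say about Justice Breyer"})
cited = {s.strip() for s in output["sources"].split(",")}
for doc in output["source_documents"]:
    marker = "*" if doc.metadata["source"] in cited else " "
    print(marker, doc.metadata["source"], doc.page_content[:60])
```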
# StarRocks

StarRocks is a High-Performance Analytical Database.
StarRocks is a next-gen, sub-second MPP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries.

StarRocks is usually categorized as an OLAP database, and it has shown excellent performance in ClickBench — a Benchmark For Analytical DBMS. Since it has a super-fast vectorized execution engine, it can also be used as a fast vectordb.

Here we'll show how to use the StarRocks Vector Store.

## Setup

```python
#!pip install pymysql
```

Set `update_vectordb = False` at the beginning. If no docs have been updated, we don't need to rebuild the embeddings of the docs.

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import StarRocks
from langchain.vectorstores.starrocks import StarRocksSettings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter
from langchain.llms import OpenAI
from langchain.chains import VectorDBQA
from langchain.document_loaders import DirectoryLoader
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader, UnstructuredMarkdownLoader

update_vectordb = False
```

```
/Users/dirlt/utils/py3env/lib/python3.9/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.7) or chardet (5.1.0)/charset_normalizer (2.0.9) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
```

## Load docs and split them into tokens

Load all markdown files under the `docs` directory.

For StarRocks documents, you can clone the repo from https://github.com/StarRocks/starrocks, which contains a `docs` directory.

```python
loader = DirectoryLoader(
    "./docs", glob="**/*.md", loader_cls=UnstructuredMarkdownLoader
)
documents = loader.load()
```
Split docs into tokens, and set `update_vectordb = True` because there are new docs/tokens.

```python
# load text splitter and split docs into snippets of text
text_splitter = TokenTextSplitter(chunk_size=400, chunk_overlap=50)
split_docs = text_splitter.split_documents(documents)

# tell vectordb to update text embeddings
update_vectordb = True
```

```python
split_docs[-20]
```

```
Document(page_content='Compile StarRocks with Docker\n\nThis topic describes how to compile StarRocks using Docker.\n\nOverview\n\nStarRocks provides development environment images for both Ubuntu 22.04 and CentOS 7.9. With the image, you can launch a Docker container and compile StarRocks in the container.\n\nStarRocks version and DEV ENV image\n\nDifferent branches of StarRocks correspond to different development environment images provided on StarRocks Docker Hub.\n\nFor Ubuntu 22.04:\n\n| Branch name | Image name |\n | --------------- | ----------------------------------- |\n | main | starrocks/dev-env-ubuntu:latest |\n | branch-3.0 | starrocks/dev-env-ubuntu:3.0-latest |\n | branch-2.5 | starrocks/dev-env-ubuntu:2.5-latest |\n\nFor CentOS 7.9:\n\n| Branch name | Image name |\n | --------------- | ------------------------------------ |\n | main | starrocks/dev-env-centos7:latest |\n | branch-3.0 | starrocks/dev-env-centos7:3.0-latest |\n | branch-2.5 | starrocks/dev-env-centos7:2.5-latest |\n\nPrerequisites\n\nBefore compiling StarRocks, make sure the following requirements are satisfied:\n\nHardware\n\n', metadata={'source': 'docs/developers/build-starrocks/Build_in_docker.md'})
```

```python
print("# docs = %d, # splits = %d" % (len(documents), len(split_docs)))
```

```
# docs = 657, # splits = 2802
```

## Create vectordb instance

### Use StarRocks as vectordb

```python
def gen_starrocks(update_vectordb, embeddings, settings):
    if update_vectordb:
        docsearch = StarRocks.from_documents(split_docs, embeddings, config=settings)
    else:
        docsearch = StarRocks(embeddings, settings)
    return docsearch
```
## Convert tokens into embeddings and put them into vectordb

Here we use StarRocks as the vectordb; you can configure the StarRocks instance via `StarRocksSettings`.

Configuring a StarRocks instance is pretty much like configuring a MySQL instance. You need to specify:

- host/port
- username (default: 'root')
- password (default: '')
- database (default: 'default')
- table (default: 'langchain')

```python
embeddings = OpenAIEmbeddings()

# configure starrocks settings (host/port/user/pw/db)
settings = StarRocksSettings()
settings.port = 41003
settings.host = "127.0.0.1"
settings.username = "root"
settings.password = ""
settings.database = "zya"
docsearch = gen_starrocks(update_vectordb, embeddings, settings)

print(docsearch)

update_vectordb = False
```

```
Inserting data...: 100%|██████████| 2802/2802 [02:26<00:00, 19.11it/s]

zya.langchain @ 127.0.0.1:41003

username: root

Table Schema:
----------------------------------------------------------------------------
|name      |type           |key  |
----------------------------------------------------------------------------
|id        |varchar(65533) |true |
|document  |varchar(65533) |false|
|embedding |array<float>   |false|
|metadata  |varchar(65533) |false|
----------------------------------------------------------------------------
```

## Build QA and ask it a question

```python
llm = OpenAI()
qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=docsearch.as_retriever()
)
query = "is profile enabled by default? if not, how to enable profile?"
resp = qa.run(query)
print(resp)
```

```
No, profile is not enabled by default. To enable profile, set the variable `enable_profile` to `true` using the command `set enable_profile = true;`
```
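Before wiring the store into a chain, it can help to sanity-check retrieval on its own. A minimal sketch using the standard vector-store `similarity_search` call, with the names defined above:

```python
# Sketch: query the StarRocks vector store directly to inspect the raw chunks
# the retriever would hand to the LLM.
hits = docsearch.similarity_search("how to enable profile", k=3)
for doc in hits:
    print(doc.metadata.get("source"), "->", doc.page_content[:80])
```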
# AwaDB

AwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.

This notebook shows how to use functionality related to AwaDB.

```python
pip install awadb
```

```python
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import AwaDB
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

db = AwaDB.from_documents(docs)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
```

```
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```

## Similarity search with score

The returned distance score is between 0 and 1: 0 is dissimilar, 1 is the most similar.
```python
docs = db.similarity_search_with_score(query)
print(docs[0])
```

```
(Document(page_content='And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt'}), 0.561813814013747)
```

## Restore the table created and added data before

AwaDB automatically persists added document data.

If you want to restore the table you created and added data to before, you can just do this:

```python
import awadb

awadb_client = awadb.Client()
ret = awadb_client.Load("langchain_awadb")
if ret:
    print("awadb load table success")
else:
    print("awadb load table failed")
```

```
awadb load table success
```
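AwaDB also plugs into the rest of LangChain through the shared vector-store interface. A small sketch — `as_retriever` and `get_relevant_documents` come from LangChain's base `VectorStore`/retriever API rather than anything AwaDB-specific:

```python
# Sketch: use the AwaDB store as a retriever, capping results at 3 documents.
retriever = db.as_retriever(search_kwargs={"k": 3})
relevant = retriever.get_relevant_documents(query)
for doc in relevant:
    print(doc.page_content[:80])
```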
1,917 | Neo4j Vector Index | ü¶úÔ∏èüîó Langchain | Neo4j is an open-source graph database with integrated support for vector similarity search | Neo4j is an open-source graph database with integrated support for vector similarity search ->: Neo4j Vector Index | ü¶úÔ∏èüîó Langchain |
1,918 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesNeo4j Vector IndexOn this pageNeo4j Vector IndexNeo4j is an open-source graph database with integrated support for vector similarity searchIt supports:approximate nearest neighbor searchEuclidean similarity and cosine similarityHybrid search combining vector and keyword searchesThis notebook shows how to use the Neo4j vector index (Neo4jVector).See the installation instruction.# Pip install necessary packagepip install neo4jpip install openaipip install tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ¬∑¬∑¬∑¬∑¬∑¬∑¬∑¬∑from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Neo4jVectorfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = | Neo4j is an open-source graph database with integrated support for vector similarity search | Neo4j is an open-source graph database with integrated support for vector similarity search ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesNeo4j Vector IndexOn this pageNeo4j Vector IndexNeo4j is an open-source graph database with integrated support for vector similarity searchIt supports:approximate nearest neighbor searchEuclidean similarity and cosine similarityHybrid search combining vector and keyword searchesThis notebook shows how to use the Neo4j vector index (Neo4jVector).See the installation 
instruction.# Pip install necessary packagepip install neo4jpip install openaipip install tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ¬∑¬∑¬∑¬∑¬∑¬∑¬∑¬∑from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Neo4jVectorfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = |
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

# Neo4jVector requires the Neo4j database credentials
url = "bolt://localhost:7687"
username = "neo4j"
password = "pleaseletmein"

# You can also use environment variables instead of directly passing named parameters
# os.environ["NEO4J_URI"] = "bolt://localhost:7687"
# os.environ["NEO4J_USERNAME"] = "neo4j"
# os.environ["NEO4J_PASSWORD"] = "pleaseletmein"

Similarity Search with Cosine Distance (Default)

# The Neo4jVector module will connect to Neo4j and create a vector index if needed.
db = Neo4jVector.from_documents(
    docs, OpenAIEmbeddings(), url=url, username=username, password=password
)

    /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future.
      self._driver.verify_connectivity()

query = "What did the president say about Ketanji Brown Jackson"
docs_with_score = db.similarity_search_with_score(query, k=2)

for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print("-" * 80)

    --------------------------------------------------------------------------------
    Score:  0.9099836349487305
    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of
    Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
    --------------------------------------------------------------------------------
    --------------------------------------------------------------------------------
    Score:  0.9099686145782471
    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
    --------------------------------------------------------------------------------

Working with vectorstore

Above, we created a vectorstore from scratch. However, we often want to work with an existing vectorstore.
In order to do that, we can initialize it directly.

index_name = "vector"  # default index name

store = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    index_name=index_name,
)

    /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future.
      self._driver.verify_connectivity()

We can also initialize a vectorstore from an existing graph using the from_existing_graph method. This method pulls the relevant text information from the database, then calculates the text embeddings and stores them back in the database.

# First we create sample data in the graph
store.query(
    "CREATE (p:Person {name: 'Tomaz', location:'Slovenia', hobby:'Bicycle'})"
)

    []

# Now we initialize from the existing graph
existing_graph = Neo4jVector.from_existing_graph(
    embedding=OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    index_name="person_index",
    node_label="Person",
    text_node_properties=["name", "location"],
    embedding_node_property="embedding",
)
result = existing_graph.similarity_search("Slovenia", k=1)

    /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future.
      self._driver.verify_connectivity()

result[0]

    Document(page_content='\nname: Tomaz\nlocation: Slovenia', metadata={'hobby': 'Bicycle'})

Add documents

We can add documents to the existing vectorstore.

store.add_documents([Document(page_content="foo")])

    ['187fc53a-5dde-11ee-ad78-1f6b05bf8513']

docs_with_score = store.similarity_search_with_score("foo")
docs_with_score[0]

    (Document(page_content='foo', metadata={}), 1.0)
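Documents added this way can also carry metadata, which is stored alongside the text. A minimal sketch, assuming the same store as above (the metadata key and value are illustrative, not from the original page):

store.add_documents(
    # hypothetical document with illustrative metadata
    [Document(page_content="bar", metadata={"topic": "example"})]
)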
Hybrid search (vector + keyword)

Neo4j integrates both vector and keyword indexes, which allows you to use a hybrid search approach.

# The Neo4jVector module will connect to Neo4j and create vector and keyword indexes if needed.
hybrid_db = Neo4jVector.from_documents(
    docs,
    OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    search_type="hybrid",
)

    /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future.
      self._driver.verify_connectivity()

To load the hybrid search from existing indexes, you have to provide both the vector and keyword indexes.

index_name = "vector"  # default index name
keyword_index_name = "keyword"  # default keyword index name

store = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    index_name=index_name,
    keyword_index_name=keyword_index_name,
    search_type="hybrid",
)

    /home/tomaz/neo4j/langchain/libs/langchain/langchain/vectorstores/neo4j_vector.py:165: ExperimentalWarning: The configuration may change in the future.
      self._driver.verify_connectivity()

Retriever options

This section shows how to use Neo4jVector as a retriever.

retriever = store.as_retriever()
retriever.get_relevant_documents(query)[0]

    Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt'})
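The retriever can be tuned at construction time with the standard as_retriever options. A small sketch (the k value here is illustrative, not from the original page):

# limit the retriever to the top 2 documents per query
retriever = store.as_retriever(search_kwargs={"k": 2})
retriever.get_relevant_documents(query)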
Question Answering with Sources

This section goes over how to do question-answering with sources over an index. It uses the RetrievalQAWithSourcesChain, which looks the documents up in an index.

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI

chain = RetrievalQAWithSourcesChain.from_chain_type(
    ChatOpenAI(temperature=0), chain_type="stuff", retriever=retriever
)
chain(
    {"question": "What did the president say about Justice Breyer"},
    return_only_outputs=True,
)

    {'answer': "The president honored Justice Stephen Breyer, who is retiring from the United States Supreme Court. He thanked him for his service and mentioned that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy of excellence. \n",
     'sources': '../../modules/state_of_the_union.txt'}
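To inspect which documents the answer was built from, the chain should accept the standard return_source_documents flag shared by the QA-with-sources chains; a sketch under that assumption:

chain = RetrievalQAWithSourcesChain.from_chain_type(
    ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,  # assumed standard flag; adds the retrieved Documents to the result
)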
Activeloop Deep Lake | 🦜️🔗 Langchain
Activeloop Deep Lake is a Multi-Modal Vector Store that stores embeddings and their metadata, including text, JSONs, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage, and it performs hybrid search across embeddings and their attributes.

This notebook showcases basic functionality related to Activeloop Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data: it is a serverless data lake with version control, a query engine, and streaming dataloaders for deep learning frameworks. For more information, please see the Deep Lake documentation or API reference.

Setting up

pip install openai 'deeplake[enterprise]' tiktoken

Example provided by Activeloop

Integration with LangChain.

Deep Lake locally

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
activeloop_token = getpass.getpass("activeloop token:")
embeddings = OpenAIEmbeddings()

from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

Create a local dataset

Create a dataset locally at ./my_deeplake/, then run similarity search. The Deep Lake + LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in the Deep Lake storage, adjust the path accordingly.

db = DeepLake(
    dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True
)
db.add_documents(docs)
# or shorter
# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)

Query dataset

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)

    Dataset(path='./my_deeplake/', tensors=['embedding', 'id', 'metadata', 'text'])

      tensor      htype       shape      dtype  compression
     ---------  ---------  ----------  -------  -----------
     embedding  embedding  (42, 1536)  float32  None
     id         text       (42, 1)     str      None
     metadata   json       (42, 1)     str      None
     text       text       (42, 1)     str      None

To disable dataset summary printings all the time, you can specify verbose=False during VectorStore initialization.

print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

    One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

    And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
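The dataset summary shown above can be silenced at construction time, as the note above describes; a minimal sketch using that parameter:

db = DeepLake(
    dataset_path="./my_deeplake/",
    embedding=embeddings,
    overwrite=True,
    verbose=False,  # suppress the dataset summary printout
)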
Later, you can reload the dataset without recomputing embeddings.

db = DeepLake(
    dataset_path="./my_deeplake/", embedding=embeddings, read_only=True
)
docs = db.similarity_search(query)

    Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage

Deep Lake, for now, supports a single writer and multiple readers. Setting read_only=True helps to avoid acquiring the writer lock.

Retrieval Question/Answering

from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat

qa = RetrievalQA.from_chain_type(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    chain_type="stuff",
    retriever=db.as_retriever(),
)

    /home/ubuntu/langchain_activeloop/langchain/libs/langchain/langchain/llms/openai.py:786: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
      warnings.warn(

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

    'The president said that Ketanji Brown Jackson is a former top litigator in private practice and a former federal public defender. She comes from a family of public school educators and police officers. She is a consensus builder and has received a broad range of support since being nominated.'
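The warning above points at the supported chat-model class; a sketch of the suggested replacement (behavior assumed equivalent):

from langchain.chat_models import ChatOpenAI

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),  # chat-model class recommended by the warning
    chain_type="stuff",
    retriever=db.as_retriever(),
)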
Attribute based filtering in metadata

Let's create another vector store containing metadata with the year the documents were created.

import random

for d in docs:
    d.metadata["year"] = random.randint(2012, 2014)

db = DeepLake.from_documents(
    docs, embeddings, dataset_path="./my_deeplake/", overwrite=True
)

    Dataset(path='./my_deeplake/', tensors=['embedding', 'id', 'metadata', 'text'])

      tensor      htype      shape     dtype  compression
     ---------  ---------  ---------  -------  -----------
     embedding  embedding  (4, 1536)  float32  None
     id         text       (4, 1)     str      None
     metadata   json       (4, 1)     str      None
     text       text       (4, 1)     str      None

db.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    filter={"metadata": {"year": 2013}},
)

    100%|██████████| 4/4 [00:00<00:00, 2936.16it/s]

    [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013})]
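The filter dict matches on nested metadata keys, so the same call shape works for any stored value; a small sketch (the year value is illustrative):

db.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    filter={"metadata": {"year": 2012}},  # select documents tagged with a different year
)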
Choosing distance function

The available distance functions are L2 for Euclidean, L1 for nuclear, max for L-infinity distance, cos for cosine similarity, and dot for dot product.

db.similarity_search(
    "What did the president say about Ketanji Brown Jackson?", distance_metric="cos"
)

    [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2012})]
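Any of the metrics listed above can be passed the same way; a minimal sketch using Euclidean distance (metric name taken from the list above, exact casing assumed):

db.similarity_search(
    "What did the president say about Ketanji Brown Jackson?",
    distance_metric="L2",  # Euclidean distance instead of cosine
)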
Maximal Marginal relevance

Using maximal marginal relevance:

db.max_marginal_relevance_search(
    "What did the president say about Ketanji Brown Jackson?"
)

    [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2013}),
     Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2012})]
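MMR takes the usual knobs for the candidate pool and final count; a sketch with the standard LangChain parameters (values illustrative):

db.max_marginal_relevance_search(
    "What did the president say about Ketanji Brown Jackson?",
    k=2,          # number of documents to return
    fetch_k=10,   # number of candidates fetched before re-ranking for diversity
)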
1,934 | we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2012})]Delete dataset​db.delete_dataset() and if delete fails you can also force deleteDeepLake.force_delete_by_path("./my_deeplake") Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory​By default, Deep Lake datasets are stored locally. To store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path and credentials when creating the vector store. Some paths require registration with Activeloop and creation of an API token that can be retrieved hereos.environ["ACTIVELOOP_TOKEN"] = activeloop_token# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing_python" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.docs = text_splitter.split_documents(documents)embedding = OpenAIEmbeddings()db = DeepLake(dataset_path=dataset_path, embedding=embeddings, overwrite=True)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! Dataset(path='hub://adilkhan/langchain_testing_python', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../modules/state_of_the_union.txt', 'year': 2012})]Delete dataset​db.delete_dataset() and if delete fails you can also force deleteDeepLake.force_delete_by_path("./my_deeplake") Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory​By default, Deep Lake datasets are stored locally. To store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path and credentials when creating the vector store. 
Some paths require registration with Activeloop and creation of an API token that can be retrieved here. os.environ["ACTIVELOOP_TOKEN"] = activeloop_token# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing_python" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DeepLake(dataset_path=dataset_path, embedding=embeddings, overwrite=True)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! Dataset(path='hub://adilkhan/langchain_testing_python', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: Some paths require registration with Activeloop and creation of an API token that can be retrieved here. os.environ["ACTIVELOOP_TOKEN"] = activeloop_token# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing_python" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DeepLake(dataset_path=dataset_path, embedding=embeddings, overwrite=True)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! Dataset(path='hub://adilkhan/langchain_testing_python', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id
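A minimal clean-up sketch may help tie the delete calls above together: it tries the normal delete first and falls back to the force delete only when that raises. The db handle and the "./my_deeplake" path are assumptions carried over from the local-dataset example earlier on this page.

from langchain.vectorstores import DeepLake

# Sketch: remove a local Deep Lake dataset, force-deleting only if the
# regular call fails (e.g. a corrupted or locked dataset directory).
try:
    db.delete_dataset()
except Exception:
    DeepLake.force_delete_by_path("./my_deeplake")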
1,935 | (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.tensor_db execution option​In order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps.# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing"docs = text_splitter.split_documents(documents)embedding = OpenAIEmbeddings()db = DeepLake( | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.tensor_db execution option To use Deep Lake's Managed Tensor Database, specify the runtime parameter as {'tensor_db': True} when creating the vector store. Queries then execute on the Managed Tensor Database rather than on the client side. Note that this does not apply to datasets stored locally or in memory. If a vector store was already created outside of the Managed Tensor Database, you can transfer it into the Managed Tensor Database with the following steps.# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing"docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DeepLake( | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.tensor_db execution option To use Deep Lake's Managed Tensor Database, specify the runtime parameter as {'tensor_db': True} when creating the vector store. Queries then execute on the Managed Tensor Database rather than on the client side. Note that this does not apply to datasets stored locally or in memory. If a vector store was already created outside of the Managed Tensor Database, you can transfer it into the Managed Tensor Database with the following steps.# Embed and store the textsusername = "<USERNAME_OR_ORG>" # your username on app.activeloop.aidataset_path = f"hub://{username}/langchain_testing"docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DeepLake(
1,936 | = OpenAIEmbeddings()db = DeepLake( dataset_path=dataset_path, embedding=embeddings, overwrite=True, runtime={"tensor_db": True},)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! | Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None TQL Search‚ÄãFurthermore, the execution of queries is also supported within the similarity_search method, whereby the query can be specified utilizing Deep Lake's Tensor Query Language (TQL).search_id = db.vectorstore.dataset.id[0].numpy()search_id[0] '8a6ff326-3a85-11ee-b840-13905694aaaf'docs = db.similarity_search( query=None, tql=f"SELECT * WHERE id == '{search_id[0]}'",)db.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None Creating vector stores on AWS S3‚Äãdataset_path = f"s3://BUCKET/langchain_test" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.embedding = OpenAIEmbeddings()db = DeepLake.from_documents( docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds={ "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"], "aws_secret_access_key": | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: = OpenAIEmbeddings()db = DeepLake( dataset_path=dataset_path, embedding=embeddings, overwrite=True, runtime={"tensor_db": True},)ids = db.add_documents(docs) Your Deep Lake dataset has been successfully created! 
| Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None TQL Search Queries can also be executed inside the similarity_search method by writing them in Deep Lake's Tensor Query Language (TQL).search_id = db.vectorstore.dataset.id[0].numpy()search_id[0] '8a6ff326-3a85-11ee-b840-13905694aaaf'docs = db.similarity_search( query=None, tql=f"SELECT * WHERE id == '{search_id[0]}'",)db.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None Creating vector stores on AWS S3 dataset_path = f"s3://BUCKET/langchain_test" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.embeddings = OpenAIEmbeddings()db = DeepLake.from_documents( docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds={ "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"], "aws_secret_access_key": | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: | Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None TQL Search Queries can also be executed inside the similarity_search method by writing them in Deep Lake's Tensor Query Language (TQL).search_id = db.vectorstore.dataset.id[0].numpy()search_id[0] '8a6ff326-3a85-11ee-b840-13905694aaaf'docs = db.similarity_search( query=None, tql=f"SELECT * WHERE id == '{search_id[0]}'",)db.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None Creating vector stores on AWS S3 dataset_path = f"s3://BUCKET/langchain_test" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.embeddings = OpenAIEmbeddings()db = DeepLake.from_documents( docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds={ "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"], "aws_secret_access_key":
1,937 | "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"], "aws_session_token": os.environ["AWS_SESSION_TOKEN"], # Optional },) s3://hub-2.0-datasets-n/langchain_test loaded successfully. Evaluating ingest: 100%|‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà| 1/1 [00:10<00:00 \ Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Deep Lake API‚Äãyou can access the Deep Lake dataset at db.vectorstore# get structure of the datasetdb.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None # get embeddings numpy arrayembeds = db.vectorstore.dataset.embedding.numpy()Transfer local dataset to cloud‚ÄãCopy already created dataset to the cloud. You can also transfer from cloud to local.import deeplakeusername = "davitbun" # your username on app.activeloop.aisource = f"hub://{username}/langchain_testing" # could be local, s3, gcs, etc.destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.deeplake.deepcopy(src=source, dest=destination, overwrite=True) Copying dataset: 100%|‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà| 56/56 [00:38<00:00 This dataset can be visualized in Jupyter Notebook by ds.visualize() or at | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"], "aws_session_token": os.environ["AWS_SESSION_TOKEN"], # Optional },) s3://hub-2.0-datasets-n/langchain_test loaded successfully. Evaluating ingest: 100%|‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà| 1/1 [00:10<00:00 \ Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Deep Lake API‚Äãyou can access the Deep Lake dataset at db.vectorstore# get structure of the datasetdb.vectorstore.summary() Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (42, 1536) float32 None id text (42, 1) str None metadata json (42, 1) str None text text (42, 1) str None # get embeddings numpy arrayembeds = db.vectorstore.dataset.embedding.numpy()Transfer local dataset to cloud‚ÄãCopy already created dataset to the cloud. 
You can also transfer from cloud to local.import deeplakeusername = "davitbun" # your username on app.activeloop.aisource = f"hub://{username}/langchain_testing" # could be local, s3, gcs, etc.destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.deeplake.deepcopy(src=source, dest=destination, overwrite=True) Copying dataset: 100%|██████████| 56/56 [00:38<00:00 This dataset can be visualized in Jupyter Notebook by ds.visualize() or at | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: You can also transfer from cloud to local.import deeplakeusername = "davitbun" # your username on app.activeloop.aisource = f"hub://{username}/langchain_testing" # could be local, s3, gcs, etc.destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.deeplake.deepcopy(src=source, dest=destination, overwrite=True) Copying dataset: 100%|██████████| 56/56 [00:38<00:00 This dataset can be visualized in Jupyter Notebook by ds.visualize() or at
1,938 | in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy Your Deep Lake dataset has been successfully created! The dataset is private so make sure you are logged in! Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])db = DeepLake(dataset_path=destination, embedding=embeddings)db.add_documents(docs) This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy / hub://davitbun/langchain_test_copy loaded successfully. Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Evaluating ingest: 100%|‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà‚ñà| 1/1 [00:31<00:00 - Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (8, 1536) float32 None ids text (8, 1) str None metadata json (8, 1) str None text text (8, 1) str None ['ad42f3fe-e188-11ed-b66d-41c5f7b85421', 'ad42f3ff-e188-11ed-b66d-41c5f7b85421', 'ad42f400-e188-11ed-b66d-41c5f7b85421', 'ad42f401-e188-11ed-b66d-41c5f7b85421']PreviousVector storesNextAlibaba Cloud OpenSearchSetting upExample provided by ActiveloopDeep Lake locallyCreate a local datasetQuery datasetRetrieval | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy Your Deep Lake dataset has been successfully created! The dataset is private so make sure you are logged in! Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])db = DeepLake(dataset_path=destination, embedding=embeddings)db.add_documents(docs) This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy / hub://davitbun/langchain_test_copy loaded successfully. 
Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00 - Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (8, 1536) float32 None ids text (8, 1) str None metadata json (8, 1) str None text text (8, 1) str None ['ad42f3fe-e188-11ed-b66d-41c5f7b85421', 'ad42f3ff-e188-11ed-b66d-41c5f7b85421', 'ad42f400-e188-11ed-b66d-41c5f7b85421', 'ad42f401-e188-11ed-b66d-41c5f7b85421']PreviousVector storesNextAlibaba Cloud OpenSearchSetting upExample provided by ActiveloopDeep Lake locallyCreate a local datasetQuery datasetRetrieval | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00 - Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (8, 1536) float32 None ids text (8, 1) str None metadata json (8, 1) str None text text (8, 1) str None ['ad42f3fe-e188-11ed-b66d-41c5f7b85421', 'ad42f3ff-e188-11ed-b66d-41c5f7b85421', 'ad42f400-e188-11ed-b66d-41c5f7b85421', 'ad42f401-e188-11ed-b66d-41c5f7b85421']PreviousVector storesNextAlibaba Cloud OpenSearchSetting upExample provided by ActiveloopDeep Lake locallyCreate a local datasetQuery datasetRetrieval
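The deepcopy call above also works in the opposite direction, as the text notes. A short sketch of the cloud-to-local transfer follows; the local destination path is a hypothetical example.

import deeplake

# Sketch: copy the cloud dataset from the example above back to a local path.
deeplake.deepcopy(
    src="hub://davitbun/langchain_test_copy",  # cloud source created above
    dest="./langchain_test_local",             # hypothetical local destination
    overwrite=True,
)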
1,939 | a local datasetQuery datasetRetrieval Question/AnsweringAttribute based filtering in metadataChoosing distance functionMaximal Marginal relevanceDelete datasetDeep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memoryTQL SearchCreating vector stores on AWS S3Deep Lake APITransfer local dataset to cloudCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. | Activeloop Deep Lake as a Multi-Modal Vector Store that stores embeddings and their metadata including text, Jsons, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes. ->: a local datasetQuery datasetRetrieval Question/AnsweringAttribute based filtering in metadataChoosing distance functionMaximal Marginal relevanceDelete datasetDeep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memoryTQL SearchCreating vector stores on AWS S3Deep Lake APITransfer local dataset to cloudCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
1,940 | scikit-learn | 🦜️🔗 Langchain | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. ->: scikit-learn | 🦜️🔗 Langchain
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesscikit-learnOn this pagescikit-learnscikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.This notebook shows how to use the SKLearnVectorStore vector database.# # if you plan to use bson serialization, install also:# %pip install bson# # if you plan to use parquet serialization, install also:%pip install pandas pyarrowTo use OpenAI embeddings, you will need an OpenAI key. You can get one at https://platform.openai.com/account/api-keys or feel free to use any other embeddings.import osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI key:")Basic usage Load a sample document corpus from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.
->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesscikit-learnOn this pagescikit-learnscikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.This notebook shows how to use the SKLearnVectorStore vector database.# # if you plan to use bson serialization, install also:# %pip install bson# # if you plan to use parquet serialization, install also:%pip install pandas pyarrowTo use OpenAI embeddings, you will need an OpenAI key. You can get one at https://platform.openai.com/account/api-keys or feel free to use any other embeddings.import osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI key:")Basic usage Load a sample document corpus from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import
1,942 | langchain.vectorstores import SKLearnVectorStorefrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Create the SKLearnVectorStore, index the document corpus and run a sample query​import tempfilepersist_path = os.path.join(tempfile.gettempdir(), "union.parquet")vector_store = SKLearnVectorStore.from_documents( documents=docs, embedding=embeddings, persist_path=persist_path, # persist_path and serializer are optional serializer="parquet",)query = "What did the president say about Ketanji Brown Jackson"docs = vector_store.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Saving and loading a vector store​vector_store.persist()print("Vector store was persisted to", persist_path) Vector store was persisted to /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquetvector_store2 = SKLearnVectorStore( embedding=embeddings, persist_path=persist_path, serializer="parquet")print("A new instance of vector store was | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. ->: langchain.vectorstores import SKLearnVectorStorefrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Create the SKLearnVectorStore, index the document corpus and run a sample query​import tempfilepersist_path = os.path.join(tempfile.gettempdir(), "union.parquet")vector_store = SKLearnVectorStore.from_documents( documents=docs, embedding=embeddings, persist_path=persist_path, # persist_path and serializer are optional serializer="parquet",)query = "What did the president say about Ketanji Brown Jackson"docs = vector_store.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Saving and loading a vector store​vector_store.persist()print("Vector store was persisted to", persist_path) Vector store was persisted to /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquetvector_store2 = SKLearnVectorStore( embedding=embeddings, persist_path=persist_path, serializer="parquet")print("A new instance of vector store was |
1,943 | new instance of vector store was loaded from", persist_path) A new instance of vector store was loaded from /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquetdocs = vector_store2.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Clean-up​os.remove(persist_path)PreviousSingleStoreDBNextsqlite-vssBasic usageLoad a sample document corpusCreate the SKLearnVectorStore, index the document corpus and run a sample querySaving and loading a vector storeClean-upCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. | scikit-learn is an open-source collection of machine learning algorithms, including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. ->: new instance of vector store was loaded from", persist_path) A new instance of vector store was loaded from /var/folders/6r/wc15p6m13nl_nl_n_xfqpc5c0000gp/T/union.parquetdocs = vector_store2.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Clean-up​os.remove(persist_path)PreviousSingleStoreDBNextsqlite-vssBasic usageLoad a sample document corpusCreate the SKLearnVectorStore, index the document corpus and run a sample querySaving and loading a vector storeClean-upCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
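Since SKLearnVectorStore also supports json and bson serialization, here is a minimal sketch of the json variant; only persist_path and serializer change relative to the parquet example above, and docs and embeddings are assumed from that walkthrough.

import os
import tempfile
from langchain.vectorstores import SKLearnVectorStore

# Sketch: persist the same corpus with the JSON serializer instead of parquet.
json_path = os.path.join(tempfile.gettempdir(), "union.json")
json_store = SKLearnVectorStore.from_documents(
    documents=docs,        # `docs` and `embeddings` from the example above
    embedding=embeddings,
    persist_path=json_path,
    serializer="json",
)
json_store.persist()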
1,944 | Tencent Cloud VectorDB | 🦜️🔗 Langchain | Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service. | Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service. ->: Tencent Cloud VectorDB | 🦜️🔗 Langchain
1,945 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesTencent Cloud VectorDBTencent Cloud VectorDBTencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.This notebook shows how to use functionality related to the Tencent vector database.To run, you should have a Database instance. pip3 install tcvectordbfrom langchain.embeddings.fake import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import | Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service. | Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.
->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesTencent Cloud VectorDBTencent Cloud VectorDBTencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.This notebook shows how to use functionality related to the Tencent vector database.To run, you should have a Database instance. pip3 install tcvectordbfrom langchain.embeddings.fake import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import
1,946 | langchain.vectorstores import TencentVectorDBfrom langchain.vectorstores.tencentvectordb import ConnectionParamsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = FakeEmbeddings(size=128)conn_params = ConnectionParams(url="http://10.0.X.X", key="eC4bLRy2va******************************", username="root", timeout=20)vector_db = TencentVectorDB.from_documents( docs, embeddings, connection_params=conn_params, # drop_old=True,)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_contentvector_db = TencentVectorDB(embeddings, conn_params)vector_db.add_texts(["Ankush went to Princeton"])query = "Where did Ankush go to college?"docs = vector_db.max_marginal_relevance_search(query)docs[0].page_contentPreviousTairNextTigrisCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service. | Tencent Cloud VectorDB is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data. The database supports multiple index types and similarity calculation methods. A single index can support a vector scale of up to 1 billion and can support millions of QPS and millisecond-level query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service. ->: langchain.vectorstores import TencentVectorDBfrom langchain.vectorstores.tencentvectordb import ConnectionParamsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = FakeEmbeddings(size=128)conn_params = ConnectionParams(url="http://10.0.X.X", key="eC4bLRy2va******************************", username="root", timeout=20)vector_db = TencentVectorDB.from_documents( docs, embeddings, connection_params=conn_params, # drop_old=True,)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_contentvector_db = TencentVectorDB(embeddings, conn_params)vector_db.add_texts(["Ankush went to Princeton"])query = "Where did Ankush go to college?"docs = vector_db.max_marginal_relevance_search(query)docs[0].page_contentPreviousTairNextTigrisCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
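To serve the "external knowledge base" use case described above, the store can be wrapped as a retriever for a QA chain. This is a sketch, not part of the original notebook: it assumes an OpenAI API key is configured, and the FakeEmbeddings used above are placeholders, so swap in real embeddings before expecting meaningful answers.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Sketch: answer questions over documents stored in Tencent Cloud VectorDB.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=vector_db.as_retriever(),  # `vector_db` from the example above
)
qa.run("Where did Ankush go to college?")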
1,947 | Redis | 🦜️🔗 Langchain | Redis vector database introduction and langchain integration guide. | Redis vector database introduction and langchain integration guide. ->: Redis | 🦜️🔗 Langchain
1,948 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesRedisOn this pageRedisRedis vector database introduction and langchain integration guide.What is Redis? Most developers from a web services background are probably familiar with Redis. At its core, Redis is an open-source key-value store that can be used as a cache, message broker, and database. Developers choose Redis because it is fast, has a large ecosystem of client libraries, and has been deployed by major enterprises for years.On top of these traditional use cases, Redis provides additional capabilities like the Search and Query capability that allows users to create secondary index structures within Redis. This allows Redis to be a Vector Database, at the speed of a cache. Redis as a Vector Database Redis uses compressed, inverted indexes for fast indexing with a low memory footprint. It also supports a number of advanced features such as:Indexing of multiple fields in Redis hashes and JSONVector similarity search (with HNSW (ANN) or FLAT (KNN))Vector Range Search (e.g. find all vectors | Redis vector database introduction and langchain integration guide. | Redis vector database introduction and langchain integration guide. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesRedisOn this pageRedisRedis vector database introduction and langchain integration guide.What is Redis? Most developers from a web services background are probably familiar with Redis. At its core, Redis is an open-source key-value store that can be used as a cache, message broker, and database.
Developers choose Redis because it is fast, has a large ecosystem of client libraries, and has been deployed by major enterprises for years.On top of these traditional use cases, Redis provides additional capabilities like the Search and Query capability that allows users to create secondary index structures within Redis. This allows Redis to be a Vector Database, at the speed of a cache. Redis as a Vector Database​Redis uses compressed, inverted indexes for fast indexing with a low memory footprint. It also supports a number of advanced features such as:Indexing of multiple fields in Redis hashes and JSONVector similarity search (with HNSW (ANN) or FLAT (KNN))Vector Range Search (e.g. find all vectors |
1,949 | (KNN))Vector Range Search (e.g. find all vectors within a radius of a query vector)Incremental indexing without performance lossDocument ranking (using tf-idf, with optional user-provided weights)Field weightingComplex boolean queries with AND, OR, and NOT operatorsPrefix matching, fuzzy matching, and exact-phrase queriesSupport for double-metaphone phonetic matchingAuto-complete suggestions (with fuzzy prefix suggestions)Stemming-based query expansion in many languages (using Snowball)Support for Chinese-language tokenization and querying (using Friso)Numeric filters and rangesGeospatial searches using Redis geospatial indexingA powerful aggregations engineSupport for all utf-8 encoded textRetrieve full documents, selected fields, or only the document IDsSorting results (for example, by creation date)Clients Since Redis is much more than just a vector database, there are often use cases that demand usage of a Redis client besides just the langchain integration. You can use any standard Redis client library to run Search and Query commands, but it's easiest to use a library that wraps the Search and Query API. | Redis vector database introduction and langchain integration guide. | Redis vector database introduction and langchain integration guide. ->: (KNN))Vector Range Search (e.g. find all vectors within a radius of a query vector)Incremental indexing without performance lossDocument ranking (using tf-idf, with optional user-provided weights)Field weightingComplex boolean queries with AND, OR, and NOT operatorsPrefix matching, fuzzy matching, and exact-phrase queriesSupport for double-metaphone phonetic matchingAuto-complete suggestions (with fuzzy prefix suggestions)Stemming-based query expansion in many languages (using Snowball)Support for Chinese-language tokenization and querying (using Friso)Numeric filters and rangesGeospatial searches using Redis geospatial indexingA powerful aggregations engineSupport for all utf-8 encoded textRetrieve full documents, selected fields, or only the document IDsSorting results (for example, by creation date)Clients Since Redis is much more than just a vector database, there are often use cases that demand usage of a Redis client besides just the langchain integration. You can use any standard Redis client library to run Search and Query commands, but it's easiest to use a library that wraps the Search and Query API.
Below are a few examples, but you can find more client libraries here. Project / Language / License / Author / Stars: jedis (Java, MIT, Redis), redisvl (Python, MIT, Redis), redis-py (Python, MIT, Redis), node-redis (Node.js, MIT, Redis), nredisstack (.NET, MIT, Redis). Deployment Options There are many ways to deploy Redis with RediSearch. The easiest way to get started is to use Docker, but there are many potential options for deployment such as Redis CloudDocker (Redis Stack)Cloud marketplaces: AWS Marketplace, Google Marketplace, or Azure MarketplaceOn-premise: Redis Enterprise SoftwareKubernetes: Redis Enterprise Software on KubernetesExamples Many examples can be found in the Redis AI team's GitHubAwesome Redis AI Resources - List of examples of using Redis in AI workloadsAzure OpenAI Embeddings Q&A - OpenAI and Redis as a Q&A service on Azure.ArXiv Paper Search - Semantic search over arXiv scholarly | Redis vector database introduction and langchain integration guide. | Redis vector database introduction and langchain integration guide. ->: Below are a few examples, but you can find more client libraries here. Project / Language / License / Author / Stars: jedis (Java, MIT, Redis), redisvl (Python, MIT, Redis), redis-py (Python, MIT, Redis), node-redis (Node.js, MIT, Redis), nredisstack (.NET, MIT, Redis). Deployment Options There are many ways to deploy Redis with RediSearch. The easiest way to get started is to use Docker, but there are many potential options for deployment such as Redis CloudDocker (Redis Stack)Cloud marketplaces: AWS Marketplace, Google Marketplace, or Azure MarketplaceOn-premise: Redis Enterprise SoftwareKubernetes: Redis Enterprise Software on KubernetesExamples Many examples can be found in the Redis AI team's GitHubAwesome Redis AI Resources - List of examples of using Redis in AI workloadsAzure OpenAI Embeddings Q&A - OpenAI and Redis as a Q&A service on Azure.ArXiv Paper Search - Semantic search over arXiv scholarly
1,950 | Search - Semantic search over arXiv scholarly papersVector Search on Azure - Vector search on Azure using Azure Cache for Redis and Azure OpenAIMore Resources​For more information on how to use Redis as a vector database, check out the following resources:RedisVL Documentation - Documentation for the Redis Vector Library ClientRedis Vector Similarity Docs - Redis official docs for Vector Search.Redis-py Search Docs - Documentation for redis-py client libraryVector Similarity Search: From Basics to Production - Introductory blog post to VSS and Redis as a VectorDB.Install Redis Python Client​Redis-py is the officially supported client by Redis. Recently released is the RedisVL client which is purpose-built for the Vector Database use cases. Both can be installed with pip.pip install redis redisvl openai tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()Sample Data​First we will describe some sample data so that the various attributes of the Redis vector store can be demonstrated.metadata = [ { "user": "john", "age": 18, "job": "engineer", "credit_score": "high", }, { "user": "derrick", "age": 45, "job": "doctor", "credit_score": "low", }, { "user": "nancy", "age": 94, "job": "doctor", "credit_score": "high", }, { "user": "tyler", "age": 100, "job": "engineer", "credit_score": "high", }, { "user": "joe", "age": 35, "job": "dentist", "credit_score": "medium", },]texts = ["foo", "foo", "foo", "bar", "bar"]Initializing Redis​To locally deploy Redis, run:docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latestIf things are running correctly you should see a nice Redis UI at http://localhost:8001. | Redis vector database introduction and langchain integration guide. | Redis vector database introduction and langchain integration guide. ->: Search - Semantic search over arXiv scholarly papersVector Search on Azure - Vector search on Azure using Azure Cache for Redis and Azure OpenAIMore Resources​For more information on how to use Redis as a vector database, check out the following resources:RedisVL Documentation - Documentation for the Redis Vector Library ClientRedis Vector Similarity Docs - Redis official docs for Vector Search.Redis-py Search Docs - Documentation for redis-py client libraryVector Similarity Search: From Basics to Production - Introductory blog post to VSS and Redis as a VectorDB.Install Redis Python Client​Redis-py is the officially supported client by Redis. Recently released is the RedisVL client which is purpose-built for the Vector Database use cases. 
Both can be installed with pip.pip install redis redisvl openai tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()Sample Data​First we will describe some sample data so that the various attributes of the Redis vector store can be demonstrated.metadata = [ { "user": "john", "age": 18, "job": "engineer", "credit_score": "high", }, { "user": "derrick", "age": 45, "job": "doctor", "credit_score": "low", }, { "user": "nancy", "age": 94, "job": "doctor", "credit_score": "high", }, { "user": "tyler", "age": 100, "job": "engineer", "credit_score": "high", }, { "user": "joe", "age": 35, "job": "dentist", "credit_score": "medium", },]texts = ["foo", "foo", "foo", "bar", "bar"]Initializing Redis​To locally deploy Redis, run:docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latestIf things are running correctly you should see a nice Redis UI at http://localhost:8001. |
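Before wiring Redis into langchain, it can be worth a quick connectivity check with the plain redis-py client installed above. A minimal sketch against the local Docker deployment:

import redis

# Sketch: verify the Redis Stack container started above is reachable.
client = redis.Redis(host="localhost", port=6379)
assert client.ping()  # raises a connection error if the server is not up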
1,951 | see a nice Redis UI at http://localhost:8001. See the Deployment Options section above for other ways to deploy.The Redis VectorStore instance can be initialized in a number of ways. There are multiple class methods that can be used to initialize a Redis VectorStore instance.Redis.__init__ - Initialize directlyRedis.from_documents - Initialize from a list of Langchain.docstore.Document objectsRedis.from_texts - Initialize from a list of texts (optionally with metadata)Redis.from_texts_return_keys - Initialize from a list of texts (optionally with metadata) and return the keysRedis.from_existing_index - Initialize from an existing Redis indexBelow we will use the Redis.from_texts method.from langchain.vectorstores.redis import Redisrds = Redis.from_texts( texts, embeddings, metadatas=metadata, redis_url="redis://localhost:6379", index_name="users")rds.index_name 'users'Inspecting the Created Index Once the Redis VectorStore object has been constructed, an index will have been created in Redis if it did not already exist. The index can be inspected with both the rvl and the redis-cli command line tools. If you installed redisvl above, you can use the rvl command line tool to inspect the index.# assumes you're running Redis locally (use --host, --port, --password, --username to change this)rvl index listall 16:58:26 [RedisVL] INFO Indices: 16:58:26 [RedisVL] INFO 1. users The Redis VectorStore implementation will attempt to generate index schema (fields for filtering) for any metadata passed through the from_texts, from_texts_return_keys, and from_documents methods. This way, whatever metadata is passed will be indexed into the Redis search index allowing | Redis vector database introduction and langchain integration guide. | Redis vector database introduction and langchain integration guide. ->: see a nice Redis UI at http://localhost:8001. See the Deployment Options section above for other ways to deploy.The Redis VectorStore instance can be initialized in a number of ways. There are multiple class methods that can be used to initialize a Redis VectorStore instance.Redis.__init__ - Initialize directlyRedis.from_documents - Initialize from a list of Langchain.docstore.Document objectsRedis.from_texts - Initialize from a list of texts (optionally with metadata)Redis.from_texts_return_keys - Initialize from a list of texts (optionally with metadata) and return the keysRedis.from_existing_index - Initialize from an existing Redis indexBelow we will use the Redis.from_texts method.from langchain.vectorstores.redis import Redisrds = Redis.from_texts( texts, embeddings, metadatas=metadata, redis_url="redis://localhost:6379", index_name="users")rds.index_name 'users'Inspecting the Created Index Once the Redis VectorStore object has been constructed, an index will have been created in Redis if it did not already exist. The index can be inspected with both the rvl and the redis-cli command line tools. If you installed redisvl above, you can use the rvl command line tool to inspect the index.# assumes you're running Redis locally (use --host, --port, --password, --username to change this)rvl index listall 16:58:26 [RedisVL] INFO Indices: 16:58:26 [RedisVL] INFO 1. users The Redis VectorStore implementation will attempt to generate index schema (fields for filtering) for any metadata passed through the from_texts, from_texts_return_keys, and from_documents methods. This way, whatever metadata is passed will be indexed into the Redis search index allowing
Inspecting the Created Index

Once the Redis VectorStore object has been constructed, an index will have been created in Redis if it did not already exist. The index can be inspected with both the rvl and the redis-cli command-line tools. If you installed redisvl above, you can use the rvl command-line tool to inspect the index.

# assumes you're running Redis locally (use --host, --port, --password, --username, to change this)
rvl index listall
    16:58:26 [RedisVL] INFO   Indices:
    16:58:26 [RedisVL] INFO   1. users

The Redis VectorStore implementation will attempt to generate an index schema (fields for filtering) for any metadata passed through the from_texts, from_texts_return_keys, and from_documents methods. This way, whatever metadata is passed will be indexed into the Redis search index, allowing for filtering on those fields.

Below we show what fields were created from the metadata we defined above.

rvl index info -i users

    Index Information:
    ╭──────────────┬────────────────┬───────────────┬─────────────────┬────────────╮
    │ Index Name   │ Storage Type   │ Prefixes      │ Index Options   │   Indexing │
    ├──────────────┼────────────────┼───────────────┼─────────────────┼────────────┤
    │ users        │ HASH           │ ['doc:users'] │ []              │          0 │
    ╰──────────────┴────────────────┴───────────────┴─────────────────┴────────────╯
    Index Fields:
    ╭────────────────┬────────────────┬─────────┬────────────────┬────────────────╮
    │ Name           │ Attribute      │ Type    │ Field Option   │   Option Value │
    ├────────────────┼────────────────┼─────────┼────────────────┼────────────────┤
    │ user           │ user           │ TEXT    │ WEIGHT         │              1 │
    │ job            │ job            │ TEXT    │ WEIGHT         │              1 │
    │ credit_score   │ credit_score   │ TEXT    │ WEIGHT         │              1 │
    │ content        │ content        │ TEXT    │ WEIGHT         │              1 │
    │ age            │ age            │ NUMERIC │                │                │
    │ content_vector │ content_vector │ VECTOR  │                │                │
    ╰────────────────┴────────────────┴─────────┴────────────────┴────────────────╯
rvl stats -i users

    Statistics:
    ╭─────────────────────────────┬─────────────╮
    │ Stat Key                    │ Value       │
    ├─────────────────────────────┼─────────────┤
    │ num_docs                    │ 5           │
    │ num_terms                   │ 15          │
    │ max_doc_id                  │ 5           │
    │ num_records                 │ 33          │
    │ percent_indexed             │ 1           │
    │ hash_indexing_failures      │ 0           │
    │ number_of_uses              │ 4           │
    │ bytes_per_record_avg        │ 4.60606     │
    │ doc_table_size_mb           │ 0.000524521 │
    │ inverted_sz_mb              │ 0.000144958 │
    │ key_table_size_mb           │ 0.000193596 │
    │ offset_bits_per_record_avg  │ 8           │
    │ offset_vectors_sz_mb        │ 2.19345e-05 │
    │ offsets_per_term_avg        │ 0.69697     │
    │ records_per_doc_avg         │ 6.6         │
    │ sortable_values_size_mb     │ 0           │
    │ total_indexing_time         │ 0.32        │
    │ total_inverted_index_blocks │ 16          │
    │ vector_index_sz_mb          │ 6.0126      │
    ╰─────────────────────────────┴─────────────╯
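If you prefer the plain redis-cli over rvl, the underlying RediSearch FT.INFO command exposes the same index metadata; a minimal sketch, assuming the local deployment above:

redis-cli FT.INFO users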
It's important to note that we have not specified that the user, job, credit_score and age fields in the metadata should be fields within the index; this is because the Redis VectorStore object automatically generates the index schema from the passed metadata.
For more information on the generation of index fields, see the API documentation.

Querying

There are multiple ways to query the Redis VectorStore implementation, depending on your use case:

similarity_search: Find the most similar vectors to a given vector.
similarity_search_with_score: Find the most similar vectors to a given vector and return the vector distance.
similarity_search_limit_score: Find the most similar vectors to a given vector and limit the results by score_threshold.
similarity_search_with_relevance_scores: Find the most similar vectors to a given vector and return the vector similarities.
max_marginal_relevance_search: Find the most similar vectors to a given vector while also optimizing for diversity.

results = rds.similarity_search("foo")
print(results[0].page_content)
    foo

# return metadata
results = rds.similarity_search("foo", k=3)
meta = results[1].metadata
print("Key of the document in Redis: ", meta.pop("id"))
print("Metadata of the document: ", meta)
    Key of the document in Redis:  doc:users:a70ca43b3a4e4168bae57c78753a200f
    Metadata of the document:  {'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}

# with scores (distances)
results = rds.similarity_search_with_score("foo", k=5)
for result in results:
    print(f"Content: {result[0].page_content} --- Score: {result[1]}")
    Content: foo --- Score: 0.0
    Content: foo --- Score: 0.0
    Content: foo --- Score: 0.0
    Content: bar --- Score: 0.1566
    Content: bar --- Score: 0.1566
# limit the vector distance that can be returned
results = rds.similarity_search_with_score("foo", k=5, distance_threshold=0.1)
for result in results:
    print(f"Content: {result[0].page_content} --- Score: {result[1]}")
    Content: foo --- Score: 0.0
    Content: foo --- Score: 0.0
    Content: foo --- Score: 0.0

# with scores
results = rds.similarity_search_with_relevance_scores("foo", k=5)
for result in results:
    print(f"Content: {result[0].page_content} --- Similarity: {result[1]}")
    Content: foo --- Similarity: 1.0
    Content: foo --- Similarity: 1.0
    Content: foo --- Similarity: 1.0
    Content: bar --- Similarity: 0.8434
    Content: bar --- Similarity: 0.8434

(Note that these relevance scores are simply one minus the distances from the previous example: 1 - 0.1566 = 0.8434.)

# limit scores (similarities have to be over .9)
results = rds.similarity_search_with_relevance_scores("foo", k=5, score_threshold=0.9)
for result in results:
    print(f"Content: {result[0].page_content} --- Similarity: {result[1]}")
    Content: foo --- Similarity: 1.0
    Content: foo --- Similarity: 1.0
    Content: foo --- Similarity: 1.0

# you can also add new documents as follows
new_document = ["baz"]
new_metadata = [{"user": "sam", "age": 50, "job": "janitor", "credit_score": "high"}]
# both the document and metadata must be lists
rds.add_texts(new_document, new_metadata)
    ['doc:users:b9c71d62a0a34241a37950b448dafd38']

# now query the new document
results = rds.similarity_search("baz", k=3)
print(results[0].metadata)
    {'id': 'doc:users:b9c71d62a0a34241a37950b448dafd38', 'user': 'sam', 'job': 'janitor', 'credit_score': 'high', 'age': '50'}

# use maximal marginal relevance search to diversify results
results = rds.max_marginal_relevance_search("foo")
# the lambda_mult parameter controls the diversity of the results, the lower the more diverse
results = rds.max_marginal_relevance_search("foo", lambda_mult=0.1)
Connect to an Existing Index

In order to have the same metadata indexed when using the Redis VectorStore, you will need to have the same index_schema passed in, either as a path to a YAML file or as a dictionary. The following shows how to obtain the schema from an index and connect to an existing index.

# write the schema to a yaml file
rds.write_schema("redis_schema.yaml")

The schema file for this example should look something like:

numeric:
- name: age
  no_index: false
  sortable: false
text:
- name: user
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: job
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: credit_score
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: content
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
vector:
- algorithm: FLAT
  block_size: 1000
  datatype: FLOAT32
  dims: 1536
  distance_metric: COSINE
  initial_cap: 20000
  name: content_vector

Notice that this includes all possible fields for the schema.
You can remove any fields that you don't need.

# now we can connect to our existing index as follows
new_rds = Redis.from_existing_index(
    embeddings,
    index_name="users",
    redis_url="redis://localhost:6379",
    schema="redis_schema.yaml",
)
results = new_rds.similarity_search("foo", k=3)
print(results[0].metadata)
    {'id': 'doc:users:8484c48a032d4c4cbe3cc2ed6845fabb', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}

# see the schemas are the same
new_rds.schema == rds.schema
    True
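Since the schema can also be passed as a dictionary, here is a hedged sketch of making the same connection without the intermediate YAML file (an assumption: the generated schema dict seen in the comparison above can be handed back in directly):

# reconnect using the in-memory schema dict rather than the yaml file
new_rds = Redis.from_existing_index(
    embeddings,
    index_name="users",
    redis_url="redis://localhost:6379",
    schema=rds.schema,  # assumption: the generated schema dict is accepted as-is
)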
Custom Metadata Indexing

In some cases, you may want to control what fields the metadata maps to. For example, you may want the credit_score field to be a categorical field instead of a text field (which is the default behavior for all string fields). In this case, you can use the index_schema parameter in each of the initialization methods above to specify the schema for the index. A custom index schema can either be passed as a dictionary or as a path to a YAML file.

All arguments in the schema have defaults besides the name, so you can specify only the fields you want to change. All the names correspond to the snake/lowercase versions of the arguments you would use on the command line with redis-cli or in redis-py. For more on the arguments for each field, see the documentation.

The below example shows how to specify the schema for the credit_score field as a Tag (categorical) field instead of a text field.

# index_schema.yml
tag:
  - name: credit_score
text:
  - name: user
  - name: job
numeric:
  - name: age

In Python this would look like:

index_schema = {
    "tag": [{"name": "credit_score"}],
    "text": [{"name": "user"}, {"name": "job"}],
    "numeric": [{"name": "age"}],
}

Notice that only the name field needs to be specified. All other fields have defaults.

# create a new index with the new schema defined above
index_schema = {
    "tag": [{"name": "credit_score"}],
    "text": [{"name": "user"}, {"name": "job"}],
    "numeric": [{"name": "age"}],
}
rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users_modified",
    index_schema=index_schema,  # pass in the new index schema
)
    `index_schema` does not match generated metadata schema.
    If you meant to manually override the schema, please ignore this message.
    index_schema: {'tag': [{'name': 'credit_score'}], 'text': [{'name': 'user'}, {'name': 'job'}], 'numeric': [{'name': 'age'}]}
    generated_schema: {'text': [{'name': 'user'}, {'name': 'job'}, {'name': 'credit_score'}], 'numeric': [{'name': 'age'}], 'tag': []}

The above warning is meant to notify users when they are overriding the default behavior. Ignore it if you are intentionally overriding the behavior.
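As a sanity check, you can confirm that credit_score is now indexed as a TAG field by inspecting the new index with the rvl tool from earlier (a sketch, assuming the default local deployment):

rvl index info -i users_modified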
Hybrid Filtering

With the Redis Filter Expression language built into langchain, you can create arbitrarily long chains of hybrid filters that can be used to filter your search results. The expression language is derived from the RedisVL Expression Syntax
and is designed to be easy to use and understand.

The following are the available filter types:

RedisText: Filter by full-text search against metadata fields. Supports exact, fuzzy, and wildcard matching.
RedisNum: Filter by numeric range against metadata fields.
RedisTag: Filter by exact match against string-based categorical metadata fields.
Multiple tags can be specified like "tag1,tag2,tag3".

The following are examples of utilizing these filters.

from langchain.vectorstores.redis import RedisText, RedisNum, RedisTag

# exact matching
has_high_credit = RedisTag("credit_score") == "high"
does_not_have_high_credit = RedisTag("credit_score") != "low"

# fuzzy matching
job_starts_with_eng = RedisText("job") % "eng*"
job_is_engineer = RedisText("job") == "engineer"
job_is_not_engineer = RedisText("job") != "engineer"

# numeric filtering
age_is_18 = RedisNum("age") == 18
age_is_not_18 = RedisNum("age") != 18
age_is_greater_than_18 = RedisNum("age") > 18
age_is_less_than_18 = RedisNum("age") < 18
age_is_greater_than_or_equal_to_18 = RedisNum("age") >= 18
age_is_less_than_or_equal_to_18 = RedisNum("age") <= 18

The RedisFilter class can be used to simplify the import of these filters as follows:

from langchain.vectorstores.redis import RedisFilter

# same examples as above
has_high_credit = RedisFilter.tag("credit_score") == "high"
does_not_have_high_credit = RedisFilter.num("age") > 8
job_starts_with_eng = RedisFilter.text("job") % "eng*"

The following are examples of using a hybrid filter for search:

from langchain.vectorstores.redis import RedisText

is_engineer = RedisText("job") == "engineer"
results = rds.similarity_search("foo", k=3, filter=is_engineer)
print("Job:", results[0].metadata["job"])
print("Engineers in the dataset:", len(results))
    Job: engineer
    Engineers in the dataset: 2

# fuzzy match
starts_with_doc = RedisText("job") % "doc*"
results = rds.similarity_search("foo", k=3, filter=starts_with_doc)
for result in results:
    print("Job:", result.metadata["job"])
print("Jobs in dataset that start with 'doc':", len(results))
    Job: doctor
    Job: doctor
    Jobs in dataset that start with 'doc': 2

from langchain.vectorstores.redis import RedisNum

is_over_18 = RedisNum("age") > 18
is_under_99 = RedisNum("age") < 99
age_range = is_over_18 & is_under_99
results = rds.similarity_search("foo", filter=age_range)
for result in results:
    print("User:", result.metadata["user"], "is", result.metadata["age"])
    User: derrick is 45
    User: nancy is 94
    User: joe is 35

# make sure to use parentheses around FilterExpressions
# if initializing them while constructing them
age_range = (RedisNum("age") > 18) & (RedisNum("age") < 99)
results = rds.similarity_search("foo", filter=age_range)
for result in results:
    print("User:", result.metadata["user"], "is", result.metadata["age"])
    User: derrick is 45
    User: nancy is 94
    User: joe is 35
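Filters of different types compose in the same way, so you can mix text, tag, and numeric conditions in one expression. A hedged sketch of a hypothetical compound query, assuming the users_modified index above where credit_score is a Tag field:

from langchain.vectorstores.redis import RedisFilter

# hypothetical: engineers with a high credit score, aged 18 or older
compound_filter = (
    (RedisFilter.text("job") == "engineer")
    & (RedisFilter.tag("credit_score") == "high")
    & (RedisFilter.num("age") >= 18)
)
results = rds.similarity_search("foo", k=5, filter=compound_filter)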
Redis as Retriever

Here we go over different options for using the vector store as a retriever. There are three different search methods we can use to do retrieval. By default, it will use semantic similarity.

query = "foo"
results = rds.similarity_search_with_score(query, k=3, return_metadata=True)
for result in results:
    print("Content:", result[0].page_content, " --- Score: ", result[1])
    Content: foo  --- Score:  0.0
    Content: foo  --- Score:  0.0
    Content: foo  --- Score:  0.0

retriever = rds.as_retriever(search_type="similarity", search_kwargs={"k": 4})
docs = retriever.get_relevant_documents(query)
docs
    [Document(page_content='foo', metadata={'id': 'doc:users_modified:988ecca7574048e396756efc0e79aeca', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}),
     Document(page_content='foo', metadata={'id': 'doc:users_modified:009b1afeb4084cc6bdef858c7a99b48e', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}),
     Document(page_content='foo', metadata={'id': 'doc:users_modified:7087cee9be5b4eca93c30fbdd09a2731', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'}),
     Document(page_content='bar', metadata={'id': 'doc:users_modified:01ef6caac12b42c28ad870aefe574253', 'user': 'tyler', 'job': 'engineer', 'credit_score': 'high', 'age': '100'})]

There is also the similarity_distance_threshold retriever, which allows the user to specify the vector distance:

retriever = rds.as_retriever(search_type="similarity_distance_threshold", search_kwargs={"k": 4, "distance_threshold": 0.1})
docs = retriever.get_relevant_documents(query)
docs
    [Document(page_content='foo', metadata={'id': 'doc:users_modified:988ecca7574048e396756efc0e79aeca', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}),
     Document(page_content='foo', metadata={'id': 'doc:users_modified:009b1afeb4084cc6bdef858c7a99b48e', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}),
     Document(page_content='foo', metadata={'id': 'doc:users_modified:7087cee9be5b4eca93c30fbdd09a2731', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'})]

Lastly, the similarity_score_threshold allows the user to define the minimum score for similar documents:

retriever = rds.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.9, "k": 10})
retriever.get_relevant_documents("foo")
    [Document(page_content='foo', metadata={'id': 'doc:users_modified:988ecca7574048e396756efc0e79aeca', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}),
     Document(page_content='foo', metadata={'id': 'doc:users_modified:009b1afeb4084cc6bdef858c7a99b48e', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'}),
     Document(page_content='foo', metadata={'id': 'doc:users_modified:7087cee9be5b4eca93c30fbdd09a2731', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'})]
retriever = rds.as_retriever(search_type="mmr", search_kwargs={"fetch_k": 20, "k": 4, "lambda_mult": 0.1})
retriever.get_relevant_documents("foo")
    [Document(page_content='foo', metadata={'id': 'doc:users:8f6b673b390647809d510112cde01a27', 'user': 'john', 'job': 'engineer', 'credit_score': 'high', 'age': '18'}),
     Document(page_content='bar', metadata={'id': 'doc:users:93521560735d42328b48c9c6f6418d6a', 'user': 'tyler', 'job': 'engineer', 'credit_score': 'high', 'age': '100'}),
     Document(page_content='foo', metadata={'id': 'doc:users:125ecd39d07845eabf1a699d44134a5b', 'user': 'nancy', 'job': 'doctor', 'credit_score': 'high', 'age': '94'}),
     Document(page_content='foo', metadata={'id': 'doc:users:d6200ab3764c466082fde3eaab972a2a', 'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '45'})]

Delete keys

To delete your entries you have to address them by their keys.

Redis.delete(keys, redis_url="redis://localhost:6379")
    True

# delete the indices too
Redis.drop_index(index_name="users", delete_documents=True, redis_url="redis://localhost:6379")
Redis.drop_index(index_name="users_modified", delete_documents=True, redis_url="redis://localhost:6379")
    True

Redis connection URL examples

Valid Redis URL schemes are:

redis:// - Connection to Redis standalone, unencrypted
rediss:// - Connection to Redis standalone, with TLS encryption
redis+sentinel:// - Connection to Redis server via Redis Sentinel, unencrypted
rediss+sentinel:// - Connection to Redis server via Redis Sentinel, both connections with TLS encryption

More information about additional connection parameters can be found in the redis-py documentation at https://redis-py.readthedocs.io/en/stable/connections.html

# connection to redis standalone at localhost, db 0, no password
redis_url = "redis://localhost:6379"
# connection to host "redis" port 7379 with db 2 and password "secret" (old style authentication scheme without username / pre 6.x)
redis_url = "redis://:secret@redis:7379/2"
# connection to host redis on default port with user "joe", pass "secret" using redis version 6+ ACLs
redis_url = "redis://joe:secret@redis/0"
# connection to sentinel at localhost with default group mymaster and db 0, no password
redis_url = "redis+sentinel://localhost:26379"
# connection to sentinel at host redis with default port 26379 and user "joe" with password "secret" with default group mymaster and db 0
redis_url = "redis+sentinel://joe:secret@redis"
# connection to sentinel, no auth with sentinel monitoring group "zone-1" and database 2
redis_url = "redis+sentinel://redis:26379/zone-1/2"
# connection to redis standalone at localhost, db 0, no password but with TLS support
redis_url = "rediss://localhost:6379"
# connection to redis sentinel at localhost and default port, db 0, no password
# but with TLS support for both Sentinel and Redis server
redis_url = "rediss+sentinel://localhost"
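Any of the URL forms above can be used wherever redis_url appears in the earlier examples; a minimal sketch with a hypothetical TLS endpoint:

rds = Redis.from_texts(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="rediss://my-redis.example.com:6379",  # hypothetical TLS endpoint
    index_name="users",
)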
SingleStoreDB
SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage and vector functions, including dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.

This tutorial illustrates how to work with vector data in SingleStoreDB.

# Establishing a connection to the database is facilitated through the singlestoredb Python connector.
# Please ensure that this connector is installed in your working environment.
pip install singlestoredb

import os
import getpass

# We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SingleStoreDB
from langchain.document_loaders import TextLoader
# Load text samples
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

There are several ways to establish a connection to the database. You can either set up environment variables or pass named parameters to the SingleStoreDB constructor. Alternatively, you may provide these parameters to the from_documents and from_texts methods.

# Setup connection url as environment variable
os.environ["SINGLESTOREDB_URL"] = "root:pass@localhost:3306/db"

# Load documents to the store
docsearch = SingleStoreDB.from_documents(
    docs,
    embeddings,
    table_name="notebook",  # use table with a custom name
)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)  # Find documents that correspond to the query
print(docs[0].page_content)
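As an alternative to the environment variable, the connection details can be passed directly; a hedged sketch, assuming the connector accepts the connection URL via the host named parameter:

docsearch = SingleStoreDB.from_documents(
    docs,
    embeddings,
    host="root:pass@localhost:3306/db",  # same credentials as above, passed directly (assumption)
    table_name="notebook",
)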
Vald
Vald is a highly scalable distributed fast approximate nearest neighbor (ANN) dense vector search engine.

This notebook shows how to use functionality related to the Vald database. To run this notebook you need a running Vald cluster.
Check Get Started for more information. See the installation instructions.

pip install vald-client-python

Basic Example

from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Vald

raw_documents = TextLoader('state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
embeddings = HuggingFaceEmbeddings()
db = Vald.from_documents(documents, embeddings, host="localhost", port=8080)

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
docs[0].page_content

Similarity search by vector

embedding_vector = embeddings.embed_query(query)
docs = db.similarity_search_by_vector(embedding_vector)
docs[0].page_content

Similarity search with score

docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]

Maximal Marginal Relevance Search (MMR)

In addition to using similarity search in the retriever object, you can also use mmr as retriever.

retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)

Or use max_marginal_relevance_search directly:

db.max_marginal_relevance_search(query, k=2, fetch_k=10)
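The two retriever options above can also be combined; a minimal sketch, assuming the standard search_kwargs pass-through supported by LangChain retrievers:

retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 2, "fetch_k": 10})
retriever.get_relevant_documents(query)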
LLMRails
LLMRails is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by LLMRails and is optimized for performance and accuracy.
See the LLMRails API documentation for more information on how to use the API. This notebook shows how to use functionality related to LLMRails's integration with langchain.
See the LLMRails API documentation for more information on how to use the API.This notebook shows how to use functionality related to the LLMRails's integration with langchain. |
Note that unlike many other integrations in this category, LLMRails provides an end-to-end managed service for retrieval-augmented generation, which includes:

- A way to extract text from document files and chunk them into sentences.
- Its own embeddings model and vector store: each text segment is encoded into a vector embedding and stored in the LLMRails internal vector store.
- A query service that automatically encodes the query into an embedding and retrieves the most relevant text segments (including support for hybrid search).

All of these are supported in this LangChain integration.

## Setup

You will need an LLMRails account to use LLMRails with LangChain. To get started, use the following steps:

1. Sign up for an LLMRails account if you don't already have one.
2. Create API keys to access the API. Click on the "API Keys" tab in the corpus view and then the "Create API Key" button. Give your key a name. Click "Create key" and you now have an active API key. Keep this key confidential.

To use LangChain with LLMRails, you'll need this value: `api_key`.
You can provide these to LangChain in two ways:

1. Include these two variables in your environment: `LLM_RAILS_API_KEY` and `LLM_RAILS_DATASTORE_ID`. For example, you can set them using `os.environ` and `getpass` as follows:

```python
import os
import getpass

os.environ["LLM_RAILS_API_KEY"] = getpass.getpass("LLMRails API Key:")
os.environ["LLM_RAILS_DATASTORE_ID"] = getpass.getpass("LLMRails Datastore Id:")
```

2. Provide them as arguments when creating the LLMRails vectorstore object:

```python
vectorstore = LLMRails(
    api_key=llm_rails_api_key,
    datastore_id=datastore_id,
)
```

## Adding text

To add text to your datastore, first go to the Datastores page and create one. Click the "Create Datastore" button and choose a name and embedding model for your datastore. Then get your datastore id from the newly created datastore's settings.

```python
import os

from langchain.vectorstores import LLMRails

os.environ["LLM_RAILS_DATASTORE_ID"] = "Your datastore id"
os.environ["LLM_RAILS_API_KEY"] = "Your API Key"

llm_rails = LLMRails.from_texts(["Your text here"])
```

## Similarity search

The simplest scenario for using LLMRails is to perform a similarity search.

```python
query = "What do you plan to do about national security?"
found_docs = llm_rails.similarity_search(query, k=5)
print(found_docs[0].page_content)
```

```
Others may not be democratic but nevertheless depend upon a rules-based international system. Yet what we share in common, and the prospect of a freer and more open world, makes such a broad coalition necessary and worthwhile. We will listen to and consider ideas that our partners suggest about how to do this. Building this inclusive coalition requires reinforcing the multilateral system to uphold the founding principles of the United Nations, including respect for international law. 141 countries expressed support at the United Nations General Assembly for a resolution condemning Russia’s unprovoked aggression against Ukraine. We continue to demonstrate this approach by engaging all regions across all issues, not in terms of what we are against but what we are for. This year, we partnered with ASEAN to advance clean energy infrastructure and maritime security in the region. We kickstarted the Prosper Africa Build Together Campaign to fuel economic growth across the continent and bolster trade and investment in the clean energy, health, and digital technology sectors. We are working to develop a partnership with countries on the Atlantic Ocean to establish and carry out a shared approach to advancing our joint development, economic, environmental, scientific, and maritime governance goals. We galvanized regional action to address the core challenges facing the Western Hemisphere by spearheading the Americas Partnership for Economic Prosperity to drive economic recovery and by mobilizing the region behind a bold and unprecedented approach to migration through the Los Angeles Declaration on Migration and Protection. In the Middle East, we have worked to enhance deterrence toward Iran, de-escalate regional conflicts, deepen integration among a diverse set of partners in the region, and bolster energy stability. A prime example of an inclusive coalition is IPEF, which we launched alongside a dozen regional partners that represent 40 percent of the world’s GDP.
```
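The store above was created with `from_texts`. If you later want to push more text into the same datastore, here is a minimal sketch, assuming LLMRails implements the generic LangChain VectorStore `add_texts` method (the method name is the generic LangChain one, not confirmed by this page):

```python
# Hypothetical follow-up: index two more snippets in the same datastore.
# LLMRails handles embedding and indexing server-side.
llm_rails.add_texts(
    [
        "Another text to index.",
        "And one more.",
    ]
)
```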
## Similarity search with score

Sometimes we might want to perform the search but also obtain a relevancy score, to know how good a particular result is.

```python
query = "What is your approach to national defense"
found_docs = llm_rails.similarity_search_with_score(query, k=5)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")
```

```
But we will do so as the last resort and only when the objectives and mission are clear and achievable, consistent with our values and laws, alongside non-military tools, and the mission is undertaken with the informed consent of the American people. Our approach to national defense is described in detail in the 2022 National Defense Strategy. Our starting premise is that a powerful U.S. military helps advance and safeguard vital U.S. national interests by backstopping diplomacy, confronting aggression, deterring conflict, projecting strength, and protecting the American people and their economic interests. Amid intensifying competition, the military’s role is to maintain and gain warfighting advantages while limiting those of our competitors. The military will act urgently to sustain and strengthen deterrence, with the PRC as its pacing challenge. We will make disciplined choices regarding our national defense and focus our attention on the military’s primary responsibilities: to defend the homeland, and deter attacks and aggression against the United States, our allies and partners, while being prepared to fight and win the Nation’s wars should diplomacy and deterrence fail. To do so, we will combine our strengths to achieve maximum effect in deterring acts of aggression—an approach we refer to as integrated deterrence (see text box on page 22). We will operate our military using a campaigning mindset—sequencing logically linked military activities to advance strategy-aligned priorities. And, we will build a resilient force and defense ecosystem to ensure we can perform these functions for decades to come. We ended America’s longest war in Afghanistan, and with it an era of major military operations to remake other societies, even as we have maintained the capacity to address terrorist threats to the American people as they emerge. 20 NATIONAL SECURITY STRATEGY Page 21 A combat-credible military is the foundation of deterrence and America’s ability to prevail in conflict.

Score: 0.5040982687179959
```
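If you only want results above some relevance bar, you can post-filter on the returned score in plain Python. A small sketch (the 0.5 cutoff is an arbitrary illustration, not a recommended value):

```python
# Keep only hits whose relevancy score clears an illustrative threshold.
MIN_SCORE = 0.5
strong_hits = [doc for doc, score in found_docs if score >= MIN_SCORE]
print(f"{len(strong_hits)} of {len(found_docs)} results scored at least {MIN_SCORE}")
```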
## LLMRails as a Retriever

LLMRails, like all the other LangChain vectorstores, is most often used as a LangChain Retriever:

```python
retriever = llm_rails.as_retriever()
retriever
```

```
LLMRailsRetriever(tags=None, metadata=None, vectorstore=<langchain.vectorstores.llm_rails.LLMRails object at 0x107b9c040>, search_type='similarity', search_kwargs={'k': 5})
```

```python
query = "What is your approach to national defense"
retriever.get_relevant_documents(query)[0]
```

```
Document(page_content='But we will do so as the last resort and only when the objectives and mission are clear and achievable, consistent with our values and laws, alongside non-military tools, and the mission is undertaken with the informed consent of the American people.\n\nOur approach to national defense is described in detail in the 2022 National Defense Strategy.\n\nOur starting premise is that a powerful U.S. military helps advance and safeguard vital U.S. national interests by backstopping diplomacy, confronting aggression, deterring conflict, projecting strength, and protecting the American people and their economic interests.\n\nAmid intensifying competition, the military’s role is to maintain and gain warfighting advantages while limiting those of our competitors.\n\nThe military will act urgently to sustain and strengthen deterrence, with the PRC as its pacing challenge.\n\nWe will make disciplined choices regarding our national defense and focus our attention on the military’s primary responsibilities: to defend the homeland, and deter attacks and aggression against the United States, our allies and partners, while being prepared to fight and win the Nation’s wars should diplomacy and deterrence fail.\n\nTo do so, we will combine our strengths to achieve maximum effect in deterring acts of aggression—an approach we refer to as integrated deterrence (see text box on page 22).\n\nWe will operate our military using a campaigning mindset—sequencing logically linked military activities to advance strategy-aligned priorities.\n\nAnd, we will build a resilient force and defense ecosystem to ensure we can perform these functions for decades to come.\n\nWe ended America’s longest war in Afghanistan, and with it an era of major military operations to remake other societies, even as we have maintained the capacity to address terrorist threats to the American people as they emerge.\n\n20 NATIONAL SECURITY STRATEGY Page 21 \x90\x90\x90\x90\x90\x90\n\nA combat-credible military is the foundation of deterrence and America’s ability to prevail in conflict.', metadata={'type': 'file', 'url': 'https://cdn.llmrails.com/dst_d94b490c-4638-4247-ad5e-9aa0e7ef53c1/c2d63a2ea3cd406cb522f8312bc1535d', 'name': 'Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf'})
```
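The retriever's defaults can be adjusted at construction time. A brief sketch, assuming the usual LangChain `search_kwargs` plumbing visible in the repr above (where `search_kwargs={'k': 5}` is the default):

```python
# Ask the retriever for only the top 2 segments instead of the default 5.
top2_retriever = llm_rails.as_retriever(search_kwargs={"k": 2})
docs = top2_retriever.get_relevant_documents(query)
print(len(docs))
```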
# Elasticsearch

Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.

This notebook shows how to use functionality related to the Elasticsearch database.

```bash
pip install elasticsearch openai tiktoken langchain
```

## Running and connecting to Elasticsearch

There are two main ways to set up an Elasticsearch instance for use with LangChain:

1. Elastic Cloud: Elastic Cloud is a managed Elasticsearch service. Sign up for a free trial. To connect to an Elasticsearch instance that does not require login credentials (for example, a Docker instance started with security disabled), pass the Elasticsearch URL and index name along with the embedding object to the constructor.
2. Local Install Elasticsearch: Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the Elasticsearch Docker documentation for more information.

### Running Elasticsearch via Docker

Example: Run a single-node Elasticsearch instance with security disabled. This is not recommended for production use.

```bash
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```

Once the Elasticsearch instance is running, you can connect to it by passing the Elasticsearch URL and index name along with the embedding object to the constructor.

Example:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.elasticsearch import ElasticsearchStore

embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
)
```

### Authentication

For production, we recommend you run with security enabled. To connect with login credentials, you can use the parameters `api_key` or `es_user` and `es_password`.

Example:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticsearchStore

embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
    es_user="elastic",
    es_password="changeme",
)
```

#### How to obtain a password for the default "elastic" user?

To obtain your Elastic Cloud password for the default "elastic" user:

1. Log in to the Elastic Cloud console at https://cloud.elastic.co
2. Go to "Security" > "Users"
3. Locate the "elastic" user and click "Edit"
4. Click "Reset password"
1,981 | "Reset password"Follow the prompts to reset the passwordHow to obtain an API key?‚ÄãTo obtain an API key:Log in to the Elastic Cloud console at https://cloud.elastic.coOpen Kibana and go to Stack Management > API KeysClick "Create API key"Enter a name for the API key and click "Create"Copy the API key and paste it into the api_key parameterElastic Cloud‚ÄãTo connect to an Elasticsearch instance on Elastic Cloud, you can use either the es_cloud_id parameter or es_url.Example: from langchain.vectorstores.elasticsearch import ElasticsearchStore from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticsearchStore( es_cloud_id="<cloud_id>", index_name="test_index", embedding=embedding, es_user="elastic", es_password="changeme" )We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Basic Example‚ÄãThis example we are going to load "state_of_the_union.txt" via the TextLoader, chunk the text into 500 word chunks, and then index each chunk into Elasticsearch.Once the data is indexed, we perform a simple query to find the top 4 chunks that similar to the query "What did the president say about Ketanji Brown Jackson".Elasticsearch is running locally on localhost:9200 with docker. For more details on how to connect to Elasticsearch from Elastic Cloud, see connecting with authentication above.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import ElasticsearchStorefrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db | Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. | Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. 
->: "Reset password"Follow the prompts to reset the passwordHow to obtain an API key?‚ÄãTo obtain an API key:Log in to the Elastic Cloud console at https://cloud.elastic.coOpen Kibana and go to Stack Management > API KeysClick "Create API key"Enter a name for the API key and click "Create"Copy the API key and paste it into the api_key parameterElastic Cloud‚ÄãTo connect to an Elasticsearch instance on Elastic Cloud, you can use either the es_cloud_id parameter or es_url.Example: from langchain.vectorstores.elasticsearch import ElasticsearchStore from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticsearchStore( es_cloud_id="<cloud_id>", index_name="test_index", embedding=embedding, es_user="elastic", es_password="changeme" )We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Basic Example‚ÄãThis example we are going to load "state_of_the_union.txt" via the TextLoader, chunk the text into 500 word chunks, and then index each chunk into Elasticsearch.Once the data is indexed, we perform a simple query to find the top 4 chunks that similar to the query "What did the president say about Ketanji Brown Jackson".Elasticsearch is running locally on localhost:9200 with docker. For more details on how to connect to Elasticsearch from Elastic Cloud, see connecting with authentication above.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import ElasticsearchStorefrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db |
We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

```python
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```

## Basic Example

In this example we are going to load "state_of_the_union.txt" via the TextLoader, chunk the text into 500-character chunks, and then index each chunk into Elasticsearch. Once the data is indexed, we perform a simple query to find the top 4 chunks that are similar to the query "What did the president say about Ketanji Brown Jackson".

Elasticsearch is running locally on localhost:9200 with Docker. For more details on how to connect to Elasticsearch from Elastic Cloud, see connecting with authentication above.

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticsearchStore

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test-basic",
)

db.client.indices.refresh(index="test-basic")

query = "What did the president say about Ketanji Brown Jackson"
results = db.similarity_search(query)
print(results)
```

```
[Document(page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt'}), Document(page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'date': '2016-01-01', 'rating': 2, 'author': 'John Doe'}), Document(page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../modules/state_of_the_union.txt', 'date': '2010-01-01', 'rating': 1, 'author': 'John Doe'}), Document(page_content='As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.', metadata={'source': '../../modules/state_of_the_union.txt'})]
```
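ElasticsearchStore also inherits the generic with-score variant from the base VectorStore interface, if you want a relevance score alongside each hit. A quick sketch (the method name is the generic LangChain one):

```python
# Each result comes back as a (Document, score) pair.
docs_and_scores = db.similarity_search_with_score(query, k=2)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])
```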
## Metadata

ElasticsearchStore supports metadata stored along with the document. This metadata dict object is stored in a metadata object field in the Elasticsearch document. Based on the metadata value, Elasticsearch will automatically set up the mapping by inferring the data type of the metadata value. For example, if the metadata value is a string, Elasticsearch will set up the mapping for the metadata object field as a string type.

```python
# Adding metadata to documents
for i, doc in enumerate(docs):
    doc.metadata["date"] = f"{range(2010, 2020)[i % 10]}-01-01"
    doc.metadata["rating"] = range(1, 6)[i % 5]
    doc.metadata["author"] = ["John Doe", "Jane Doe"][i % 2]

db = ElasticsearchStore.from_documents(
    docs, embeddings, es_url="http://localhost:9200", index_name="test-metadata"
)

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].metadata)
```

```
{'source': '../../modules/state_of_the_union.txt', 'date': '2016-01-01', 'rating': 2, 'author': 'John Doe'}
```

### Filtering Metadata

With metadata added to the documents, you can add metadata filtering at query time.

#### Example: Filter by keyword

```python
docs = db.similarity_search(query, filter=[{"match": {"metadata.author": "John Doe"}}])
print(docs[0].metadata)
```

```
{'source': '../../modules/state_of_the_union.txt', 'date': '2010-01-01', 'rating': 1, 'author': 'John Doe', 'geo_location': {'lat': 40.12, 'lon': -71.34}}
```

#### Example: Filter by Date Range

```python
docs = db.similarity_search(
    "Any mention about Fred?",
    filter=[{"range": {"metadata.date": {"gte": "2010-01-01"}}}],
)
print(docs[0].metadata)
```

```
{'source': '../../modules/state_of_the_union.txt', 'date': '2012-01-01', 'rating': 3, 'author': 'John Doe', 'geo_location': {'lat': 40.12, 'lon': -71.34}}
```
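Several clauses can also be supplied in one call. A minimal sketch, assuming (as the examples here suggest) that every entry in the `filter` list is applied together as a bool filter:

```python
# Illustrative only: require both an author match and a minimum date.
docs = db.similarity_search(
    query,
    filter=[
        {"match": {"metadata.author": "John Doe"}},
        {"range": {"metadata.date": {"gte": "2010-01-01"}}},
    ],
)
print(docs[0].metadata)
```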
#### Example: Filter by Numeric Range

```python
docs = db.similarity_search(
    "Any mention about Fred?",
    filter=[{"range": {"metadata.rating": {"gte": 2}}}],
)
print(docs[0].metadata)
```

```
{'source': '../../modules/state_of_the_union.txt', 'date': '2012-01-01', 'rating': 3, 'author': 'John Doe', 'geo_location': {'lat': 40.12, 'lon': -71.34}}
```

#### Example: Filter by Geo Distance

Requires an index with a geo_point mapping to be declared for metadata.geo_location.

```python
docs = db.similarity_search(
    "Any mention about Fred?",
    filter=[
        {
            "geo_distance": {
                "distance": "200km",
                "metadata.geo_location": {"lat": 40, "lon": -70},
            }
        }
    ],
)
print(docs[0].metadata)
```

Filter supports many more types of queries than the examples above. Read more about them in the documentation.

## Distance Similarity Algorithm

Elasticsearch supports the following vector distance similarity algorithms:

- cosine
- euclidean
- dot_product

The cosine similarity algorithm is the default. You can specify the similarity algorithm needed via the similarity parameter.
NOTE: Depending on the retrieval strategy, the similarity algorithm cannot be changed at query time. It needs to be set when creating the index mapping for the field. If you need to change the similarity algorithm, you need to delete the index and recreate it with the correct distance_strategy.

```python
db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test",
    distance_strategy="COSINE",
    # distance_strategy="EUCLIDEAN_DISTANCE"
    # distance_strategy="DOT_PRODUCT"
)
```

## Retrieval Strategies

Elasticsearch has big advantages over other vector-only databases thanks to its ability to support a wide range of retrieval strategies. In this notebook we will configure ElasticsearchStore to support some of the most common ones. By default, ElasticsearchStore uses the ApproxRetrievalStrategy.

### ApproxRetrievalStrategy

This will return the top k most similar vectors to the query vector. The k parameter is set when the ElasticsearchStore is initialized. The default value is 10.

```python
db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(),
)

docs = db.similarity_search(
    query="What did the president say about Ketanji Brown Jackson?", k=10
)
```

#### Example: Approx with hybrid

This example will show how to configure ElasticsearchStore to perform a hybrid retrieval, using a combination of approximate semantic search and keyword-based search. We use RRF to balance the two scores from the different retrieval methods.

To enable hybrid retrieval, we need to set hybrid=True in the ElasticsearchStore ApproxRetrievalStrategy constructor.

```python
db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(
        hybrid=True,
    ),
)
```
When hybrid is enabled, the query performed will be a combination of approximate semantic search and keyword-based search. It will use rrf (Reciprocal Rank Fusion) to balance the two scores from the different retrieval methods.

Note: RRF requires Elasticsearch 8.9.0 or above.

```json
{
    "knn": {
        "field": "vector",
        "filter": [],
        "k": 1,
        "num_candidates": 50,
        "query_vector": [1.0, ..., 0.0]
    },
    "query": {
        "bool": {
            "filter": [],
            "must": [{"match": {"text": {"query": "foo"}}}]
        }
    },
    "rank": {"rrf": {}}
}
```

#### Example: Approx with Embedding Model in Elasticsearch

This example will show how to configure ElasticsearchStore to use an embedding model deployed in Elasticsearch for approximate retrieval. To use this, specify the model_id in the ElasticsearchStore ApproxRetrievalStrategy constructor via the query_model_id argument.

NOTE: This requires the model to be deployed and running on an Elasticsearch ml node. See the notebook example on how to deploy the model with eland.

```python
APPROX_SELF_DEPLOYED_INDEX_NAME = "test-approx-self-deployed"

# Note: This does not have an embedding function specified.
# Instead, we will use the embedding model deployed in Elasticsearch.
db = ElasticsearchStore(
    es_cloud_id="<your cloud id>",
    es_user="elastic",
    es_password="<your password>",
    index_name=APPROX_SELF_DEPLOYED_INDEX_NAME,
    query_field="text_field",
    vector_query_field="vector_query_field.predicted_value",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(
        query_model_id="sentence-transformers__all-minilm-l6-v2"
    ),
)

# Set up an ingest pipeline to perform the embedding
# of the text field
db.client.ingest.put_pipeline(
    id="test_pipeline",
    processors=[
        {
            "inference": {
                "model_id": "sentence-transformers__all-minilm-l6-v2",
                "field_map": {"query_field": "text_field"},
                "target_field": "vector_query_field",
            }
        }
    ],
)
```
1,987 | a new index with the pipeline,# not relying on langchain to create the indexdb.client.indices.create( index=APPROX_SELF_DEPLOYED_INDEX_NAME, mappings={ "properties": { "text_field": {"type": "text"}, "vector_query_field": { "properties": { "predicted_value": { "type": "dense_vector", "dims": 384, "index": True, "similarity": "l2_norm", } } }, } }, settings={"index": {"default_pipeline": "test_pipeline"}},)db.from_texts(["hello world"], es_cloud_id="<cloud id>", es_user="elastic", es_password="<cloud password>", index_name=APPROX_SELF_DEPLOYED_INDEX_NAME, query_field="text_field", vector_query_field="vector_query_field.predicted_value", strategy=ElasticsearchStore.ApproxRetrievalStrategy( query_model_id="sentence-transformers__all-minilm-l6-v2" ))# Perform searchdb.similarity_search("hello world", k=10)SparseVectorRetrievalStrategy (ELSER)‚ÄãThis strategy uses Elasticsearch's sparse vector retrieval to retrieve the top-k results. We only support our own "ELSER" embedding model for now.NOTE This requires the ELSER model to be deployed and running in Elasticsearch ml node. To use this, specify SparseVectorRetrievalStrategy in ElasticsearchStore constructor.# Note that this example doesn't have an embedding function. This is because we infer the tokens at index time and at query time within Elasticsearch. # This requires the ELSER model to be loaded and running in Elasticsearch.db = ElasticsearchStore.from_documents( docs, es_cloud_id="My_deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmVzLmlvOjQ0MyQ2OGJhMjhmNDc1M2Y0MWVjYTk2NzI2ZWNkMmE5YzRkNyQ3NWI4ODRjNWQ2OTU0MTYzODFjOTkxNmQ1YzYxMGI1Mw==", es_user="elastic", es_password="GgUPiWKwEzgHIYdHdgPk1Lwi", index_name="test-elser", | Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. | Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library. ->: a new index with the pipeline,# not relying on langchain to create the indexdb.client.indices.create( index=APPROX_SELF_DEPLOYED_INDEX_NAME, mappings={ "properties": { "text_field": {"type": "text"}, "vector_query_field": { "properties": { "predicted_value": { "type": "dense_vector", "dims": 384, "index": True, "similarity": "l2_norm", } } }, } }, settings={"index": {"default_pipeline": "test_pipeline"}},)db.from_texts(["hello world"], es_cloud_id="<cloud id>", es_user="elastic", es_password="<cloud password>", index_name=APPROX_SELF_DEPLOYED_INDEX_NAME, query_field="text_field", vector_query_field="vector_query_field.predicted_value", strategy=ElasticsearchStore.ApproxRetrievalStrategy( query_model_id="sentence-transformers__all-minilm-l6-v2" ))# Perform searchdb.similarity_search("hello world", k=10)SparseVectorRetrievalStrategy (ELSER)‚ÄãThis strategy uses Elasticsearch's sparse vector retrieval to retrieve the top-k results. We only support our own "ELSER" embedding model for now.NOTE This requires the ELSER model to be deployed and running in Elasticsearch ml node. To use this, specify SparseVectorRetrievalStrategy in ElasticsearchStore constructor.# Note that this example doesn't have an embedding function. This is because we infer the tokens at index time and at query time within Elasticsearch. 
SparseVectorRetrievalStrategy (ELSER)

This strategy uses Elasticsearch's sparse vector retrieval to retrieve the top-k results. Only Elastic's own "ELSER" embedding model is supported for now.

NOTE: This requires the ELSER model to be deployed and running on an Elasticsearch ML node. To use it, specify SparseVectorRetrievalStrategy in the ElasticsearchStore constructor.

# Note that this example doesn't have an embedding function. This is because
# the tokens are inferred at index time and at query time within Elasticsearch.
# This requires the ELSER model to be loaded and running in Elasticsearch.
db = ElasticsearchStore.from_documents(
    docs,
    es_cloud_id="My_deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmVzLmlvOjQ0MyQ2OGJhMjhmNDc1M2Y0MWVjYTk2NzI2ZWNkMmE5YzRkNyQ3NWI4ODRjNWQ2OTU0MTYzODFjOTkxNmQ1YzYxMGI1Mw==",
    es_user="elastic",
    es_password="GgUPiWKwEzgHIYdHdgPk1Lwi",
    index_name="test-elser",
    strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(),
)

db.client.indices.refresh(index="test-elser")

results = db.similarity_search(
    "What did the president say about Ketanji Brown Jackson", k=4
)
print(results[0])

    page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.' metadata={'source': '../../modules/state_of_the_union.txt'}

ExactRetrievalStrategy

This strategy uses Elasticsearch's exact retrieval (also known as brute force) to retrieve the top-k results. To use it, specify ExactRetrievalStrategy in the ElasticsearchStore constructor.

db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test",
    strategy=ElasticsearchStore.ExactRetrievalStrategy(),
)
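Whichever strategy you pick, the resulting store plugs into LangChain's standard retriever interface. A minimal sketch, assuming db was created as above:

# Use the vector store as a retriever, e.g. inside a RetrievalQA chain.
retriever = db.as_retriever(search_kwargs={"k": 4})
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)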
Customise the Query

With the custom_query parameter at search time, you can adjust the query that is used to retrieve documents from Elasticsearch. This is useful if you want to use a more complex query, for example to support linear boosting of fields.

# Example of a custom query that just does a BM25 search on the text field.
def custom_query(query_body: dict, query: str):
    """Custom query to be used in Elasticsearch.

    Args:
        query_body (dict): Elasticsearch query body.
        query (str): Query string.

    Returns:
        dict: Elasticsearch query body.
    """
    print("Query Retriever created by the retrieval strategy:")
    print(query_body)
    print()
    new_query_body = {"query": {"match": {"text": query}}}
    print("Query that's actually used in Elasticsearch:")
    print(new_query_body)
    print()
    return new_query_body
results = db.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    k=4,
    custom_query=custom_query,
)
print("Results:")
print(results[0])

    Query Retriever created by the retrieval strategy:
    {'query': {'bool': {'must': [{'text_expansion': {'vector.tokens': {'model_id': '.elser_model_1', 'model_text': 'What did the president say about Ketanji Brown Jackson'}}}], 'filter': []}}}

    Query that's actually used in Elasticsearch:
    {'query': {'match': {'text': 'What did the president say about Ketanji Brown Jackson'}}}

    Results:
    page_content='One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.' metadata={'source': '../../modules/state_of_the_union.txt'}
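The same hook can implement the linear boosting of fields mentioned above. The sketch below is illustrative only: the "title" field is a hypothetical example and is not present in the index built earlier.

# Illustrative sketch: boost matches on a hypothetical "title" field over the body text.
def boosted_query(query_body: dict, query: str) -> dict:
    return {
        "query": {
            "multi_match": {
                "query": query,
                "fields": ["title^3", "text"],  # weight title 3x over text
            }
        }
    }

results = db.similarity_search("hello world", k=4, custom_query=boosted_query)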
FAQ

Question: I'm getting timeout errors when indexing documents into Elasticsearch. How do I fix this?

One possible issue is that your documents are taking too long to index into Elasticsearch. ElasticsearchStore uses the Elasticsearch bulk API, which has a few defaults that you can adjust to reduce the chance of timeout errors. This is also a good idea when you're using SparseVectorRetrievalStrategy.

The defaults are:

    chunk_size: 500
    max_chunk_bytes: 100MB

To adjust these, pass the chunk_size and max_chunk_bytes parameters to the ElasticsearchStore add_texts method:

vector_store.add_texts(
    texts,
    bulk_kwargs={
        "chunk_size": 50,
        "max_chunk_bytes": 200000000,  # 200MB
    },
)
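If timeouts persist even with smaller chunks, another lever (not covered in the original docs, so treat this as a hedged sketch) is to hand ElasticsearchStore a pre-configured client with a longer per-request timeout via the es_connection parameter:

from elasticsearch import Elasticsearch

# Assumed tuning: a client with a longer per-request timeout than the default.
es_client = Elasticsearch(
    "http://localhost:9200",
    request_timeout=60,
)
vector_store = ElasticsearchStore(
    es_connection=es_client,
    index_name="test",
    embedding=embeddings,
)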
Upgrading to ElasticsearchStore

If you're already using Elasticsearch in your LangChain-based project, you may be using the old implementations ElasticVectorSearch and ElasticKNNSearch, which are now deprecated. We've introduced a new implementation called ElasticsearchStore, which is more flexible and easier to use. This notebook will guide you through the process of upgrading to the new implementation.

What's new?

The new implementation is now one class called ElasticsearchStore, which can be used for approx, exact, and ELSER search retrieval, via strategies.

I'm using ElasticKNNSearch

Old implementation:

from langchain.vectorstores.elastic_vector_search import ElasticKNNSearch

db = ElasticKNNSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
)

New implementation:

from langchain.vectorstores.elasticsearch import ElasticsearchStore

db = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
    # if you use the model_id
    # strategy=ElasticsearchStore.ApproxRetrievalStrategy(query_model_id="test_model")
    # if you use hybrid search
    # strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True)
)

I'm using ElasticVectorSearch

Old implementation:

from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch

db = ElasticVectorSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
)

New implementation:

from langchain.vectorstores.elasticsearch import ElasticsearchStore

db = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
    strategy=ElasticsearchStore.ExactRetrievalStrategy(),
)

db.client.indices.delete(
    index='test-metadata, test-elser, test-basic',
    ignore_unavailable=True,
    allow_no_indices=True,
)

    ObjectApiResponse({'acknowledged': True})
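As a recap of the examples above (a summary, not additional API surface), the strategy argument is the main thing that changes between the three retrieval modes:

# Approximate kNN, optionally hybrid or backed by a deployed query model:
ElasticsearchStore.ApproxRetrievalStrategy()

# Brute-force scoring over all vectors:
ElasticsearchStore.ExactRetrievalStrategy()

# ELSER sparse vectors, inferred inside Elasticsearch:
ElasticsearchStore.SparseVectorRetrievalStrategy()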
MongoDB Atlas
MongoDB Atlas is a fully-managed cloud database available in AWS, Azure, and GCP. It now has support for native Vector Search on your MongoDB document data.

This notebook shows how to use MongoDB Atlas Vector Search to store your embeddings in MongoDB documents, create a vector search index, and perform KNN search with an approximate nearest neighbor algorithm. It uses the knnBeta Operator available in MongoDB Atlas Search.
This feature is in Public Preview and available for evaluation purposes, to validate functionality, and to gather feedback from public preview users. It is not recommended for production deployments as we may introduce breaking changes.

To use MongoDB Atlas, you must first deploy a cluster. We have a Forever-Free tier of clusters available.
To get started, head over to Atlas here: quick start.

pip install pymongo

import os
import getpass

MONGODB_ATLAS_CLUSTER_URI = getpass.getpass("MongoDB Atlas Cluster URI:")

We want to use OpenAIEmbeddings, so we need to set up our OpenAI API key.

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

Now, let's create a vector search index on your cluster. In the example below, embedding is the name of the field that contains the embedding vector. Please refer to the documentation to get more details on how to define an Atlas Vector Search index.
You can name the index langchain_demo and create the index on the namespace langchain_db.langchain_col.
Finally, write the following definition in the JSON editor on MongoDB Atlas:

{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "dimensions": 1536,
        "similarity": "cosine",
        "type": "knnVector"
      }
    }
  }
}

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

from pymongo import MongoClient

# initialize MongoDB python client
client = MongoClient(MONGODB_ATLAS_CLUSTER_URI)

db_name = "langchain_db"
collection_name = "langchain_col"
collection = client[db_name][collection_name]
index_name = "langchain_demo"

# insert the documents in MongoDB Atlas with their embedding
docsearch = MongoDBAtlasVectorSearch.from_documents(
    docs, embeddings, collection=collection, index_name=index_name
)

# perform a similarity search between the embedding of the query and the embeddings of the documents
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

print(docs[0].page_content)

You can also instantiate the vector store directly and execute a query as follows:

# initialize vector store
vectorstore = MongoDBAtlasVectorSearch(
    collection, OpenAIEmbeddings(), index_name=index_name
)

# perform a similarity search between a query and the ingested documents
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)

print(docs[0].page_content)
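If you also want relevance scores alongside the documents, the vector store exposes a scored variant of the search; a brief sketch reusing the query above:

# Return (document, score) pairs instead of bare documents.
docs_and_scores = vectorstore.similarity_search_with_score(query)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])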
Azure Cognitive Search
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.

Vector search is currently in public preview. It's available through the Azure portal, the preview REST API, and beta client libraries. Beta client libraries are subject to potential breaking changes, so please be sure to use the SDK package version identified below: azure-search-documents==11.4.0b8

Install the Azure Cognitive Search SDK

pip install azure-search-documents==11.4.0b8
pip install azure-identity

Import required libraries

import openai
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

Configure OpenAI settings

Configure the OpenAI settings to use Azure OpenAI or OpenAI:
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "YOUR_OPENAI_ENDPOINT"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
model: str = "text-embedding-ada-002"

Configure vector store settings

Set up the vector store settings using environment variables:

vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"

Create embeddings and vector store instances

Create instances of the OpenAIEmbeddings and AzureSearch classes:

embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

Insert text and embeddings into vector store

Add texts and metadata from the JSON data to the vector store:

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

loader = TextLoader("../../modules/state_of_the_union.txt", encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

vector_store.add_documents(documents=docs)

Perform a vector similarity search

Execute a pure vector similarity search using the similarity_search() method:

# Perform a similarity search
docs = vector_store.similarity_search(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
    search_type="similarity",
)
print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen
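AzureSearch can also combine the vector score with BM25 full-text ranking. A short sketch along the lines of the call above; the switch is search_type="hybrid":

# Perform a hybrid (vector + full-text) search.
docs = vector_store.similarity_search(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
    search_type="hybrid",
)
print(docs[0].page_content)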