Dataset columns:
- Unnamed: 0 — int64, 0 to 4.66k
- page content — string, lengths 23 to 2k
- description — string, lengths 8 to 925
- output — string, lengths 38 to 2.93k
1,200
loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt

The file example-non-utf8.txt uses a different encoding, so the load() function fails with a helpful message indicating which file failed to decode. With TextLoader's default behavior, a failure to load any one of the documents fails the whole loading process, and no documents are loaded.

B. Silent fail

We can pass the parameter silent_errors to the DirectoryLoader to skip the files that could not be loaded and continue the load process.

loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader, silent_errors=True)
docs = loader.load()

    Error loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt

doc_sources = [doc.metadata['source'] for doc in docs]
doc_sources

    ['../../../../../tests/integration_tests/examples/whatsapp_chat.txt',
     '../../../../../tests/integration_tests/examples/example-utf8.txt']

C. Auto-detect encodings

We can also ask TextLoader to auto-detect the file encoding before failing, by passing autodetect_encoding to the loader class.

text_loader_kwargs = {'autodetect_encoding': True}
loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)
docs = loader.load()
doc_sources = [doc.metadata['source'] for doc in docs]
doc_sources

    ['../../../../../tests/integration_tests/examples/example-non-utf8.txt',
     '../../../../../tests/integration_tests/examples/whatsapp_chat.txt',
     '../../../../../tests/integration_tests/examples/example-utf8.txt']
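For reference, a minimal consolidated sketch of the two fallback strategies above. The directory path is an assumption (point it at any folder with .txt files, some non-UTF-8); the loader arguments come straight from the snippets in this excerpt.

```python
from langchain.document_loaders import DirectoryLoader, TextLoader

# Assumed path; substitute a directory of your own .txt files.
path = "../../../../../tests/integration_tests/examples/"

# Option 1: skip files that fail to load and keep going.
lenient_loader = DirectoryLoader(
    path,
    glob="**/*.txt",
    loader_cls=TextLoader,
    silent_errors=True,
)

# Option 2: let TextLoader try to detect each file's encoding before failing.
detecting_loader = DirectoryLoader(
    path,
    glob="**/*.txt",
    loader_cls=TextLoader,
    loader_kwargs={"autodetect_encoding": True},
)

for name, loader in [("silent_errors", lenient_loader), ("autodetect_encoding", detecting_loader)]:
    docs = loader.load()
    print(name, [doc.metadata["source"] for doc in docs])
```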
This covers how to load all documents in a directory.
1,201
Parent Document Retriever | 🦜️🔗 Langchain
When splitting documents for retrieval, there are often conflicting desires:
1,202
Parent Document Retriever

When splitting documents for retrieval, there are often conflicting desires:

- You may want to have small documents, so that their embeddings can most accurately reflect their meaning. If too long, then the embeddings can lose meaning.
- You want to have long enough documents that the context of each chunk is retained.

The ParentDocumentRetriever strikes that balance by splitting and storing small chunks of data. During retrieval, it first fetches the small chunks but then looks up the parent ids for those chunks and returns those larger documents.

Note that "parent document" refers to the document that a small chunk originated from. This can either be the whole raw document OR a larger
When splitting documents for retrieval, there are often conflicting desires:
1,203
chunk.

from langchain.retrievers import ParentDocumentRetriever
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
from langchain.document_loaders import TextLoader

loaders = [
    TextLoader('../../paul_graham_essay.txt'),
    TextLoader('../../state_of_the_union.txt'),
]
docs = []
for l in loaders:
    docs.extend(l.load())

Retrieving full documents

In this mode, we want to retrieve the full documents. Therefore, we only specify a child splitter.

# This text splitter is used to create the child documents
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(
    collection_name="full_documents", embedding_function=OpenAIEmbeddings()
)
# The storage layer for the parent documents
store = InMemoryStore()
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
)
retriever.add_documents(docs, ids=None)

This should yield two keys, because we added two documents.

list(store.yield_keys())

    ['05fe8d8a-bf60-4f87-b576-4351b23df266', '571cc9e5-9ef7-4f6c-b800-835c83a1858b']

Let's now call the vector store search functionality - we should see that it returns small chunks (since we're storing the small chunks).

sub_docs = vectorstore.similarity_search("justice breyer")
print(sub_docs[0].page_content)

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

Let's now retrieve from the overall retriever. This should return large documents - since it returns the
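For readability, here is the full-document mode above pulled together into one runnable sketch. The two file paths and an OpenAI API key in the environment are assumptions carried over from the docs.

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Load two source documents (paths are assumptions; substitute your own files).
loaders = [
    TextLoader("../../paul_graham_essay.txt"),
    TextLoader("../../state_of_the_union.txt"),
]
docs = []
for loader in loaders:
    docs.extend(loader.load())

# Only a child splitter: small chunks get embedded, full documents get returned.
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
vectorstore = Chroma(
    collection_name="full_documents", embedding_function=OpenAIEmbeddings()
)
store = InMemoryStore()  # holds the parent (full) documents

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
)
retriever.add_documents(docs, ids=None)

print(len(list(store.yield_keys())))        # 2: one key per parent document
sub_docs = vectorstore.similarity_search("justice breyer")
print(len(sub_docs[0].page_content))        # a small child chunk (~400 characters)
full_docs = retriever.get_relevant_documents("justice breyer")
print(len(full_docs[0].page_content))       # the whole source document
```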
When splitting documents for retrieval, there are often conflicting desires:
1,204
return large documents - since it returns the documents where the smaller chunks are located.

retrieved_docs = retriever.get_relevant_documents("justice breyer")
len(retrieved_docs[0].page_content)

    38539

Retrieving larger chunks

Sometimes the full documents are too big to retrieve as-is. In that case, what we really want to do is first split the raw documents into larger chunks, and then split those into smaller chunks. We then index the smaller chunks, but on retrieval we retrieve the larger chunks (though still not the full documents).

# This text splitter is used to create the parent documents
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
# This text splitter is used to create the child documents
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(collection_name="split_parents", embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryStore()
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
retriever.add_documents(docs)

We can see that there are many more than two documents now - these are the larger chunks.

len(list(store.yield_keys()))

    66

Let's make sure the underlying vector store still retrieves the small chunks.

sub_docs = vectorstore.similarity_search("justice breyer")
print(sub_docs[0].page_content)

    Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

retrieved_docs =
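A companion sketch for the larger-chunk mode described above; the file paths and API key are again assumptions, and the only change from the previous sketch is the extra parent_splitter.

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Assumed source files, as in the docs.
docs = []
for loader in [
    TextLoader("../../paul_graham_essay.txt"),
    TextLoader("../../state_of_the_union.txt"),
]:
    docs.extend(loader.load())

parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)  # larger parent chunks
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)    # small chunks to embed

vectorstore = Chroma(
    collection_name="split_parents", embedding_function=OpenAIEmbeddings()
)
store = InMemoryStore()

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
retriever.add_documents(docs)

# Many parent chunks now, not just two whole documents.
print(len(list(store.yield_keys())))
# Retrieval returns ~2000-character parent chunks rather than full documents.
retrieved = retriever.get_relevant_documents("justice breyer")
print(len(retrieved[0].page_content))
```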
When splitting documents for retrieval, there are often conflicting desires:
1,205
the United States Supreme Court.

retrieved_docs = retriever.get_relevant_documents("justice breyer")
len(retrieved_docs[0].page_content)

    1849

print(retrieved_docs[0].page_content)

    In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re
When splitting documents for retrieval, there are often conflicting desires:
1,206
have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
When splitting documents for retrieval, there are often conflicting desires:
1,207
Text embedding models | 🦜️🔗 Langchain
Head to Integrations for documentation on built-in integrations with text embedding model providers.
1,208
Text embedding models

info: Head to Integrations for documentation on built-in integrations with text embedding model providers.

The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc.) - this class is designed to provide a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.

The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).

Get started

Setup

To start we'll need to install the OpenAI Python package:

pip install openai

Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the openai_api_key named parameter when initializing the OpenAIEmbeddings class:

from langchain.embeddings import
Head to Integrations for documentation on built-in integrations with text embedding model providers.
1,209
OpenAIEmbeddings class:

from langchain.embeddings import OpenAIEmbeddings
embeddings_model = OpenAIEmbeddings(openai_api_key="...")

Otherwise you can initialize without any params:

from langchain.embeddings import OpenAIEmbeddings
embeddings_model = OpenAIEmbeddings()

embed_documents

Embed list of texts

embeddings = embeddings_model.embed_documents(
    [
        "Hi there!",
        "Oh, hello!",
        "What's your name?",
        "My friends call me World",
        "Hello World!"
    ]
)
len(embeddings), len(embeddings[0])

    (5, 1536)

embed_query

Embed single query

Embed a single piece of text for the purpose of comparing it to other embedded pieces of text.

embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")
embedded_query[:5]

    [0.0053587136790156364, -0.0004999046213924885, 0.038883671164512634, -0.003001077566295862, -0.00900818221271038]
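A compact, runnable restatement of the setup and the two methods above. It assumes the openai package is installed and OPENAI_API_KEY is set (or pass openai_api_key explicitly, as shown in the comment).

```python
from langchain.embeddings import OpenAIEmbeddings

# Reads OPENAI_API_KEY from the environment; alternatively pass it directly:
# embeddings_model = OpenAIEmbeddings(openai_api_key="...")
embeddings_model = OpenAIEmbeddings()

# embed_documents: many texts in, one vector per text out.
documents = [
    "Hi there!",
    "Oh, hello!",
    "What's your name?",
    "My friends call me World",
    "Hello World!",
]
doc_vectors = embeddings_model.embed_documents(documents)
print(len(doc_vectors), len(doc_vectors[0]))  # e.g. 5 vectors of length 1536

# embed_query: a single text in, a single vector out.
query_vector = embeddings_model.embed_query(
    "What was the name mentioned in the conversation?"
)
print(query_vector[:5])
```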
Head to Integrations for documentation on built-in integrations with text embedding model providers.
1,210
Lost in the middle: The problem with long contexts | 🦜️🔗 Langchain
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
1,211
Lost in the middle: The problem with long contexts

No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents. In brief: when models must access relevant information in the middle of long contexts, they tend to ignore the provided documents.
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
1,212
See: https://arxiv.org/abs/2307.03172

To avoid this issue, you can re-order documents after retrieval so that performance does not degrade.

import os
import chromadb
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.document_transformers import (
    LongContextReorder,
)
from langchain.chains import StuffDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

# Get embeddings.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

texts = [
    "Basquetball is a great sport.",
    "Fly me to the moon is one of my favourite songs.",
    "The Celtics are my favourite team.",
    "This is a document about the Boston Celtics",
    "I simply love going to the movies",
    "The Boston Celtics won the game by 20 points",
    "This is just a random text.",
    "Elden Ring is one of the best games in the last 15 years.",
    "L. Kornet is one of the best Celtics players.",
    "Larry Bird was an iconic NBA player.",
]

# Create a retriever
retriever = Chroma.from_texts(texts, embedding=embeddings).as_retriever(
    search_kwargs={"k": 10}
)
query = "What can you tell me about the Celtics?"

# Get relevant documents ordered by relevance score
docs = retriever.get_relevant_documents(query)
docs

    [Document(page_content='This is a document about the Boston Celtics', metadata={}),
     Document(page_content='The Celtics are my favourite team.', metadata={}),
     Document(page_content='L. Kornet is one of the best Celtics players.', metadata={}),
     Document(page_content='The Boston Celtics won the game by 20 points', metadata={}),
     Document(page_content='Larry Bird was an iconic NBA player.', metadata={}),
     Document(page_content='Elden Ring is one of the best games in the last 15 years.', metadata={}),
     Document(page_content='Basquetball is a great sport.', metadata={}),
     Document(page_content='I simply love going to the movies', metadata={}),
     Document(page_content='Fly me to the
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
1,213
Document(page_content='Fly me to the moon is one of my favourite songs.', metadata={}),
     Document(page_content='This is just a random text.', metadata={})]

# Reorder the documents:
# Less relevant document will be at the middle of the list and more
# relevant elements at beginning / end.
reordering = LongContextReorder()
reordered_docs = reordering.transform_documents(docs)

# Confirm that the 4 relevant documents are at beginning and end.
reordered_docs

    [Document(page_content='The Celtics are my favourite team.', metadata={}),
     Document(page_content='The Boston Celtics won the game by 20 points', metadata={}),
     Document(page_content='Elden Ring is one of the best games in the last 15 years.', metadata={}),
     Document(page_content='I simply love going to the movies', metadata={}),
     Document(page_content='This is just a random text.', metadata={}),
     Document(page_content='Fly me to the moon is one of my favourite songs.', metadata={}),
     Document(page_content='Basquetball is a great sport.', metadata={}),
     Document(page_content='Larry Bird was an iconic NBA player.', metadata={}),
     Document(page_content='L. Kornet is one of the best Celtics players.', metadata={}),
     Document(page_content='This is a document about the Boston Celtics', metadata={})]

# We prepare and run a custom Stuff chain with reordered docs as context.

# Override prompts
document_prompt = PromptTemplate(
    input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
stuff_prompt_override = """Given this text extracts:
-----
{context}
-----
Please answer the following question:
{query}"""
prompt = PromptTemplate(
    template=stuff_prompt_override, input_variables=["context", "query"]
)

# Instantiate the chain
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = StuffDocumentsChain(
    llm_chain=llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name,
)
chain.run(input_documents=reordered_docs,
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
1,214
query=query)
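Reassembled from the snippets spread across the rows above, a minimal sketch of the full reorder-then-answer pipeline. It assumes the same ten example texts, a locally available sentence-transformers model, and an OpenAI key for the final LLM call.

```python
from langchain.chains import LLMChain, StuffDocumentsChain
from langchain.document_transformers import LongContextReorder
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma

texts = [
    "Basquetball is a great sport.",
    "Fly me to the moon is one of my favourite songs.",
    "The Celtics are my favourite team.",
    "This is a document about the Boston Celtics",
    "I simply love going to the movies",
    "The Boston Celtics won the game by 20 points",
    "This is just a random text.",
    "Elden Ring is one of the best games in the last 15 years.",
    "L. Kornet is one of the best Celtics players.",
    "Larry Bird was an iconic NBA player.",
]

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
retriever = Chroma.from_texts(texts, embedding=embeddings).as_retriever(
    search_kwargs={"k": 10}
)

query = "What can you tell me about the Celtics?"
docs = retriever.get_relevant_documents(query)                    # ordered by relevance
reordered_docs = LongContextReorder().transform_documents(docs)   # most relevant at the ends

prompt = PromptTemplate(
    template="Given this text extracts:\n-----\n{context}\n-----\nPlease answer the following question:\n{query}",
    input_variables=["context", "query"],
)
chain = StuffDocumentsChain(
    llm_chain=LLMChain(llm=OpenAI(), prompt=prompt),
    document_prompt=PromptTemplate(
        input_variables=["page_content"], template="{page_content}"
    ),
    document_variable_name="context",
)
print(chain.run(input_documents=reordered_docs, query=query))
```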
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.
1,215
Split by tokens | 🦜️🔗 Langchain
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.
1,216
Split by tokens

Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.

tiktoken

tiktoken is a fast BPE tokenizer created by OpenAI. We can use it to estimate tokens used. It will probably be more accurate for the OpenAI models.

How the text is split: by character passed in.
How the chunk size is measured: by tiktoken tokenizer.

#!pip install tiktoken

# This is a long document we can split up.
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

    Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.
1,217
With a duty to one another to the American people to the Constitution.

Note that if we use CharacterTextSplitter.from_tiktoken_encoder, text is only split by CharacterTextSplitter and the tiktoken tokenizer is used to merge splits. That means a split can be larger than the chunk size measured by the tiktoken tokenizer. We can use RecursiveCharacterTextSplitter.from_tiktoken_encoder to make sure splits are not larger than the chunk size of tokens allowed by the language model, where each split will be recursively split if it has a larger size.

We can also load a tiktoken splitter directly, which ensures each split is smaller than the chunk size.

from langchain.text_splitter import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

spaCy

spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. Another alternative to NLTK is to use the spaCy tokenizer.

How the text is split: by spaCy tokenizer.
How the chunk size is measured: by number of characters.

#!pip install spacy

# This is a long document we can split up.
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain.text_splitter import SpacyTextSplitter

text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

    Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always
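A small sketch contrasting the two tiktoken-based options described above. The state_of_the_union.txt path is carried over from the docs and is an assumption.

```python
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter

# Assumed path from the docs; any long text file works.
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

# Splits on the separator character, then merges pieces using tiktoken to
# measure size; individual splits may exceed chunk_size tokens.
char_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)

# Splits directly on tiktoken tokens; every chunk stays within chunk_size.
token_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)

print(char_splitter.split_text(state_of_the_union)[0])
print(token_splitter.split_text(state_of_the_union)[0])
```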
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.
1,218
an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.

SentenceTransformers

The SentenceTransformersTokenTextSplitter is a specialized text splitter for use with the sentence-transformer models. The default behaviour is to split the text into chunks that fit the token window of the sentence transformer model that you would like to use.

from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
text = "Lorem "

count_start_and_stop_tokens = 2
text_token_count = splitter.count_tokens(text=text) - count_start_and_stop_tokens
print(text_token_count)

    2

token_multiplier = splitter.maximum_tokens_per_chunk // text_token_count + 1

# `text_to_split` does not fit in a single chunk
text_to_split = text * token_multiplier

print(f"tokens in text to split: {splitter.count_tokens(text=text_to_split)}")

    tokens in text to split: 514

text_chunks = splitter.split_text(text=text_to_split)
print(text_chunks[1])

    lorem

NLTK

The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.

Rather than just splitting on "\n\n", we can use NLTK to split based on NLTK tokenizers.

How the text is split: by NLTK tokenizer.
How the chunk size is measured: by number of characters.

# pip install nltk

# This is a long document we can split up.
with
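A short hedged sketch built only from the SentenceTransformersTokenTextSplitter methods shown above (split_text, count_tokens, maximum_tokens_per_chunk), printing how each produced chunk compares to the model window.

```python
from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)

# "Lorem " repeated is just a stand-in for any text longer than one model window.
long_text = "Lorem " * 1000

chunks = splitter.split_text(text=long_text)
limit = splitter.maximum_tokens_per_chunk
for i, chunk in enumerate(chunks):
    # count_tokens includes the start/stop tokens the model adds to each chunk.
    tokens = splitter.count_tokens(text=chunk)
    print(f"chunk {i}: {tokens} tokens (window limit {limit})")
```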
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.
1,219
This is a long document we can split up.

with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain.text_splitter import NLTKTextSplitter

text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

    Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies.

Hugging Face tokenizer

Hugging Face has many tokenizers. We can use a Hugging Face tokenizer, GPT2TokenizerFast, to count the text length in tokens.

How the text is split: by character passed in.
How the chunk size is measured: by number of tokens calculated by the Hugging Face tokenizer.

from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# This is a long document we can split up.
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter =
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.
1,220
import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

    Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.
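The GPT2TokenizerFast example is split across two rows above; here it is as one self-contained sketch. The file path is the same assumed path from the docs, and the transformers package must be installed.

```python
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Assumed path carried over from the docs.
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

# Text is split by character; chunk size is measured in GPT-2 tokens.
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
```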
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model.
Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model. ->: import CharacterTextSplittertext_splitter = CharacterTextSplitter.from_huggingface_tokenizer( tokenizer, chunk_size=100, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)print(texts[0]) Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.PreviousRecursively split by characterNextLost in the middle: The problem with long contextstiktokenspaCySentenceTransformersNLTKHugging Face tokenizerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
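The same page also covers tiktoken; a sketch of the equivalent built-in helper, assuming the tiktoken package is installed (the sample text is illustrative):

from langchain.text_splitter import CharacterTextSplitter

# Chunk size is measured in tiktoken tokens rather than characters.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)

sample = "\n\n".join(["Language models have a token limit."] * 40)
texts = text_splitter.split_text(sample)
print(texts[0])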
1,221
Split code | 🦜️🔗 Langchain
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: Split code | 🦜️🔗 Langchain
1,222
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersSplit codeOn this pageSplit codeCodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)# Full list of support languages[e.value for e in Language] ['cpp', 'go', 'java', 'kotlin', 'js', 'ts', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html', 'sol', 'csharp']# You can also see the separators used for a given languageRecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON) ['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', '']Python​Here's an example using the PythonTextSplitter:PYTHON_CODE = """def hello_world(): print("Hello, World!")# Call the functionhello_world()"""python_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PYTHON, chunk_size=50, chunk_overlap=0)python_docs = python_splitter.create_documents([PYTHON_CODE])python_docs [Document(page_content='def hello_world():\n print("Hello, World!")', metadata={}), Document(page_content='# Call the function\nhello_world()', metadata={})]JS​Here's an example using the JS text splitter:JS_CODE = """function helloWorld() { console.log("Hello,
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersSplit codeOn this pageSplit codeCodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)# Full list of support languages[e.value for e in Language] ['cpp', 'go', 'java', 'kotlin', 'js', 'ts', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html', 'sol', 'csharp']# You can also see the separators used for a given languageRecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON) ['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', '']Python​Here's an example using the PythonTextSplitter:PYTHON_CODE = """def hello_world(): print("Hello, World!")# Call the functionhello_world()"""python_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PYTHON, chunk_size=50, chunk_overlap=0)python_docs = python_splitter.create_documents([PYTHON_CODE])python_docs [Document(page_content='def hello_world():\n print("Hello, World!")', metadata={}), Document(page_content='# Call the function\nhello_world()', metadata={})]JS​Here's an example using the JS text splitter:JS_CODE = """function helloWorld() { console.log("Hello,
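The same pattern works for any entry in the Language enum; a sketch for Java (the snippet and chunk size are illustrative):

from langchain.text_splitter import RecursiveCharacterTextSplitter, Language

JAVA_CODE = """
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
"""

# Inspect the separators tried for Java, in order.
print(RecursiveCharacterTextSplitter.get_separators_for_language(Language.JAVA))

java_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JAVA, chunk_size=60, chunk_overlap=0
)
java_docs = java_splitter.create_documents([JAVA_CODE])
for doc in java_docs:
    print(repr(doc.page_content))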
1,223
= """function helloWorld() { console.log("Hello, World!");}// Call the functionhelloWorld();"""js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)js_docs = js_splitter.create_documents([JS_CODE])js_docs [Document(page_content='function helloWorld() {\n console.log("Hello, World!");\n}', metadata={}), Document(page_content='// Call the function\nhelloWorld();', metadata={})]TS​Here's an example using the TS text splitter:TS_CODE = """function helloWorld(): void { console.log("Hello, World!");}// Call the functionhelloWorld();"""ts_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.TS, chunk_size=60, chunk_overlap=0)ts_docs = ts_splitter.create_documents([TS_CODE])ts_docs [Document(page_content='function helloWorld(): void {\n console.log("Hello, World!");\n}', metadata={}), Document(page_content='// Call the function\nhelloWorld();', metadata={})]Markdown​Here's an example using the Markdown text splitter:markdown_text = """# 🦜️🔗 LangChain⚡ Building applications with LLMs through composability ⚡## Quick Install```bash# Hopefully this code block isn't splitpip install langchain```As an open-source project in a rapidly developing field, we are extremely open to contributions."""md_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)md_docs = md_splitter.create_documents([markdown_text])md_docs [Document(page_content='# 🦜️🔗 LangChain', metadata={}), Document(page_content='⚡ Building applications with LLMs through composability ⚡', metadata={}), Document(page_content='## Quick Install', metadata={}), Document(page_content="```bash\n# Hopefully this code block isn't split", metadata={}), Document(page_content='pip install langchain', metadata={}), Document(page_content='```', metadata={}), Document(page_content='As an open-source project in a
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: = """function helloWorld() { console.log("Hello, World!");}// Call the functionhelloWorld();"""js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)js_docs = js_splitter.create_documents([JS_CODE])js_docs [Document(page_content='function helloWorld() {\n console.log("Hello, World!");\n}', metadata={}), Document(page_content='// Call the function\nhelloWorld();', metadata={})]TS​Here's an example using the TS text splitter:TS_CODE = """function helloWorld(): void { console.log("Hello, World!");}// Call the functionhelloWorld();"""ts_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.TS, chunk_size=60, chunk_overlap=0)ts_docs = ts_splitter.create_documents([TS_CODE])ts_docs [Document(page_content='function helloWorld(): void {\n console.log("Hello, World!");\n}', metadata={}), Document(page_content='// Call the function\nhelloWorld();', metadata={})]Markdown​Here's an example using the Markdown text splitter:markdown_text = """# 🦜️🔗 LangChain⚡ Building applications with LLMs through composability ⚡## Quick Install```bash# Hopefully this code block isn't splitpip install langchain```As an open-source project in a rapidly developing field, we are extremely open to contributions."""md_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)md_docs = md_splitter.create_documents([markdown_text])md_docs [Document(page_content='# 🦜️🔗 LangChain', metadata={}), Document(page_content='⚡ Building applications with LLMs through composability ⚡', metadata={}), Document(page_content='## Quick Install', metadata={}), Document(page_content="```bash\n# Hopefully this code block isn't split", metadata={}), Document(page_content='pip install langchain', metadata={}), Document(page_content='```', metadata={}), Document(page_content='As an open-source project in a
1,224
an open-source project in a rapidly developing field, we', metadata={}), Document(page_content='are extremely open to contributions.', metadata={})]Latex​Here's an example on Latex text:latex_text = """\documentclass{article}\begin{document}\maketitle\section{Introduction}Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\subsection{History of LLMs}The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\subsection{Applications of LLMs}LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\end{document}"""latex_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)latex_docs = latex_splitter.create_documents([latex_text])latex_docs [Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', metadata={}), Document(page_content='\\section{Introduction}', metadata={}), Document(page_content='Large language models (LLMs) are a type of machine learning', metadata={}), Document(page_content='model that can be trained on vast amounts of text data to', metadata={}), Document(page_content='generate human-like language. In recent years, LLMs have', metadata={}), Document(page_content='made significant advances in a variety of natural language',
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: an open-source project in a rapidly developing field, we', metadata={}), Document(page_content='are extremely open to contributions.', metadata={})]Latex​Here's an example on Latex text:latex_text = """\documentclass{article}\begin{document}\maketitle\section{Introduction}Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\subsection{History of LLMs}The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\subsection{Applications of LLMs}LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\end{document}"""latex_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)latex_docs = latex_splitter.create_documents([latex_text])latex_docs [Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', metadata={}), Document(page_content='\\section{Introduction}', metadata={}), Document(page_content='Large language models (LLMs) are a type of machine learning', metadata={}), Document(page_content='model that can be trained on vast amounts of text data to', metadata={}), Document(page_content='generate human-like language. In recent years, LLMs have', metadata={}), Document(page_content='made significant advances in a variety of natural language',
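Note that the Latex example above passes Language.MARKDOWN to from_language even though the enum also lists 'latex'; a version that uses the Latex separators would look like this (a sketch with a short illustrative snippet; its output is not shown on the page and may differ from the split above):

from langchain.text_splitter import RecursiveCharacterTextSplitter, Language

# Raw string so the LaTeX backslashes survive (e.g. \begin is not read as \b).
latex_sample = r"""\documentclass{article}
\begin{document}
\section{Introduction}
Large language models are trained on vast amounts of text data.
\subsection{History}
Early models were limited by data and compute.
\end{document}
"""

latex_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.LATEX, chunk_size=60, chunk_overlap=0
)
latex_docs = latex_splitter.create_documents([latex_sample])
for doc in latex_docs:
    print(repr(doc.page_content))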
1,225
advances in a variety of natural language', metadata={}), Document(page_content='processing tasks, including language translation, text', metadata={}), Document(page_content='generation, and sentiment analysis.', metadata={}), Document(page_content='\\subsection{History of LLMs}', metadata={}), Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,', metadata={}), Document(page_content='but they were limited by the amount of data that could be', metadata={}), Document(page_content='processed and the computational power available at the', metadata={}), Document(page_content='time. In the past decade, however, advances in hardware and', metadata={}), Document(page_content='software have made it possible to train LLMs on massive', metadata={}), Document(page_content='datasets, leading to significant improvements in', metadata={}), Document(page_content='performance.', metadata={}), Document(page_content='\\subsection{Applications of LLMs}', metadata={}), Document(page_content='LLMs have many applications in industry, including', metadata={}), Document(page_content='chatbots, content creation, and virtual assistants. They', metadata={}), Document(page_content='can also be used in academia for research in linguistics,', metadata={}), Document(page_content='psychology, and computational linguistics.', metadata={}), Document(page_content='\\end{document}', metadata={})]HTML​Here's an example using an HTML text splitter:html_text = """<!DOCTYPE html><html> <head> <title>🦜️🔗 LangChain</title> <style> body { font-family: Arial, sans-serif; } h1 { color: darkblue; } </style> </head> <body> <div> <h1>🦜️🔗 LangChain</h1> <p>⚡ Building applications with LLMs through composability ⚡</p> </div> <div> As an open-source
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: advances in a variety of natural language', metadata={}), Document(page_content='processing tasks, including language translation, text', metadata={}), Document(page_content='generation, and sentiment analysis.', metadata={}), Document(page_content='\\subsection{History of LLMs}', metadata={}), Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,', metadata={}), Document(page_content='but they were limited by the amount of data that could be', metadata={}), Document(page_content='processed and the computational power available at the', metadata={}), Document(page_content='time. In the past decade, however, advances in hardware and', metadata={}), Document(page_content='software have made it possible to train LLMs on massive', metadata={}), Document(page_content='datasets, leading to significant improvements in', metadata={}), Document(page_content='performance.', metadata={}), Document(page_content='\\subsection{Applications of LLMs}', metadata={}), Document(page_content='LLMs have many applications in industry, including', metadata={}), Document(page_content='chatbots, content creation, and virtual assistants. They', metadata={}), Document(page_content='can also be used in academia for research in linguistics,', metadata={}), Document(page_content='psychology, and computational linguistics.', metadata={}), Document(page_content='\\end{document}', metadata={})]HTML​Here's an example using an HTML text splitter:html_text = """<!DOCTYPE html><html> <head> <title>🦜️🔗 LangChain</title> <style> body { font-family: Arial, sans-serif; } h1 { color: darkblue; } </style> </head> <body> <div> <h1>🦜️🔗 LangChain</h1> <p>⚡ Building applications with LLMs through composability ⚡</p> </div> <div> As an open-source
1,226
</div> <div> As an open-source project in a rapidly developing field, we are extremely open to contributions. </div> </body></html>"""html_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HTML, chunk_size=60, chunk_overlap=0)html_docs = html_splitter.create_documents([html_text])html_docs [Document(page_content='<!DOCTYPE html>\n<html>', metadata={}), Document(page_content='<head>\n <title>🦜️🔗 LangChain</title>', metadata={}), Document(page_content='<style>\n body {\n font-family: Aria', metadata={}), Document(page_content='l, sans-serif;\n }\n h1 {', metadata={}), Document(page_content='color: darkblue;\n }\n </style>\n </head', metadata={}), Document(page_content='>', metadata={}), Document(page_content='<body>', metadata={}), Document(page_content='<div>\n <h1>🦜️🔗 LangChain</h1>', metadata={}), Document(page_content='<p>⚡ Building applications with LLMs through composability ⚡', metadata={}), Document(page_content='</p>\n </div>', metadata={}), Document(page_content='<div>\n As an open-source project in a rapidly dev', metadata={}), Document(page_content='eloping field, we are extremely open to contributions.', metadata={}), Document(page_content='</div>\n </body>\n</html>', metadata={})]Solidity​Here's an example using the Solidity text splitter:SOL_CODE = """pragma solidity ^0.8.20;contract HelloWorld { function add(uint a, uint b) pure public returns(uint) { return a + b; }}"""sol_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.SOL, chunk_size=128, chunk_overlap=0)sol_docs = sol_splitter.create_documents([SOL_CODE])sol_docs[ Document(page_content='pragma solidity ^0.8.20;', metadata={}), Document(page_content='contract HelloWorld {\n function add(uint a, uint b) pure public
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: </div> <div> As an open-source project in a rapidly developing field, we are extremely open to contributions. </div> </body></html>"""html_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HTML, chunk_size=60, chunk_overlap=0)html_docs = html_splitter.create_documents([html_text])html_docs [Document(page_content='<!DOCTYPE html>\n<html>', metadata={}), Document(page_content='<head>\n <title>🦜️🔗 LangChain</title>', metadata={}), Document(page_content='<style>\n body {\n font-family: Aria', metadata={}), Document(page_content='l, sans-serif;\n }\n h1 {', metadata={}), Document(page_content='color: darkblue;\n }\n </style>\n </head', metadata={}), Document(page_content='>', metadata={}), Document(page_content='<body>', metadata={}), Document(page_content='<div>\n <h1>🦜️🔗 LangChain</h1>', metadata={}), Document(page_content='<p>⚡ Building applications with LLMs through composability ⚡', metadata={}), Document(page_content='</p>\n </div>', metadata={}), Document(page_content='<div>\n As an open-source project in a rapidly dev', metadata={}), Document(page_content='eloping field, we are extremely open to contributions.', metadata={}), Document(page_content='</div>\n </body>\n</html>', metadata={})]Solidity​Here's an example using the Solidity text splitter:SOL_CODE = """pragma solidity ^0.8.20;contract HelloWorld { function add(uint a, uint b) pure public returns(uint) { return a + b; }}"""sol_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.SOL, chunk_size=128, chunk_overlap=0)sol_docs = sol_splitter.create_documents([SOL_CODE])sol_docs[ Document(page_content='pragma solidity ^0.8.20;', metadata={}), Document(page_content='contract HelloWorld {\n function add(uint a, uint b) pure public
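create_documents also accepts per-input metadata that is carried onto every resulting chunk, which is handy when splitting several source files; a sketch (the file names are hypothetical):

from langchain.text_splitter import RecursiveCharacterTextSplitter, Language

snippets = [
    'function helloWorld() {\n  console.log("Hello, World!");\n}',
    'function goodbyeWorld() {\n  console.log("Goodbye, World!");\n}',
]
metadatas = [{"source": "hello.js"}, {"source": "goodbye.js"}]  # hypothetical file names

js_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JS, chunk_size=60, chunk_overlap=0
)
docs = js_splitter.create_documents(snippets, metadatas=metadatas)
for doc in docs:
    print(doc.metadata, repr(doc.page_content))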
1,227
{\n function add(uint a, uint b) pure public returns(uint) {\n return a + b;\n }\n}', metadata={})]C#‚ÄãHere's an example using the C# text splitter:using System;class Program{ static void Main() { int age = 30; // Change the age value as needed // Categorize the age without any console output if (age < 18) { // Age is under 18 } else if (age >= 18 && age < 65) { // Age is an adult } else { // Age is a senior citizen } }} [Document(page_content='using System;', metadata={}), Document(page_content='class Program\n{', metadata={}), Document(page_content='static void', metadata={}), Document(page_content='Main()', metadata={}), Document(page_content='{', metadata={}), Document(page_content='int age', metadata={}), Document(page_content='= 30; // Change', metadata={}), Document(page_content='the age value', metadata={}), Document(page_content='as needed', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Categorize the', metadata={}), Document(page_content='age without any', metadata={}), Document(page_content='console output', metadata={}), Document(page_content='if (age', metadata={}), Document(page_content='< 18)', metadata={}), Document(page_content='{', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Age is under 18', metadata={}), Document(page_content='}', metadata={}), Document(page_content='else if', metadata={}), Document(page_content='(age >= 18 &&', metadata={}), Document(page_content='age < 65)', metadata={}), Document(page_content='{', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Age is an adult', metadata={}), Document(page_content='}', metadata={}), Document(page_content='else', metadata={}), Document(page_content='{',
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: {\n function add(uint a, uint b) pure public returns(uint) {\n return a + b;\n }\n}', metadata={})]C#‚ÄãHere's an example using the C# text splitter:using System;class Program{ static void Main() { int age = 30; // Change the age value as needed // Categorize the age without any console output if (age < 18) { // Age is under 18 } else if (age >= 18 && age < 65) { // Age is an adult } else { // Age is a senior citizen } }} [Document(page_content='using System;', metadata={}), Document(page_content='class Program\n{', metadata={}), Document(page_content='static void', metadata={}), Document(page_content='Main()', metadata={}), Document(page_content='{', metadata={}), Document(page_content='int age', metadata={}), Document(page_content='= 30; // Change', metadata={}), Document(page_content='the age value', metadata={}), Document(page_content='as needed', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Categorize the', metadata={}), Document(page_content='age without any', metadata={}), Document(page_content='console output', metadata={}), Document(page_content='if (age', metadata={}), Document(page_content='< 18)', metadata={}), Document(page_content='{', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Age is under 18', metadata={}), Document(page_content='}', metadata={}), Document(page_content='else if', metadata={}), Document(page_content='(age >= 18 &&', metadata={}), Document(page_content='age < 65)', metadata={}), Document(page_content='{', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Age is an adult', metadata={}), Document(page_content='}', metadata={}), Document(page_content='else', metadata={}), Document(page_content='{',
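The C# example above shows the source and the resulting chunks but not the splitter call itself; it was presumably produced along these lines (a sketch: the chunk size is a guess chosen to force chunks as small as those shown, and the snippet is abbreviated):

from langchain.text_splitter import RecursiveCharacterTextSplitter, Language

CSHARP_CODE = """
using System;
class Program
{
    static void Main()
    {
        int age = 30; // Change the age value as needed
        if (age < 18)
        {
            // Age is under 18
        }
    }
}
"""

csharp_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.CSHARP, chunk_size=30, chunk_overlap=0
)
csharp_docs = csharp_splitter.create_documents([CSHARP_CODE])
for doc in csharp_docs:
    print(repr(doc.page_content))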
1,228
metadata={}), Document(page_content='{', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Age is a senior', metadata={}), Document(page_content='citizen', metadata={}), Document(page_content='}\n }', metadata={}), Document(page_content='}', metadata={})]PreviousSplit by characterNextMarkdownHeaderTextSplitterPythonJSTSMarkdownLatexHTMLSolidityC#CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language.
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum Language and specify the language. ->: metadata={}), Document(page_content='{', metadata={}), Document(page_content='//', metadata={}), Document(page_content='Age is a senior', metadata={}), Document(page_content='citizen', metadata={}), Document(page_content='}\n }', metadata={}), Document(page_content='}', metadata={})]PreviousSplit by characterNextMarkdownHeaderTextSplitterPythonJSTSMarkdownLatexHTMLSolidityC#CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,229
Split by character | 🦜️🔗 Langchain
This is the simplest method. It splits based on characters (by default "\n\n") and measures chunk length by number of characters.
This is the simplest method. It splits based on characters (by default "\n\n") and measures chunk length by number of characters. ->: Split by character | 🦜️🔗 Langchain
1,230
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersSplit by characterSplit by characterThis is the simplest method. This splits based on characters (by default "\n\n") and measure chunk length by number of characters.How the text is split: by single character.How the chunk size is measured: by number of characters.# This is a long document we can split up.with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read()from langchain.text_splitter import CharacterTextSplittertext_splitter = CharacterTextSplitter( separator = "\n\n", chunk_size = 1000, chunk_overlap = 200, length_function = len, is_separator_regex = False,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he
This is the simplest method. It splits based on characters (by default "\n\n") and measures chunk length by number of characters.
This is the simplest method. This splits based on characters (by default "\n\n") and measure chunk length by number of characters. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersSplit by characterSplit by characterThis is the simplest method. This splits based on characters (by default "\n\n") and measure chunk length by number of characters.How the text is split: by single character.How the chunk size is measured: by number of characters.# This is a long document we can split up.with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read()from langchain.text_splitter import CharacterTextSplittertext_splitter = CharacterTextSplitter( separator = "\n\n", chunk_size = 1000, chunk_overlap = 200, length_function = len, is_separator_regex = False,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he
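CharacterTextSplitter can also operate on Document objects (for example, ones returned by a loader) via split_documents, which keeps each document's metadata on the resulting chunks; a minimal sketch with an inline Document (the source name is hypothetical):

from langchain.schema import Document
from langchain.text_splitter import CharacterTextSplitter

docs = [
    Document(
        page_content="First paragraph.\n\nSecond paragraph.\n\nThird paragraph.",
        metadata={"source": "example.txt"},  # hypothetical source name
    )
]

text_splitter = CharacterTextSplitter(separator="\n\n", chunk_size=20, chunk_overlap=0)
split_docs = text_splitter.split_documents(docs)
for d in split_docs:
    print(d.metadata, repr(d.page_content))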
1,231
could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0Here's an example of passing metadata along with the documents, notice that it is split along with the documents.metadatas = [{"document": 1}, {"document": 2}]documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)print(documents[0]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0text_splitter.split_text(state_of_the_union)[0] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together
This is the simplest method. It splits based on characters (by default "\n\n") and measures chunk length by number of characters.
This is the simplest method. This splits based on characters (by default "\n\n") and measure chunk length by number of characters. ->: could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0Here's an example of passing metadata along with the documents, notice that it is split along with the documents.metadatas = [{"document": 1}, {"document": 2}]documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)print(documents[0]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0text_splitter.split_text(state_of_the_union)[0] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together
1,232
kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'PreviousHTMLHeaderTextSplitterNextSplit codeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This is the simplest method. It splits based on characters (by default "\n\n") and measures chunk length by number of characters.
This is the simplest method. This splits based on characters (by default "\n\n") and measure chunk length by number of characters. ->: kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'PreviousHTMLHeaderTextSplitterNextSplit codeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,233
Recursively split by character | 🦜️🔗 Langchain
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text. ->: Recursively split by character | 🦜️🔗 Langchain
1,234
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersRecursively split by characterRecursively split by characterThis text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.How the text is split: by list of characters.How the chunk size is measured: by number of characters.# This is a long document we can split up.with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 100, chunk_overlap = 20, length_function = len, is_separator_regex = False,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0])print(texts[1]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0 page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={}
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersRecursively split by characterRecursively split by characterThis text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.How the text is split: by list of characters.How the chunk size is measured: by number of characters.# This is a long document we can split up.with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 100, chunk_overlap = 20, length_function = len, is_separator_regex = False,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0])print(texts[1]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0 page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={}
1,235
My fellow Americans.' lookup_str='' metadata={} lookup_index=0text_splitter.split_text(state_of_the_union)[:2] ['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and', 'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']PreviousMarkdownHeaderTextSplitterNextSplit by tokensCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text. ->: My fellow Americans.' lookup_str='' metadata={} lookup_index=0text_splitter.split_text(state_of_the_union)[:2] ['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and', 'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']PreviousMarkdownHeaderTextSplitterNextSplit by tokensCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
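The separator list itself can be overridden when the defaults do not fit the text; a sketch that also tries sentence-ending periods before falling back to spaces (the sample string is illustrative):

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Override the default ["\n\n", "\n", " ", ""] list.
    separators=["\n\n", "\n", ". ", " ", ""],
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
)

sample = (
    "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. "
    "Members of Congress and the Cabinet. Justices of the Supreme Court."
)
print(text_splitter.split_text(sample))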
1,236
MarkdownHeaderTextSplitter | 🦜️🔗 Langchain
Motivation
Motivation ->: MarkdownHeaderTextSplitter | 🦜️🔗 Langchain
1,237
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersMarkdownHeaderTextSplitterOn this pageMarkdownHeaderTextSplitterMotivation​Many chat or Q+A applications involve chunking input documents prior to embedding and vector storage.These notes from Pinecone provide some useful tips:When a full paragraph or document is embedded, the embedding process considers both the overall context and the relationships between the sentences and phrases within the text. This can result in a more comprehensive vector representation that captures the broader meaning and themes of the text.As mentioned, chunking often aims to keep text with common context together. With this in mind, we might want to specifically honor the structure of the document itself. For example, a markdown file is organized by headers. Creating chunks within specific header groups is an intuitive idea. To address this challenge, we can use MarkdownHeaderTextSplitter. This will split a markdown file by a specified set of headers. For example, if we want to split this markdown:md = '# Foo\n\n ## Bar\n\nHi this is Jim \nHi this is Joe\n\n ## Baz\n\n Hi this is Molly' We can specify the headers to split on:[("#", "Header 1"),("##", "Header 2")]And content is grouped or split by common headers:{'content': 'Hi this is Jim \nHi this is Joe', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Bar'}}{'content': 'Hi this is Molly', 'metadata': {'Header
Motivation
Motivation ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersMarkdownHeaderTextSplitterOn this pageMarkdownHeaderTextSplitterMotivation​Many chat or Q+A applications involve chunking input documents prior to embedding and vector storage.These notes from Pinecone provide some useful tips:When a full paragraph or document is embedded, the embedding process considers both the overall context and the relationships between the sentences and phrases within the text. This can result in a more comprehensive vector representation that captures the broader meaning and themes of the text.As mentioned, chunking often aims to keep text with common context together. With this in mind, we might want to specifically honor the structure of the document itself. For example, a markdown file is organized by headers. Creating chunks within specific header groups is an intuitive idea. To address this challenge, we can use MarkdownHeaderTextSplitter. This will split a markdown file by a specified set of headers. For example, if we want to split this markdown:md = '# Foo\n\n ## Bar\n\nHi this is Jim \nHi this is Joe\n\n ## Baz\n\n Hi this is Molly' We can specify the headers to split on:[("#", "Header 1"),("##", "Header 2")]And content is grouped or split by common headers:{'content': 'Hi this is Jim \nHi this is Joe', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Bar'}}{'content': 'Hi this is Molly', 'metadata': {'Header
1,238
'Hi this is Molly', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Baz'}}Let's have a look at some examples below.from langchain.text_splitter import MarkdownHeaderTextSplittermarkdown_document = "# Foo\n\n ## Bar\n\nHi this is Jim\n\nHi this is Joe\n\n ### Boo \n\n Hi this is Lance \n\n ## Baz\n\n Hi this is Molly"headers_to_split_on = [ ("#", "Header 1"), ("##", "Header 2"), ("###", "Header 3"),]markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)md_header_splits = markdown_splitter.split_text(markdown_document)md_header_splits [Document(page_content='Hi this is Jim \nHi this is Joe', metadata={'Header 1': 'Foo', 'Header 2': 'Bar'}), Document(page_content='Hi this is Lance', metadata={'Header 1': 'Foo', 'Header 2': 'Bar', 'Header 3': 'Boo'}), Document(page_content='Hi this is Molly', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'})]type(md_header_splits[0]) langchain.schema.document.DocumentWithin each markdown group we can then apply any text splitter we want. markdown_document = "# Intro \n\n ## History \n\n Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9] \n\n Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files. \n\n ## Rise and divergence \n\n As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \n\n additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \n\n #### Standardization \n\n From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort. \n\n ## Implementations \n\n Implementations of Markdown are available for over a dozen programming
Motivation
Motivation ->: 'Hi this is Molly', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Baz'}}Let's have a look at some examples below.from langchain.text_splitter import MarkdownHeaderTextSplittermarkdown_document = "# Foo\n\n ## Bar\n\nHi this is Jim\n\nHi this is Joe\n\n ### Boo \n\n Hi this is Lance \n\n ## Baz\n\n Hi this is Molly"headers_to_split_on = [ ("#", "Header 1"), ("##", "Header 2"), ("###", "Header 3"),]markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)md_header_splits = markdown_splitter.split_text(markdown_document)md_header_splits [Document(page_content='Hi this is Jim \nHi this is Joe', metadata={'Header 1': 'Foo', 'Header 2': 'Bar'}), Document(page_content='Hi this is Lance', metadata={'Header 1': 'Foo', 'Header 2': 'Bar', 'Header 3': 'Boo'}), Document(page_content='Hi this is Molly', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'})]type(md_header_splits[0]) langchain.schema.document.DocumentWithin each markdown group we can then apply any text splitter we want. markdown_document = "# Intro \n\n ## History \n\n Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9] \n\n Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files. \n\n ## Rise and divergence \n\n As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \n\n additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \n\n #### Standardization \n\n From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort. \n\n ## Implementations \n\n Implementations of Markdown are available for over a dozen programming
1,239
are available for over a dozen programming languages."headers_to_split_on = [ ("#", "Header 1"), ("##", "Header 2"),]# MD splitsmarkdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)md_header_splits = markdown_splitter.split_text(markdown_document)# Char-level splitsfrom langchain.text_splitter import RecursiveCharacterTextSplitterchunk_size = 250chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(md_header_splits)splits [Document(page_content='Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9]', metadata={'Header 1': 'Intro', 'Header 2': 'History'}), Document(page_content='Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files.', metadata={'Header 1': 'Intro', 'Header 2': 'History'}), Document(page_content='As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \nadditional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \n#### Standardization', metadata={'Header 1': 'Intro', 'Header 2': 'Rise and divergence'}), Document(page_content='#### Standardization \nFrom 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort.', metadata={'Header 1': 'Intro', 'Header 2': 'Rise and divergence'}), Document(page_content='Implementations of Markdown are available for over a dozen programming languages.', metadata={'Header 1': 'Intro', 'Header 2': 'Implementations'})]PreviousSplit codeNextRecursively split by characterMotivationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright
Motivation
Motivation ->: are available for over a dozen programming languages."headers_to_split_on = [ ("#", "Header 1"), ("##", "Header 2"),]# MD splitsmarkdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)md_header_splits = markdown_splitter.split_text(markdown_document)# Char-level splitsfrom langchain.text_splitter import RecursiveCharacterTextSplitterchunk_size = 250chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(md_header_splits)splits [Document(page_content='Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9]', metadata={'Header 1': 'Intro', 'Header 2': 'History'}), Document(page_content='Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files.', metadata={'Header 1': 'Intro', 'Header 2': 'History'}), Document(page_content='As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \nadditional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \n#### Standardization', metadata={'Header 1': 'Intro', 'Header 2': 'Rise and divergence'}), Document(page_content='#### Standardization \nFrom 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort.', metadata={'Header 1': 'Intro', 'Header 2': 'Rise and divergence'}), Document(page_content='Implementations of Markdown are available for over a dozen programming languages.', metadata={'Header 1': 'Intro', 'Header 2': 'Implementations'})]PreviousSplit codeNextRecursively split by characterMotivationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright
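Because the header values land in each chunk's metadata, they can drive simple downstream filtering; a sketch that reuses the page's first example and keeps only the chunks under the "Bar" section:

from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown_document = "# Foo\n\n ## Bar\n\nHi this is Jim\n\nHi this is Joe\n\n ### Boo \n\n Hi this is Lance \n\n ## Baz\n\n Hi this is Molly"
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2"), ("###", "Header 3")]

markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_header_splits = markdown_splitter.split_text(markdown_document)

# Keep only the chunks that fall under the "Bar" section.
bar_chunks = [d for d in md_header_splits if d.metadata.get("Header 2") == "Bar"]
print(bar_chunks)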
1,240
© 2023 LangChain, Inc.
Motivation
Motivation ->: © 2023 LangChain, Inc.
1,241
HTMLHeaderTextSplitter | 🦜️🔗 Langchain
Description and motivation
Description and motivation ->: HTMLHeaderTextSplitter | 🦜️🔗 Langchain
1,242
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersHTMLHeaderTextSplitterOn this pageHTMLHeaderTextSplitterDescription and motivation​Similar in concept to the MarkdownHeaderTextSplitter, the HTMLHeaderTextSplitter is a "structure-aware" chunker that splits text at the element level and adds metadata for each header "relevant" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.Usage examples​1) With an HTML string:​from langchain.text_splitter import HTMLHeaderTextSplitterhtml_string ="""<!DOCTYPE html><html><body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div></body></html>"""headers_to_split_on = [ ("h1", "Header 1"),
Description and motivation
Description and motivation ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersHTMLHeaderTextSplitterSplit by characterSplit codeMarkdownHeaderTextSplitterRecursively split by characterSplit by tokensPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersText splittersHTMLHeaderTextSplitterOn this pageHTMLHeaderTextSplitterDescription and motivation​Similar in concept to the MarkdownHeaderTextSplitter, the HTMLHeaderTextSplitter is a "structure-aware" chunker that splits text at the element level and adds metadata for each header "relevant" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.Usage examples​1) With an HTML string:​from langchain.text_splitter import HTMLHeaderTextSplitterhtml_string ="""<!DOCTYPE html><html><body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div></body></html>"""headers_to_split_on = [ ("h1", "Header 1"),
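As a complement to the (truncated) example above, here is a minimal, self-contained sketch of splitting an HTML string with HTMLHeaderTextSplitter. The tiny `html_string` document and the two-level `headers_to_split_on` list are illustrative choices, not taken from the page above.

```python
from langchain.text_splitter import HTMLHeaderTextSplitter

# A tiny document with a two-level header hierarchy.
html_string = """
<html><body>
  <h1>Outer topic</h1>
  <p>Intro paragraph for the outer topic.</p>
  <h2>Inner topic</h2>
  <p>Details that belong under the inner topic.</p>
</body></html>
"""

headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
]

splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
docs = splitter.split_text(html_string)

for doc in docs:
    # Each chunk carries the headers "above" it as metadata.
    print(doc.metadata, "->", doc.page_content[:60])
```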
1,243
= [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)html_header_splits [Document(page_content='Foo'), Document(page_content='Some intro text about Foo. \nBar main section Bar subsection 1 Bar subsection 2', metadata={'Header 1': 'Foo'}), Document(page_content='Some intro text about Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section'}), Document(page_content='Some text about the first subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 1'}), Document(page_content='Some text about the second subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 2'}), Document(page_content='Baz', metadata={'Header 1': 'Foo'}), Document(page_content='Some text about Baz', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'}), Document(page_content='Some concluding text about Foo', metadata={'Header 1': 'Foo'})]2) Pipelined to another splitter, with html loaded from a web URL:​from langchain.text_splitter import RecursiveCharacterTextSplitterurl = "https://plato.stanford.edu/entries/goedel/"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"), ("h4", "Header 4"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)#for local file use html_splitter.split_text_from_file(<path_to_file>)html_header_splits = html_splitter.split_text_from_url(url)chunk_size = 500chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(html_header_splits)splits[80:85] [Document(page_content='We see that Gödel first tried to reduce the consistency problem for analysis to that of arithmetic. This seemed to require a truth
Description and motivation
Description and motivation ->: = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)html_header_splits [Document(page_content='Foo'), Document(page_content='Some intro text about Foo. \nBar main section Bar subsection 1 Bar subsection 2', metadata={'Header 1': 'Foo'}), Document(page_content='Some intro text about Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section'}), Document(page_content='Some text about the first subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 1'}), Document(page_content='Some text about the second subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 2'}), Document(page_content='Baz', metadata={'Header 1': 'Foo'}), Document(page_content='Some text about Baz', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'}), Document(page_content='Some concluding text about Foo', metadata={'Header 1': 'Foo'})]2) Pipelined to another splitter, with html loaded from a web URL:​from langchain.text_splitter import RecursiveCharacterTextSplitterurl = "https://plato.stanford.edu/entries/goedel/"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"), ("h4", "Header 4"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)#for local file use html_splitter.split_text_from_file(<path_to_file>)html_header_splits = html_splitter.split_text_from_url(url)chunk_size = 500chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(html_header_splits)splits[80:85] [Document(page_content='We see that Gödel first tried to reduce the consistency problem for analysis to that of arithmetic. This seemed to require a truth
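The comment in the snippet above mentions `split_text_from_file` for local files; below is a hedged sketch of that variant feeding the same character-level splitter. The `./saved_page.html` path is a placeholder for whatever HTML file you have on disk.

```python
from langchain.text_splitter import HTMLHeaderTextSplitter, RecursiveCharacterTextSplitter

headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3")]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

# Same pipeline as above, but starting from a local HTML file instead of a URL.
# "./saved_page.html" is a placeholder path for illustration.
html_header_splits = html_splitter.split_text_from_file("./saved_page.html")

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=30)
splits = text_splitter.split_documents(html_header_splits)
print(len(splits), splits[0].metadata if splits else {})
```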
1,244
of arithmetic. This seemed to require a truth definition for arithmetic, which in turn led to paradoxes, such as the Liar paradox (“This sentence is false”) and Berry’s paradox (“The least number not defined by an expression consisting of just fourteen English words”). Gödel then noticed that such paradoxes would not necessarily arise if truth were replaced by provability. But this means that arithmetic truth', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='means that arithmetic truth and arithmetic provability are not co-extensive — whence the First Incompleteness Theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='This account of Gödel’s discovery was told to Hao Wang very much after the fact; but in Gödel’s contemporary correspondence with Bernays and Zermelo, essentially the same description of his path to the theorems is given. (See Gödel 2003a and Gödel 2003b respectively.) From those accounts we see that the undefinability of truth in arithmetic, a result credited to Tarski, was likely obtained in some form by Gödel by 1931. But he neither publicized nor published the result; the biases logicians', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='result; the biases logicians had expressed at the time concerning the notion of truth, biases which came vehemently to the fore when Tarski announced his results on the undefinability of truth in formal systems 1935, may have served as a deterrent to Gödel’s publication of that theorem.',
Description and motivation
Description and motivation ->: of arithmetic. This seemed to require a truth definition for arithmetic, which in turn led to paradoxes, such as the Liar paradox (“This sentence is false”) and Berry’s paradox (“The least number not defined by an expression consisting of just fourteen English words”). Gödel then noticed that such paradoxes would not necessarily arise if truth were replaced by provability. But this means that arithmetic truth', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='means that arithmetic truth and arithmetic provability are not co-extensive — whence the First Incompleteness Theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='This account of Gödel’s discovery was told to Hao Wang very much after the fact; but in Gödel’s contemporary correspondence with Bernays and Zermelo, essentially the same description of his path to the theorems is given. (See Gödel 2003a and Gödel 2003b respectively.) From those accounts we see that the undefinability of truth in arithmetic, a result credited to Tarski, was likely obtained in some form by Gödel by 1931. But he neither publicized nor published the result; the biases logicians', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='result; the biases logicians had expressed at the time concerning the notion of truth, biases which came vehemently to the fore when Tarski announced his results on the undefinability of truth in formal systems 1935, may have served as a deterrent to Gödel’s publication of that theorem.',
1,245
to Gödel’s publication of that theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='We now describe the proof of the two theorems, formulating Gödel’s results in Peano arithmetic. Gödel himself used a system related to that defined in Principia Mathematica, but containing Peano arithmetic. In our presentation of the First and Second Incompleteness Theorems we refer to Peano arithmetic as P, following Gödel’s notation.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.2 The proof of the First Incompleteness Theorem'})]Limitations​There can be quite a bit of structural variation from one HTML document to another, and while HTMLHeaderTextSplitter will attempt to attach all "relevant" headers to any given chunk, it can sometimes miss certain headers. For example, the algorithm assumes an informational hierarchy in which headers are always at nodes "above" associated text, i.e. prior siblings, ancestors, and combinations thereof. In the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, while tagged "h1", is in a distinct subtree from the text elements that we'd expect it to be "above"—so we can observe that the "h1" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see "h2" and its associated text): url = "https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)html_header_splits = html_splitter.split_text_from_url(url)print(html_header_splits[1].page_content[:500]) No two El Niño
Description and motivation
Description and motivation ->: to Gödel’s publication of that theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='We now describe the proof of the two theorems, formulating Gödel’s results in Peano arithmetic. Gödel himself used a system related to that defined in Principia Mathematica, but containing Peano arithmetic. In our presentation of the First and Second Incompleteness Theorems we refer to Peano arithmetic as P, following Gödel’s notation.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.2 The proof of the First Incompleteness Theorem'})]Limitations​There can be quite a bit of structural variation from one HTML document to another, and while HTMLHeaderTextSplitter will attempt to attach all "relevant" headers to any given chunk, it can sometimes miss certain headers. For example, the algorithm assumes an informational hierarchy in which headers are always at nodes "above" associated text, i.e. prior siblings, ancestors, and combinations thereof. In the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, while tagged "h1", is in a distinct subtree from the text elements that we'd expect it to be "above"—so we can observe that the "h1" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see "h2" and its associated text): url = "https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)html_header_splits = html_splitter.split_text_from_url(url)print(html_header_splits[1].page_content[:500]) No two El Niño
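One way to observe the limitation described above is to count how many chunks actually carry each header key. This is a small sketch assuming the `html_header_splits` list from the snippet above.

```python
# Quick check for the limitation described above: how many chunks actually
# carry an h1-derived key in their metadata? For this particular page we'd
# expect few or none, since the headline lives in a separate subtree.
with_h1 = [d for d in html_header_splits if "Header 1" in d.metadata]
with_h2 = [d for d in html_header_splits if "Header 2" in d.metadata]
print(f"chunks with Header 1: {len(with_h1)}, with Header 2: {len(with_h2)}")
```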
1,246
No two El Niño winters are the same, but many have temperature and precipitation trends in common. Average conditions during an El Niño winter across the continental US. One of the major reasons is the position of the jet stream, which often shifts south during an El Niño winter. This shift typically brings wetter and cooler weather to the South while the North becomes drier and warmer, according to NOAA. Because the jet stream is essentially a river of air that storms flow through, thePreviousDocument transformersNextSplit by characterDescription and motivationUsage examplesLimitationsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Description and motivation
Description and motivation ->: No two El Niño winters are the same, but many have temperature and precipitation trends in common. Average conditions during an El Niño winter across the continental US. One of the major reasons is the position of the jet stream, which often shifts south during an El Niño winter. This shift typically brings wetter and cooler weather to the South while the North becomes drier and warmer, according to NOAA. Because the jet stream is essentially a river of air that storms flow through, thePreviousDocument transformersNextSplit by characterDescription and motivationUsage examplesLimitationsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,247
Caching | 🦜️🔗 Langchain
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Embeddings can be stored or temporarily cached to avoid needing to recompute them. ->: Caching | 🦜️🔗 Langchain
1,248
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsCachingVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalText embedding modelsCachingOn this pageCachingEmbeddings can be stored or temporarily cached to avoid needing to recompute them.Caching embeddings can be done using a CacheBackedEmbeddings. The cache backed embedder is a wrapper around an embedder that caches
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Embeddings can be stored or temporarily cached to avoid needing to recompute them. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsCachingVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalText embedding modelsCachingOn this pageCachingEmbeddings can be stored or temporarily cached to avoid needing to recompute them.Caching embeddings can be done using a CacheBackedEmbeddings. The cache backed embedder is a wrapper around an embedder that caches
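Before the file-system example that follows, here is a minimal sketch of the wrapper pattern itself, using an in-memory byte store. It assumes an OpenAI API key is available in the environment; any other embedder and key-value store from `langchain.storage` should work the same way.

```python
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import InMemoryStore

underlying = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set
store = InMemoryStore()          # any key-value byte store works here

# Namespacing by model name keeps caches for different models from colliding.
cached = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)

vectors = cached.embed_documents(["hello world"])        # computed, then cached
vectors_again = cached.embed_documents(["hello world"])  # served from the cache
assert vectors == vectors_again
```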
1,249
embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.The main supported way to initialize a CacheBackedEmbeddings is from_bytes_store. This takes in the following parameters:underlying_embedder: The embedder to use for embedding.document_embedding_cache: The cache to use for storing document embeddings.namespace: (optional, defaults to "") The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used.Attention: Be sure to set the namespace parameter to avoid collisions of the same text embedded using different embedding models.from langchain.storage import InMemoryStore, LocalFileStore, RedisStore, UpstashRedisStorefrom langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddingsUsing with a vector store​First, let's see an example that uses the local file system for storing embeddings and uses FAISS vector store for retrieval.from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSunderlying_embeddings = OpenAIEmbeddings()fs = LocalFileStore("./cache/")cached_embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, fs, namespace=underlying_embeddings.model)The cache is empty prior to embedding:list(fs.yield_keys()) []Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader("../state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)Create the vector store:db = FAISS.from_documents(documents, cached_embedder) CPU times: user 608 ms, sys: 58.9 ms, total: 667 ms Wall time: 1.3 sIf we try to create the vector store again, it'll be much faster since it does not need to re-compute any
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Embeddings can be stored or temporarily cached to avoid needing to recompute them. ->: embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.The main supported way to initialize a CacheBackedEmbeddings is from_bytes_store. This takes in the following parameters:underlying_embedder: The embedder to use for embedding.document_embedding_cache: The cache to use for storing document embeddings.namespace: (optional, defaults to "") The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used.Attention: Be sure to set the namespace parameter to avoid collisions of the same text embedded using different embedding models.from langchain.storage import InMemoryStore, LocalFileStore, RedisStore, UpstashRedisStorefrom langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddingsUsing with a vector store​First, let's see an example that uses the local file system for storing embeddings and uses FAISS vector store for retrieval.from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSunderlying_embeddings = OpenAIEmbeddings()fs = LocalFileStore("./cache/")cached_embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, fs, namespace=underlying_embeddings.model)The cache is empty prior to embedding:list(fs.yield_keys()) []Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader("../state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)Create the vector store:db = FAISS.from_documents(documents, cached_embedder) CPU times: user 608 ms, sys: 58.9 ms, total: 667 ms Wall time: 1.3 sIf we try to create the vector store again, it'll be much faster since it does not need to re-compute any
1,250
faster since it does not need to re-compute any embeddings.db2 = FAISS.from_documents(documents, cached_embedder) CPU times: user 33.6 ms, sys: 3.96 ms, total: 37.6 ms Wall time: 36.8 msAnd here are some of the embeddings that got created:list(fs.yield_keys())[:5] ['text-embedding-ada-002614d7cf6-46f1-52fa-9d3a-740c39e7a20e', 'text-embedding-ada-0020fc1ede2-407a-5e14-8f8f-5642214263f5', 'text-embedding-ada-002e4ad20ef-dfaa-5916-9459-f90c6d8e8159', 'text-embedding-ada-002a5ef11e4-0474-5725-8d80-81c91943b37f', 'text-embedding-ada-00281426526-23fe-58be-9e84-6c7c72c8ca9a']In Memory​This section shows how to set up an in memory cache for embeddings. This type of cache is primarily
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Embeddings can be stored or temporarily cached to avoid needing to recompute them. ->: faster since it does not need to re-compute any embeddings.db2 = FAISS.from_documents(documents, cached_embedder) CPU times: user 33.6 ms, sys: 3.96 ms, total: 37.6 ms Wall time: 36.8 msAnd here are some of the embeddings that got created:list(fs.yield_keys())[:5] ['text-embedding-ada-002614d7cf6-46f1-52fa-9d3a-740c39e7a20e', 'text-embedding-ada-0020fc1ede2-407a-5e14-8f8f-5642214263f5', 'text-embedding-ada-002e4ad20ef-dfaa-5916-9459-f90c6d8e8159', 'text-embedding-ada-002a5ef11e4-0474-5725-8d80-81c91943b37f', 'text-embedding-ada-00281426526-23fe-58be-9e84-6c7c72c8ca9a']In Memory​This section shows how to set up an in memory cache for embeddings. This type of cache is primarily
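Since cache keys are prefixed with the namespace, you can also count how many entries belong to a given embedding model. A small sketch, assuming the `fs` store and `underlying_embeddings` object created earlier.

```python
# The cache keys are prefixed with the namespace (here, the embedding model
# name), so we can count how many chunks have been cached for this model.
keys = list(fs.yield_keys())
prefix = underlying_embeddings.model
print(sum(1 for k in keys if k.startswith(prefix)), "entries for", prefix)
```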
1,251
useful for unit tests or prototyping. Do not use this cache if you need to actually store the embeddings.store = InMemoryStore()underlying_embeddings = OpenAIEmbeddings()embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)embeddings = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 10.9 ms, sys: 916 µs, total: 11.8 ms Wall time: 159 msThe second time we try to embed the embedding time is only 2 ms because the embeddings are looked up in the cache.embeddings_from_cache = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 1.67 ms, sys: 342 µs, total: 2.01 ms Wall time: 2.01 msembeddings == embeddings_from_cache TrueFile system​This section covers how to use a file system store.fs = LocalFileStore("./test_cache/")embedder2 = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, fs, namespace=underlying_embeddings.model)embeddings = embedder2.embed_documents(["hello", "goodbye"]) CPU times: user 6.89 ms, sys: 4.89 ms, total: 11.8 ms Wall time: 184 msembeddings = embedder2.embed_documents(["hello", "goodbye"]) CPU times: user 0 ns, sys: 3.24 ms, total: 3.24 ms Wall time: 2.84 msHere are the embeddings that have been persisted to the directory ./test_cache. Notice that the embedder takes a namespace parameter.list(fs.yield_keys()) ['text-embedding-ada-002e885db5b-c0bd-5fbc-88b1-4d1da6020aa5', 'text-embedding-ada-0026ba52e44-59c9-5cc9-a084-284061b13c80']Upstash Redis Store​from langchain.storage.upstash_redis import UpstashRedisStorefrom upstash_redis import RedisURL = "<UPSTASH_REDIS_REST_URL>"TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"redis_client = Redis(url=URL, token=TOKEN)store = UpstashRedisStore(client=redis_client, ttl=None, namespace="test-ns")underlying_embeddings = OpenAIEmbeddings()embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)embeddings =
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Embeddings can be stored or temporarily cached to avoid needing to recompute them. ->: useful for unit tests or prototyping. Do not use this cache if you need to actually store the embeddings.store = InMemoryStore()underlying_embeddings = OpenAIEmbeddings()embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)embeddings = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 10.9 ms, sys: 916 µs, total: 11.8 ms Wall time: 159 msThe second time we try to embed the embedding time is only 2 ms because the embeddings are looked up in the cache.embeddings_from_cache = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 1.67 ms, sys: 342 µs, total: 2.01 ms Wall time: 2.01 msembeddings == embeddings_from_cache TrueFile system​This section covers how to use a file system store.fs = LocalFileStore("./test_cache/")embedder2 = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, fs, namespace=underlying_embeddings.model)embeddings = embedder2.embed_documents(["hello", "goodbye"]) CPU times: user 6.89 ms, sys: 4.89 ms, total: 11.8 ms Wall time: 184 msembeddings = embedder2.embed_documents(["hello", "goodbye"]) CPU times: user 0 ns, sys: 3.24 ms, total: 3.24 ms Wall time: 2.84 msHere are the embeddings that have been persisted to the directory ./test_cache. Notice that the embedder takes a namespace parameter.list(fs.yield_keys()) ['text-embedding-ada-002e885db5b-c0bd-5fbc-88b1-4d1da6020aa5', 'text-embedding-ada-0026ba52e44-59c9-5cc9-a084-284061b13c80']Upstash Redis Store​from langchain.storage.upstash_redis import UpstashRedisStorefrom upstash_redis import RedisURL = "<UPSTASH_REDIS_REST_URL>"TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"redis_client = Redis(url=URL, token=TOKEN)store = UpstashRedisStore(client=redis_client, ttl=None, namespace="test-ns")underlying_embeddings = OpenAIEmbeddings()embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)embeddings =
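To see the cache hit without notebook magics, a plain `time.perf_counter` comparison works too. This is a sketch assuming the `embedder2` file-system-backed embedder from the snippet above; the exact timings will differ on your machine.

```python
import time

def timed_embed(embedder, texts):
    # Rough wall-clock timing; the second call should be much faster
    # because the vectors come back from the cache.
    start = time.perf_counter()
    embedder.embed_documents(texts)
    return time.perf_counter() - start

first = timed_embed(embedder2, ["hello", "goodbye"])
second = timed_embed(embedder2, ["hello", "goodbye"])
print(f"first call: {first:.3f}s, second call: {second:.3f}s")
```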
1,252
= embedder.embed_documents(["welcome", "goodbye"])embeddings = embedder.embed_documents(["welcome", "goodbye"])list(store.yield_keys())list(store.client.scan(0))Redis Store​from langchain.storage import RedisStore# For cache isolation can use a separate DB# Or additional namespacestore = RedisStore(redis_url="redis://localhost:6379", client_kwargs={'db': 2}, namespace='embedding_caches')underlying_embeddings = OpenAIEmbeddings()embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)embeddings = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 3.99 ms, sys: 0 ns, total: 3.99 ms Wall time: 3.5 msembeddings = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 2.47 ms, sys: 767 µs, total: 3.24 ms Wall time: 2.75 mslist(store.yield_keys()) ['text-embedding-ada-002e885db5b-c0bd-5fbc-88b1-4d1da6020aa5', 'text-embedding-ada-0026ba52e44-59c9-5cc9-a084-284061b13c80']list(store.client.scan_iter()) [b'embedding_caches/text-embedding-ada-002e885db5b-c0bd-5fbc-88b1-4d1da6020aa5', b'embedding_caches/text-embedding-ada-0026ba52e44-59c9-5cc9-a084-284061b13c80']PreviousText embedding modelsNextVector storesUsing with a vector storeIn MemoryFile systemUpstash Redis StoreRedis StoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Embeddings can be stored or temporarily cached to avoid needing to recompute them. ->: = embedder.embed_documents(["welcome", "goodbye"])embeddings = embedder.embed_documents(["welcome", "goodbye"])list(store.yield_keys())list(store.client.scan(0))Redis Store​from langchain.storage import RedisStore# For cache isolation can use a separate DB# Or additional namespacestore = RedisStore(redis_url="redis://localhost:6379", client_kwargs={'db': 2}, namespace='embedding_caches')underlying_embeddings = OpenAIEmbeddings()embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)embeddings = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 3.99 ms, sys: 0 ns, total: 3.99 ms Wall time: 3.5 msembeddings = embedder.embed_documents(["hello", "goodbye"]) CPU times: user 2.47 ms, sys: 767 µs, total: 3.24 ms Wall time: 2.75 mslist(store.yield_keys()) ['text-embedding-ada-002e885db5b-c0bd-5fbc-88b1-4d1da6020aa5', 'text-embedding-ada-0026ba52e44-59c9-5cc9-a084-284061b13c80']list(store.client.scan_iter()) [b'embedding_caches/text-embedding-ada-002e885db5b-c0bd-5fbc-88b1-4d1da6020aa5', b'embedding_caches/text-embedding-ada-0026ba52e44-59c9-5cc9-a084-284061b13c80']PreviousText embedding modelsNextVector storesUsing with a vector storeIn MemoryFile systemUpstash Redis StoreRedis StoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,253
WebResearchRetriever | 🦜️🔗 Langchain
Given a query, this retriever will:
Given a query, this retriever will: ->: WebResearchRetriever | 🦜️🔗 Langchain
1,254
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversWebResearchRetrieverOn this pageWebResearchRetrieverGiven a query, this retriever will: Formulate a set of related Google searchesSearch for each Load all the resulting URLsThen embed and perform similarity search with the query on the consolidated page contentfrom langchain.retrievers.web_research import WebResearchRetrieverSimple usage​Specify the LLM to use for Google search query generation.import osfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models.openai import ChatOpenAIfrom langchain.utilities import GoogleSearchAPIWrapper# Vectorstorevectorstore = Chroma(embedding_function=OpenAIEmbeddings(),persist_directory="./chroma_db_oai")# LLMllm = ChatOpenAI(temperature=0)# Search os.environ["GOOGLE_CSE_ID"] = "xxx"os.environ["GOOGLE_API_KEY"] = "xxx"search = GoogleSearchAPIWrapper()# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore, llm=llm, search=search, )Run with citations​We can use RetrievalQAWithSourcesChain to retrieve docs and provide citations.from langchain.chains import RetrievalQAWithSourcesChainuser_input = "How do LLM Powered Autonomous Agents work?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)result = qa_chain({"question":
Given a query, this retriever will:
Given a query, this retriever will: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversWebResearchRetrieverOn this pageWebResearchRetrieverGiven a query, this retriever will: Formulate a set of related Google searchesSearch for each Load all the resulting URLsThen embed and perform similarity search with the query on the consolidated page contentfrom langchain.retrievers.web_research import WebResearchRetrieverSimple usage​Specify the LLM to use for Google search query generation.import osfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models.openai import ChatOpenAIfrom langchain.utilities import GoogleSearchAPIWrapper# Vectorstorevectorstore = Chroma(embedding_function=OpenAIEmbeddings(),persist_directory="./chroma_db_oai")# LLMllm = ChatOpenAI(temperature=0)# Search os.environ["GOOGLE_CSE_ID"] = "xxx"os.environ["GOOGLE_API_KEY"] = "xxx"search = GoogleSearchAPIWrapper()# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore, llm=llm, search=search, )Run with citations​We can use RetrievalQAWithSourcesChain to retrieve docs and provide citations.from langchain.chains import RetrievalQAWithSourcesChainuser_input = "How do LLM Powered Autonomous Agents work?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)result = qa_chain({"question":
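You can also call the retriever directly and inspect where each document came from; web loaders conventionally record the page URL under the `source` metadata key. A minimal sketch assuming the `web_research_retriever` configured above (and valid Google and OpenAI credentials).

```python
# Pull the documents the retriever found for a question and list the page
# each chunk was loaded from.
docs = web_research_retriever.get_relevant_documents(
    "How do LLM Powered Autonomous Agents work?"
)
for doc in docs:
    print(doc.metadata.get("source"), "-", doc.page_content[:80])
```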
1,255
= qa_chain({"question": user_input})result Fetching pages: 100%|###################################################################################################################################| 1/1 [00:00<00:00, 3.33it/s] {'question': 'How do LLM Powered Autonomous Agents work?', 'answer': "LLM Powered Autonomous Agents work by using LLM (large language model) as the core controller of the agent's brain. It is complemented by several key components, including planning, memory, and tool use. The agent system is designed to be a powerful general problem solver. \n", 'sources': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}Run with logging‚ÄãHere, we use get_relevant_documents method to return docs.# Runimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)user_input = "What is Task Decomposition in LLM Powered Autonomous Agents?"docs = web_research_retriever.get_relevant_documents(user_input) INFO:langchain.retrievers.web_research:Generating questions for Google Search ... INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '3. What role does task decomposition play in the functioning of LLM powered autonomous agents?\n', '4. Why is task decomposition important for LLM powered autonomous agents?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '3. What role does task decomposition play in the functioning of LLM powered autonomous agents?\n', '4. Why is task decomposition important for LLM powered
Given a query, this retriever will:
Given a query, this retriever will: ->: = qa_chain({"question": user_input})result Fetching pages: 100%|###################################################################################################################################| 1/1 [00:00<00:00, 3.33it/s] {'question': 'How do LLM Powered Autonomous Agents work?', 'answer': "LLM Powered Autonomous Agents work by using LLM (large language model) as the core controller of the agent's brain. It is complemented by several key components, including planning, memory, and tool use. The agent system is designed to be a powerful general problem solver. \n", 'sources': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}Run with logging‚ÄãHere, we use get_relevant_documents method to return docs.# Runimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)user_input = "What is Task Decomposition in LLM Powered Autonomous Agents?"docs = web_research_retriever.get_relevant_documents(user_input) INFO:langchain.retrievers.web_research:Generating questions for Google Search ... INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '3. What role does task decomposition play in the functioning of LLM powered autonomous agents?\n', '4. Why is task decomposition important for LLM powered autonomous agents?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '3. What role does task decomposition play in the functioning of LLM powered autonomous agents?\n', '4. Why is task decomposition important for LLM powered
1,256
is task decomposition important for LLM powered autonomous agents?\n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... In a LLM-powered autonomous agent system, LLM functions as the ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Agent System Overview In a LLM-powered autonomous agent system, ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: []Generate answer using retrieved docs‚ÄãWe can use load_qa_chain for QA using the retrieved
Given a query, this retriever will:
Given a query, this retriever will: ->: is task decomposition important for LLM powered autonomous agents?\n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... In a LLM-powered autonomous agent system, LLM functions as the ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Agent System Overview In a LLM-powered autonomous agent system, ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: []Generate answer using retrieved docs‚ÄãWe can use load_qa_chain for QA using the retrieved
1,257
can use load_qa_chain for QA using the retrieved docs.from langchain.chains.question_answering import load_qa_chainchain = load_qa_chain(llm, chain_type="stuff")output = chain({"input_documents": docs, "question": user_input},return_only_outputs=True)output['output_text'] 'Task decomposition in LLM-powered autonomous agents refers to the process of breaking down a complex task into smaller, more manageable subgoals. This allows the agent to efficiently handle and execute the individual steps required to complete the overall task. By decomposing the task, the agent can prioritize and organize its actions, making it easier to plan and execute the necessary steps towards achieving the desired outcome.'More flexibility‚ÄãPass an LLM chain with custom prompt and output parsing.import osimport refrom typing import Listfrom langchain.chains import LLMChainfrom pydantic import BaseModel, Fieldfrom langchain.prompts import PromptTemplatefrom langchain.output_parsers.pydantic import PydanticOutputParser# LLMChainsearch_prompt = PromptTemplate( input_variables=["question"], template="""You are an assistant tasked with improving Google search results. Generate FIVE Google search queries that are similar to this question. The output should be a numbered list of questions and each should have a question mark at the end: {question}""",)class LineList(BaseModel): """List of questions.""" lines: List[str] = Field(description="Questions")class QuestionListOutputParser(PydanticOutputParser): """Output parser for a list of numbered questions.""" def __init__(self) -> None: super().__init__(pydantic_object=LineList) def parse(self, text: str) -> LineList: lines = re.findall(r"\d+\..*?\n", text) return LineList(lines=lines) llm_chain = LLMChain( llm=llm, prompt=search_prompt, output_parser=QuestionListOutputParser(), )# Initializeweb_research_retriever_llm_chain = WebResearchRetriever(
Given a query, this retriever will:
Given a query, this retriever will: ->: can use load_qa_chain for QA using the retrieved docs.from langchain.chains.question_answering import load_qa_chainchain = load_qa_chain(llm, chain_type="stuff")output = chain({"input_documents": docs, "question": user_input},return_only_outputs=True)output['output_text'] 'Task decomposition in LLM-powered autonomous agents refers to the process of breaking down a complex task into smaller, more manageable subgoals. This allows the agent to efficiently handle and execute the individual steps required to complete the overall task. By decomposing the task, the agent can prioritize and organize its actions, making it easier to plan and execute the necessary steps towards achieving the desired outcome.'More flexibility‚ÄãPass an LLM chain with custom prompt and output parsing.import osimport refrom typing import Listfrom langchain.chains import LLMChainfrom pydantic import BaseModel, Fieldfrom langchain.prompts import PromptTemplatefrom langchain.output_parsers.pydantic import PydanticOutputParser# LLMChainsearch_prompt = PromptTemplate( input_variables=["question"], template="""You are an assistant tasked with improving Google search results. Generate FIVE Google search queries that are similar to this question. The output should be a numbered list of questions and each should have a question mark at the end: {question}""",)class LineList(BaseModel): """List of questions.""" lines: List[str] = Field(description="Questions")class QuestionListOutputParser(PydanticOutputParser): """Output parser for a list of numbered questions.""" def __init__(self) -> None: super().__init__(pydantic_object=LineList) def parse(self, text: str) -> LineList: lines = re.findall(r"\d+\..*?\n", text) return LineList(lines=lines) llm_chain = LLMChain( llm=llm, prompt=search_prompt, output_parser=QuestionListOutputParser(), )# Initializeweb_research_retriever_llm_chain = WebResearchRetriever(
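To see what the custom parser produces on its own, you can feed it a hand-written numbered list. A small sketch assuming the `QuestionListOutputParser` class defined above; the sample completion text is made up for illustration.

```python
# Exercise the custom parser in isolation: it pulls out the numbered lines
# from a raw LLM completion and wraps them in a LineList.
sample_completion = "1. What is task decomposition?\n2. Why decompose tasks?\n"
parser = QuestionListOutputParser()
parsed = parser.parse(sample_completion)
print(parsed.lines)  # ['1. What is task decomposition?\n', '2. Why decompose tasks?\n']
```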
1,258
= WebResearchRetriever( vectorstore=vectorstore, llm_chain=llm_chain, search=search, )# Rundocs = web_research_retriever_llm_chain.get_relevant_documents(user_input) INFO:langchain.retrievers.web_research:Generating questions for Google Search ... INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How do LLM powered autonomous agents use task decomposition?\n', '2. Why is task decomposition important for LLM powered autonomous agents?\n', '3. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '4. What are the benefits of task decomposition in LLM powered autonomous agents?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How do LLM powered autonomous agents use task decomposition?\n', '2. Why is task decomposition important for LLM powered autonomous agents?\n', '3. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '4. What are the benefits of task decomposition in LLM powered autonomous agents?\n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting
Given a query, this retriever will:
Given a query, this retriever will: ->: = WebResearchRetriever( vectorstore=vectorstore, llm_chain=llm_chain, search=search, )# Rundocs = web_research_retriever_llm_chain.get_relevant_documents(user_input) INFO:langchain.retrievers.web_research:Generating questions for Google Search ... INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How do LLM powered autonomous agents use task decomposition?\n', '2. Why is task decomposition important for LLM powered autonomous agents?\n', '3. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '4. What are the benefits of task decomposition in LLM powered autonomous agents?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How do LLM powered autonomous agents use task decomposition?\n', '2. Why is task decomposition important for LLM powered autonomous agents?\n', '3. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n', '4. What are the benefits of task decomposition in LLM powered autonomous agents?\n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting
1,259
can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:New URLs to load: ['https://lilianweng.github.io/posts/2023-06-23-agent/'] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls ... Fetching pages: 100%|###################################################################################################################################| 1/1 [00:00<00:00, 6.32it/s]len(docs) 1Run locally‚ÄãSpecify LLM and embeddings that will run locally (e.g., on your laptop).from langchain.llms import LlamaCppfrom langchain.embeddings import GPT4AllEmbeddingsfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlern_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])llama = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin",
Given a query, this retriever will:
Given a query, this retriever will: ->: can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:New URLs to load: ['https://lilianweng.github.io/posts/2023-06-23-agent/'] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls ... Fetching pages: 100%|###################################################################################################################################| 1/1 [00:00<00:00, 6.32it/s]len(docs) 1Run locally‚ÄãSpecify LLM and embeddings that will run locally (e.g., on your laptop).from langchain.llms import LlamaCppfrom langchain.embeddings import GPT4AllEmbeddingsfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlern_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])llama = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin",
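The local pieces can then be wired into the same retriever interface as before. A sketch assuming the `llama`, `vectorstore_llama`, and `search` objects from the snippets above; downloading and running the 13B model locally is the slow part, not the retriever itself.

```python
# Wiring the local pieces together: the llama.cpp model generates the search
# queries and the GPT4All-embedding-backed Chroma store holds the page chunks.
web_research_retriever_local = WebResearchRetriever.from_llm(
    vectorstore=vectorstore_llama,
    llm=llama,
    search=search,
)
local_docs = web_research_retriever_local.get_relevant_documents(
    "What is Task Decomposition in LLM Powered Autonomous Agents?"
)
print(len(local_docs))
```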
1,260
n_gpu_layers=n_gpu_layers, n_batch=n_batch, n_ctx=4096, # Context window max_tokens=1000, # Max tokens to generate f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,)vectorstore_llama = Chroma(embedding_function=GPT4AllEmbeddings(),persist_directory="./chroma_db_llama") llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 4096 llama_model_load_internal: n_embd = 5120 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 40 llama_model_load_internal: n_layer = 40 llama_model_load_internal: n_rot = 128 llama_model_load_internal: freq_base = 10000.0 llama_model_load_internal: freq_scale = 1 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 13824 llama_model_load_internal: model size = 13B llama_model_load_internal: ggml ctx size = 0.09 MB llama_model_load_internal: mem required = 9132.71 MB (+ 1608.00 MB per state) llama_new_context_with_model: kv self size = 3200.00 MB ggml_metal_init: allocating Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin llama_new_context_with_model: max tensor size = 87.89 MB ggml_metal_init: using MPS ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal' ggml_metal_init: loaded kernel_add 0x110fbd600 ggml_metal_init: loaded kernel_mul 0x110fbeb30 ggml_metal_init: loaded kernel_mul_row 0x110fbf350 ggml_metal_init: loaded kernel_scale 0x110fbf9e0 ggml_metal_init: loaded kernel_silu
Given a query, this retriever will:
Given a query, this retriever will: ->: n_gpu_layers=n_gpu_layers, n_batch=n_batch, n_ctx=4096, # Context window max_tokens=1000, # Max tokens to generate f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,)vectorstore_llama = Chroma(embedding_function=GPT4AllEmbeddings(),persist_directory="./chroma_db_llama") llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 4096 llama_model_load_internal: n_embd = 5120 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 40 llama_model_load_internal: n_layer = 40 llama_model_load_internal: n_rot = 128 llama_model_load_internal: freq_base = 10000.0 llama_model_load_internal: freq_scale = 1 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 13824 llama_model_load_internal: model size = 13B llama_model_load_internal: ggml ctx size = 0.09 MB llama_model_load_internal: mem required = 9132.71 MB (+ 1608.00 MB per state) llama_new_context_with_model: kv self size = 3200.00 MB ggml_metal_init: allocating Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin llama_new_context_with_model: max tensor size = 87.89 MB ggml_metal_init: using MPS ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal' ggml_metal_init: loaded kernel_add 0x110fbd600 ggml_metal_init: loaded kernel_mul 0x110fbeb30 ggml_metal_init: loaded kernel_mul_row 0x110fbf350 ggml_metal_init: loaded kernel_scale 0x110fbf9e0 ggml_metal_init: loaded kernel_silu
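Before wiring the local model into a retriever, it can help to confirm that the LlamaCpp model and the GPT4All embeddings defined above actually load and respond. The short sketch below is not part of the original notebook; the prompt string and printed checks are illustrative only.
# Sanity-check sketch for the local stack defined above (illustrative, not from the original notebook).
test_completion = llama("Name one way to break a complex task into smaller subtasks.")
print(test_completion)
test_vector = GPT4AllEmbeddings().embed_query("task decomposition")
print(len(test_vector))  # dimensionality of the local embedding model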
1,261
ggml_metal_init: loaded kernel_silu 0x110fc0150 ggml_metal_init: loaded kernel_relu 0x110fbd950 ggml_metal_init: loaded kernel_gelu 0x110fbdbb0 ggml_metal_init: loaded kernel_soft_max 0x110fc14d0 ggml_metal_init: loaded kernel_diag_mask_inf 0x110fc1980 ggml_metal_init: loaded kernel_get_rows_f16 0x110fc22a0 ggml_metal_init: loaded kernel_get_rows_q4_0 0x110fc2ad0 ggml_metal_init: loaded kernel_get_rows_q4_1 0x110fc3260 ggml_metal_init: loaded kernel_get_rows_q2_K 0x110fc3ad0 ggml_metal_init: loaded kernel_get_rows_q3_K 0x110fc41c0 ggml_metal_init: loaded kernel_get_rows_q4_K 0x110fc48c0 ggml_metal_init: loaded kernel_get_rows_q5_K 0x110fc4fa0 ggml_metal_init: loaded kernel_get_rows_q6_K 0x110fc56a0 ggml_metal_init: loaded kernel_rms_norm 0x110fc5da0 ggml_metal_init: loaded kernel_norm 0x110fc64d0 ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x2a5c19990 ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x2a5c1d4a0 ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x2a5c19fc0 ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x2a5c1dcc0 ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x2a5c1e420 ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x2a5c1edc0 ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x2a5c1fd90 ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x2a5c20540 ggml_metal_init: loaded kernel_rope 0x2a5c20d40 ggml_metal_init: loaded kernel_alibi_f32 0x2a5c21730 ggml_metal_init: loaded kernel_cpy_f32_f16
Given a query, this retriever will:
Given a query, this retriever will: ->: ggml_metal_init: loaded kernel_silu 0x110fc0150 ggml_metal_init: loaded kernel_relu 0x110fbd950 ggml_metal_init: loaded kernel_gelu 0x110fbdbb0 ggml_metal_init: loaded kernel_soft_max 0x110fc14d0 ggml_metal_init: loaded kernel_diag_mask_inf 0x110fc1980 ggml_metal_init: loaded kernel_get_rows_f16 0x110fc22a0 ggml_metal_init: loaded kernel_get_rows_q4_0 0x110fc2ad0 ggml_metal_init: loaded kernel_get_rows_q4_1 0x110fc3260 ggml_metal_init: loaded kernel_get_rows_q2_K 0x110fc3ad0 ggml_metal_init: loaded kernel_get_rows_q3_K 0x110fc41c0 ggml_metal_init: loaded kernel_get_rows_q4_K 0x110fc48c0 ggml_metal_init: loaded kernel_get_rows_q5_K 0x110fc4fa0 ggml_metal_init: loaded kernel_get_rows_q6_K 0x110fc56a0 ggml_metal_init: loaded kernel_rms_norm 0x110fc5da0 ggml_metal_init: loaded kernel_norm 0x110fc64d0 ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x2a5c19990 ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x2a5c1d4a0 ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x2a5c19fc0 ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x2a5c1dcc0 ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x2a5c1e420 ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x2a5c1edc0 ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x2a5c1fd90 ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x2a5c20540 ggml_metal_init: loaded kernel_rope 0x2a5c20d40 ggml_metal_init: loaded kernel_alibi_f32 0x2a5c21730 ggml_metal_init: loaded kernel_cpy_f32_f16
1,262
loaded kernel_cpy_f32_f16 0x2a5c21ab0 ggml_metal_init: loaded kernel_cpy_f32_f32 0x2a5c22080 ggml_metal_init: loaded kernel_cpy_f16_f16 0x2a5c231d0 ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: maxTransferRate = built-in GPU ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6984.52 / 21845.34) ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1040.00 MB, ( 8024.52 / 21845.34) ggml_metal_add_buffer: allocated 'kv ' buffer, size = 3202.00 MB, (11226.52 / 21845.34) ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 597.00 MB, (11823.52 / 21845.34) AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (12335.52 / 21845.34) objc[33471]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib (0x2c7368208) and /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x5ebf48208). One of the two will be used. Which one is undefined. objc[33471]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib (0x2c7368208) and /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x5ec374208). One of the two will be used. Which one is undefined.We supplied StreamingStdOutCallbackHandler(), so model outputs (e.g., generated questions) are streamed. We also have logging on, so we see them there too.from
Given a query, this retriever will:
Given a query, this retriever will: ->: loaded kernel_cpy_f32_f16 0x2a5c21ab0 ggml_metal_init: loaded kernel_cpy_f32_f32 0x2a5c22080 ggml_metal_init: loaded kernel_cpy_f16_f16 0x2a5c231d0 ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: maxTransferRate = built-in GPU ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6984.52 / 21845.34) ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1040.00 MB, ( 8024.52 / 21845.34) ggml_metal_add_buffer: allocated 'kv ' buffer, size = 3202.00 MB, (11226.52 / 21845.34) ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 597.00 MB, (11823.52 / 21845.34) AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (12335.52 / 21845.34) objc[33471]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib (0x2c7368208) and /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x5ebf48208). One of the two will be used. Which one is undefined. objc[33471]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib (0x2c7368208) and /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x5ec374208). One of the two will be used. Which one is undefined.We supplied StreamingStdOutCallbackHandler(), so model outputs (e.g., generated questions) are streamed. We also have logging on, so we see them there too.from
1,263
have logging on, so we see them there too.from langchain.chains import RetrievalQAWithSourcesChain# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore_llama, llm=llama, search=search, )# Runuser_input = "What is Task Decomposition in LLM Powered Autonomous Agents?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llama,retriever=web_research_retriever)result = qa_chain({"question": user_input})result INFO:langchain.retrievers.web_research:Generating questions for Google Search ... Sure, here are five Google search queries that are similar to "What is Task Decomposition in LLM Powered Autonomous Agents?": 1. How does Task Decomposition work in LLM Powered Autonomous Agents? 2. What are the benefits of using Task Decomposition in LLM Powered Autonomous Agents? 3. Can you provide examples of Task Decomposition in LLM Powered Autonomous Agents? 4. How does Task Decomposition improve the performance of LLM Powered Autonomous Agents? 5. What are some common challenges or limitations of using Task Decomposition in LLM Powered Autonomous Agents, and how can they be addressed? llama_print_timings: load time = 8585.01 ms llama_print_timings: sample time = 124.24 ms / 164 runs ( 0.76 ms per token, 1320.04 tokens per second) llama_print_timings: prompt eval time = 8584.83 ms / 101 tokens ( 85.00 ms per token, 11.76 tokens per second) llama_print_timings: eval time = 7268.55 ms / 163 runs ( 44.59 ms per token, 22.43 tokens per second) llama_print_timings: total time = 16236.13 ms INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How does Task Decomposition work in LLM Powered Autonomous Agents? \n', '2. What are the benefits of using Task Decomposition in LLM Powered Autonomous Agents? \n',
Given a query, this retriever will:
Given a query, this retriever will: ->: have logging on, so we see them there too.from langchain.chains import RetrievalQAWithSourcesChain# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore_llama, llm=llama, search=search, )# Runuser_input = "What is Task Decomposition in LLM Powered Autonomous Agents?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llama,retriever=web_research_retriever)result = qa_chain({"question": user_input})result INFO:langchain.retrievers.web_research:Generating questions for Google Search ... Sure, here are five Google search queries that are similar to "What is Task Decomposition in LLM Powered Autonomous Agents?": 1. How does Task Decomposition work in LLM Powered Autonomous Agents? 2. What are the benefits of using Task Decomposition in LLM Powered Autonomous Agents? 3. Can you provide examples of Task Decomposition in LLM Powered Autonomous Agents? 4. How does Task Decomposition improve the performance of LLM Powered Autonomous Agents? 5. What are some common challenges or limitations of using Task Decomposition in LLM Powered Autonomous Agents, and how can they be addressed? llama_print_timings: load time = 8585.01 ms llama_print_timings: sample time = 124.24 ms / 164 runs ( 0.76 ms per token, 1320.04 tokens per second) llama_print_timings: prompt eval time = 8584.83 ms / 101 tokens ( 85.00 ms per token, 11.76 tokens per second) llama_print_timings: eval time = 7268.55 ms / 163 runs ( 44.59 ms per token, 22.43 tokens per second) llama_print_timings: total time = 16236.13 ms INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How does Task Decomposition work in LLM Powered Autonomous Agents? \n', '2. What are the benefits of using Task Decomposition in LLM Powered Autonomous Agents? \n',
1,264
in LLM Powered Autonomous Agents? \n', '3. Can you provide examples of Task Decomposition in LLM Powered Autonomous Agents? \n', '4. How does Task Decomposition improve the performance of LLM Powered Autonomous Agents? \n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How does Task Decomposition work in LLM Powered Autonomous Agents? \n', '2. What are the benefits of using Task Decomposition in LLM Powered Autonomous Agents? \n', '3. Can you provide examples of Task Decomposition in LLM Powered Autonomous Agents? \n', '4. How does Task Decomposition improve the performance of LLM Powered Autonomous Agents? \n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... A complicated task usually involves many steps. An agent needs to know what they are and plan ahead. Task Decomposition#. Chain of thought
Given a query, this retriever will:
Given a query, this retriever will: ->: in LLM Powered Autonomous Agents? \n', '3. Can you provide examples of Task Decomposition in LLM Powered Autonomous Agents? \n', '4. How does Task Decomposition improve the performance of LLM Powered Autonomous Agents? \n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How does Task Decomposition work in LLM Powered Autonomous Agents? \n', '2. What are the benefits of using Task Decomposition in LLM Powered Autonomous Agents? \n', '3. Can you provide examples of Task Decomposition in LLM Powered Autonomous Agents? \n', '4. How does Task Decomposition improve the performance of LLM Powered Autonomous Agents? \n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... A complicated task usually involves many steps. An agent needs to know what they are and plan ahead. Task Decomposition#. Chain of thought
1,265
plan ahead. Task Decomposition#. Chain of thought (CoT;\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Agent System Overview In a LLM-powered autonomous agent system, ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: ['https://lilianweng.github.io/posts/2023-06-23-agent/'] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls ... Fetching pages: 100%|###################################################################################################################################| 1/1 [00:00<00:00, 10.49it/s] Llama.generate: prefix-match hit The content discusses Task Decomposition in LLM Powered Autonomous Agents, which involves breaking down large tasks into smaller, manageable subgoals for efficient handling of complex tasks. SOURCES: https://lilianweng.github.io/posts/2023-06-23-agent/ llama_print_timings: load time = 8585.01 ms llama_print_timings: sample time = 52.88 ms / 72 runs ( 0.73 ms per token, 1361.55 tokens per second) llama_print_timings: prompt eval time = 125925.13 ms / 2358 tokens ( 53.40 ms per token, 18.73 tokens per second) llama_print_timings: eval time = 3504.16 ms / 71 runs ( 49.35 ms per token, 20.26 tokens per second) llama_print_timings: total time = 129584.60 ms {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'answer': ' The content discusses Task Decomposition in LLM Powered Autonomous Agents, which involves breaking down large tasks into smaller, manageable subgoals for efficient handling of complex tasks.\n', 'sources':
Given a query, this retriever will:
Given a query, this retriever will: ->: plan ahead. Task Decomposition#. Chain of thought (CoT;\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Agent System Overview In a LLM-powered autonomous agent system, ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: ['https://lilianweng.github.io/posts/2023-06-23-agent/'] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls ... Fetching pages: 100%|###################################################################################################################################| 1/1 [00:00<00:00, 10.49it/s] Llama.generate: prefix-match hit The content discusses Task Decomposition in LLM Powered Autonomous Agents, which involves breaking down large tasks into smaller, manageable subgoals for efficient handling of complex tasks. SOURCES: https://lilianweng.github.io/posts/2023-06-23-agent/ llama_print_timings: load time = 8585.01 ms llama_print_timings: sample time = 52.88 ms / 72 runs ( 0.73 ms per token, 1361.55 tokens per second) llama_print_timings: prompt eval time = 125925.13 ms / 2358 tokens ( 53.40 ms per token, 18.73 tokens per second) llama_print_timings: eval time = 3504.16 ms / 71 runs ( 49.35 ms per token, 20.26 tokens per second) llama_print_timings: total time = 129584.60 ms {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'answer': ' The content discusses Task Decomposition in LLM Powered Autonomous Agents, which involves breaking down large tasks into smaller, manageable subgoals for efficient handling of complex tasks.\n', 'sources':
1,266
handling of complex tasks.\n', 'sources': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}PreviousVector store-backed retrieverNextIndexingSimple usageMore flexibilityRun locallyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Given a query, this retriever will:
Given a query, this retriever will: ->: handling of complex tasks.\n', 'sources': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}PreviousVector store-backed retrieverNextIndexingSimple usageMore flexibilityRun locallyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,267
Time-weighted vector store retriever | 🦜️🔗 Langchain
This retriever uses a combination of semantic similarity and a time decay.
This retriever uses a combination of semantic similarity and a time decay. ->: Time-weighted vector store retriever | 🦜️🔗 Langchain
1,268
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversTime-weighted vector store retrieverOn this pageTime-weighted vector store retrieverThis retriever uses a combination of semantic similarity and a time decay.The algorithm for scoring them is:semantic_similarity + (1.0 - decay_rate) ^ hours_passedNotably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain "fresh".import faissfrom datetime import datetime, timedeltafrom langchain.docstore import InMemoryDocstorefrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.retrievers import TimeWeightedVectorStoreRetrieverfrom langchain.schema import Documentfrom langchain.vectorstores import FAISSLow decay rate​A low decay rate (in this example, to be extreme, we will set it close to 0) means memories will be "remembered" for longer. A decay rate of 0 means memories will never be forgotten, making this retriever equivalent to the vector lookup.# Define your embedding modelembeddings_model = OpenAIEmbeddings()# Initialize the vectorstore as emptyembedding_size = 1536index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1)yesterday
This retriever uses a combination of semantic similarity and a time decay.
This retriever uses a combination of semantic similarity and a time decay. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversTime-weighted vector store retrieverOn this pageTime-weighted vector store retrieverThis retriever uses a combination of semantic similarity and a time decay.The algorithm for scoring them is:semantic_similarity + (1.0 - decay_rate) ^ hours_passedNotably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain "fresh".import faissfrom datetime import datetime, timedeltafrom langchain.docstore import InMemoryDocstorefrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.retrievers import TimeWeightedVectorStoreRetrieverfrom langchain.schema import Documentfrom langchain.vectorstores import FAISSLow decay rate​A low decay rate (in this example, to be extreme, we will set it close to 0) means memories will be "remembered" for longer. A decay rate of 0 means memories will never be forgotten, making this retriever equivalent to the vector lookup.# Define your embedding modelembeddings_model = OpenAIEmbeddings()# Initialize the vectorstore as emptyembedding_size = 1536index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1)yesterday
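The scoring rule above is easy to check by hand. The following sketch (illustrative numbers only, not part of the original page) shows how the recency term behaves for a document last accessed 24 hours ago at a few different decay rates:
# score = semantic_similarity + (1.0 - decay_rate) ** hours_passed
semantic_similarity = 0.5   # assumed query/document similarity
hours_passed = 24           # document last accessed a day ago
for decay_rate in (1e-25, 0.5, 0.999):
    recency = (1.0 - decay_rate) ** hours_passed
    print(f"decay_rate={decay_rate}: recency={recency:.3e}, score={semantic_similarity + recency:.3f}")
# With a decay rate near 0 the recency term stays near 1.0 for every document, so nothing is ever
# "forgotten"; with a decay rate near 1 it collapses toward 0 within hours, so only very recently
# accessed documents get a meaningful recency boost.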
1,269
k=1)yesterday = datetime.now() - timedelta(days=1)retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])retriever.add_documents([Document(page_content="hello foo")]) ['d7f85756-2371-4bdf-9140-052780a0f9b3']# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enoughretriever.get_relevant_documents("hello world") [Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 678341), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]High decay rate​With a high decay rate (e.g., several 9's), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.# Define your embedding modelembeddings_model = OpenAIEmbeddings()# Initialize the vectorstore as emptyembedding_size = 1536index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1)yesterday = datetime.now() - timedelta(days=1)retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])retriever.add_documents([Document(page_content="hello foo")]) ['40011466-5bbe-4101-bfd1-e22e7f505de2']# "Hello Foo" is returned first because "hello world" is mostly forgottenretriever.get_relevant_documents("hello world") [Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]Virtual time​Using some utils in LangChain, you can mock out the time component.from langchain.utils import mock_nowimport datetime# Notice the last access time is that date timewith mock_now(datetime.datetime(2011, 2, 3,
This retriever uses a combination of semantic similarity and a time decay.
This retriever uses a combination of semantic similarity and a time decay. ->: k=1)yesterday = datetime.now() - timedelta(days=1)retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])retriever.add_documents([Document(page_content="hello foo")]) ['d7f85756-2371-4bdf-9140-052780a0f9b3']# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enoughretriever.get_relevant_documents("hello world") [Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 678341), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]High decay rate​With a high decay rate (e.g., several 9's), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.# Define your embedding modelembeddings_model = OpenAIEmbeddings()# Initialize the vectorstore as emptyembedding_size = 1536index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1)yesterday = datetime.now() - timedelta(days=1)retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])retriever.add_documents([Document(page_content="hello foo")]) ['40011466-5bbe-4101-bfd1-e22e7f505de2']# "Hello Foo" is returned first because "hello world" is mostly forgottenretriever.get_relevant_documents("hello world") [Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]Virtual time​Using some utils in LangChain, you can mock out the time component.from langchain.utils import mock_nowimport datetime# Notice the last access time is that date timewith mock_now(datetime.datetime(2011, 2, 3,
1,270
timewith mock_now(datetime.datetime(2011, 2, 3, 10, 11)): print(retriever.get_relevant_documents("hello world")) [Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2011, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]PreviousSelf-queryingNextVector store-backed retrieverLow decay rateHigh decay rateVirtual timeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This retriever uses a combination of semantic similarity and a time decay.
This retriever uses a combination of semantic similarity and a time decay. ->: timewith mock_now(datetime.datetime(2011, 2, 3, 10, 11)): print(retriever.get_relevant_documents("hello world")) [Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2011, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]PreviousSelf-queryingNextVector store-backed retrieverLow decay rateHigh decay rateVirtual timeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,271
Contextual compression | 🦜️🔗 Langchain
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: Contextual compression | 🦜️🔗 Langchain
1,272
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversContextual compressionOn this pageContextual compressionOne challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.To use the Contextual Compression Retriever, you'll need:a base retrievera Document CompressorThe Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.Get started​# Helper function for printing docsdef pretty_print_docs(docs): print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" +
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversContextual compressionOn this pageContextual compressionOne challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.To use the Contextual Compression Retriever, you'll need:a base retrievera Document CompressorThe Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.Get started​# Helper function for printing docsdef pretty_print_docs(docs): print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" +
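To make the base retriever / Document Compressor split concrete, here is a toy compressor that simply truncates each retrieved document. It is only meant to illustrate the interface the ContextualCompressionRetriever expects; the built-in compressors shown below are what you would normally use, and the BaseDocumentCompressor import path and compress_documents signature are assumptions based on the 2023 langchain releases, to be verified against your installed version.
# Toy Document Compressor (illustrative sketch; interface details assumed).
from typing import Any, Optional, Sequence
from langchain.schema import Document
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor

class TruncatingCompressor(BaseDocumentCompressor):
    """Keep only the first `max_chars` characters of each retrieved document."""
    max_chars: int = 300

    def compress_documents(self, documents: Sequence[Document], query: str, callbacks: Optional[Any] = None) -> Sequence[Document]:
        # Shorten each document; a real compressor would use the query to decide what to keep.
        return [Document(page_content=d.page_content[: self.max_chars], metadata=d.metadata) for d in documents]

    async def acompress_documents(self, documents: Sequence[Document], query: str, callbacks: Optional[Any] = None) -> Sequence[Document]:
        return self.compress_documents(documents, query, callbacks)
# Usage (hypothetical): ContextualCompressionRetriever(base_compressor=TruncatingCompressor(), base_retriever=retriever), with retriever defined as in the next section.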
1,273
* 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))Using a vanilla vector store retriever​Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.from langchain.text_splitter import CharacterTextSplitterfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSdocuments = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")pretty_print_docs(docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: A former top litigator in private practice. A
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))Using a vanilla vector store retriever​Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.from langchain.text_splitter import CharacterTextSplitterfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSdocuments = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")pretty_print_docs(docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: A former top litigator in private practice. A
1,274
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. ---------------------------------------------------------------------------------------------------- Document 3: And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. ---------------------------------------------------------------------------------------------------- Document 3: And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic.
1,275
together. First, beat the opioid epidemic. ---------------------------------------------------------------------------------------------------- Document 4: Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.Adding contextual compression with an LLMChainExtractor​Now let's wrap our base retriever with a ContextualCompressionRetriever. We'll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.from langchain.llms import OpenAIfrom langchain.retrievers import ContextualCompressionRetrieverfrom langchain.retrievers.document_compressors import LLMChainExtractorllm = OpenAI(temperature=0)compressor = LLMChainExtractor.from_llm(llm)compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: "One of the most serious constitutional responsibilities a President has is nominating
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: together. First, beat the opioid epidemic. ---------------------------------------------------------------------------------------------------- Document 4: Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.Adding contextual compression with an LLMChainExtractor​Now let's wrap our base retriever with a ContextualCompressionRetriever. We'll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.from langchain.llms import OpenAIfrom langchain.retrievers import ContextualCompressionRetrieverfrom langchain.retrievers.document_compressors import LLMChainExtractorllm = OpenAI(temperature=0)compressor = LLMChainExtractor.from_llm(llm)compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: "One of the most serious constitutional responsibilities a President has is nominating
1,276
responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence." ---------------------------------------------------------------------------------------------------- Document 2: "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."More built-in compressors: filters​LLMChainFilter​The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.from langchain.retrievers.document_compressors import LLMChainFilter_filter = LLMChainFilter.from_llm(llm)compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence." ---------------------------------------------------------------------------------------------------- Document 2: "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."More built-in compressors: filters​LLMChainFilter​The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.from langchain.retrievers.document_compressors import LLMChainFilter_filter = LLMChainFilter.from_llm(llm)compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United
1,277
has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.EmbeddingsFilter​Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.retrievers.document_compressors import EmbeddingsFilterembeddings = OpenAIEmbeddings()embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: A
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.EmbeddingsFilter​Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.retrievers.document_compressors import EmbeddingsFilterembeddings = OpenAIEmbeddings()embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: A
1,278
Document 2: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. ---------------------------------------------------------------------------------------------------- Document 3: And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: Document 2: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. ---------------------------------------------------------------------------------------------------- Document 3: And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid
1,279
we can do together. First, beat the opioid epidemic.Stringing compressors and document transformers togetherUsing the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents.Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.from langchain.document_transformers import EmbeddingsRedundantFilterfrom langchain.retrievers.document_compressors import DocumentCompressorPipelinefrom langchain.text_splitter import CharacterTextSplittersplitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)pipeline_compressor = DocumentCompressorPipeline( transformers=[splitter, redundant_filter, relevant_filter])compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson ---------------------------------------------------------------------------------------------------- Document 2:
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: we can do together. First, beat the opioid epidemic.Stringing compressors and document transformers togetherUsing the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents.Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.from langchain.document_transformers import EmbeddingsRedundantFilterfrom langchain.retrievers.document_compressors import DocumentCompressorPipelinefrom langchain.text_splitter import CharacterTextSplittersplitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)pipeline_compressor = DocumentCompressorPipeline( transformers=[splitter, redundant_filter, relevant_filter])compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson ---------------------------------------------------------------------------------------------------- Document 2:
1,280
Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builderPreviousMultiQueryRetrieverNextEnsemble RetrieverGet startedUsing a vanilla vector store retrieverAdding contextual compression with an LLMChainExtractorMore built-in compressors: filtersLLMChainFilterEmbeddingsFilterCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses. ->: Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builderPreviousMultiQueryRetrieverNextEnsemble RetrieverGet startedUsing a vanilla vector store retrieverAdding contextual compression with an LLMChainExtractorMore built-in compressors: filtersLLMChainFilterEmbeddingsFilterCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,281
Document transformers | 🦜️🔗 Langchain
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools. ->: Document transformers | 🦜️🔗 Langchain
1,282
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersOn this pageDocument transformersinfoHead to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.Text splitters​When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText splittersPost retrievalText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument transformersOn this pageDocument transformersinfoHead to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.Text splitters​When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text.
1,283
This notebook showcases several ways to do that.At a high level, text splitters work as follows:Split the text up into small, semantically meaningful chunks (often sentences).Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).That means there are two different axes along which you can customize your text splitter:How the text is splitHow the chunk size is measuredGet started with text splitters​The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves onto the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""]In addition to controlling which characters you can split on, you can also control a few other things:length_function: how the length of chunks is calculated. Defaults to just counting number of characters, but it's pretty common to pass a token counter here.chunk_size: the maximum size of your chunks (as measured by the length function).chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (e.g. do a sliding window).add_start_index: whether to include the starting position of each chunk within the original document in the metadata.# This is a long document we can split up.with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 100, chunk_overlap = 20, length_function = len, add_start_index = True,)texts =
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools. ->: This notebook showcases several ways to do that.At a high level, text splitters work as follows:Split the text up into small, semantically meaningful chunks (often sentences).Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).That means there are two different axes along which you can customize your text splitter:How the text is splitHow the chunk size is measuredGet started with text splitters​The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves onto the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""]In addition to controlling which characters you can split on, you can also control a few other things:length_function: how the length of chunks is calculated. Defaults to just counting number of characters, but it's pretty common to pass a token counter here.chunk_size: the maximum size of your chunks (as measured by the length function).chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (e.g. do a sliding window).add_start_index: whether to include the starting position of each chunk within the original document in the metadata.# This is a long document we can split up.with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 100, chunk_overlap = 20, length_function = len, add_start_index = True,)texts =
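The note above mentions that it is common to pass a token counter as the length_function; the following is a minimal sketch of that, not taken from the page itself. It assumes the tiktoken package is installed, and tiktoken_len is an illustrative helper name rather than a LangChain API.
import tiktoken
from langchain.text_splitter import RecursiveCharacterTextSplitter

encoding = tiktoken.get_encoding("cl100k_base")

def tiktoken_len(text: str) -> int:
    # Measure chunk length in tokens instead of characters.
    return len(encoding.encode(text))

token_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,        # now interpreted as roughly 100 tokens
    chunk_overlap=20,
    length_function=tiktoken_len,
    add_start_index=True,
)
# state_of_the_union is the string loaded in the walkthrough above.
token_texts = token_splitter.create_documents([state_of_the_union])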
1,284
= len, add_start_index = True,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0])print(texts[1]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' metadata={'start_index': 0} page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' metadata={'start_index': 82}Other transformations:​Filter redundant docs, translate docs, extract metadata, and more​We can perform a number of transformations on docs which are not simply splitting the text. With the
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools. ->: = len, add_start_index = True,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0])print(texts[1]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' metadata={'start_index': 0} page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' metadata={'start_index': 82}Other transformations:​Filter redundant docs, translate docs, extract metadata, and more​We can perform a number of transformations on docs which are not simply splitting the text. With the
1,285
EmbeddingsRedundantFilter we can identify similar documents and filter out redundancies. With integrations like doctran we can do things like translate documents from one language to another, extract desired properties and add them to metadata, and convert conversational dialogue into a Q/A format set of documents.PreviousPDFNextHTMLHeaderTextSplitterText splittersGet started with text splittersOther transformations:Filter redundant docs, translate docs, extract metadata, and moreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools.
Head to Integrations for documentation on built-in document transformer integrations with 3rd-party tools. ->: EmbeddingsRedundantFilter we can identify similar documents and filter out redundancies. With integrations like doctran we can do things like translate documents from one language to another, extract desired properties and add them to metadata, and convert conversational dialogue into a Q/A format set of documents.PreviousPDFNextHTMLHeaderTextSplitterText splittersGet started with text splittersOther transformations:Filter redundant docs, translate docs, extract metadata, and moreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
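As a rough illustration of the EmbeddingsRedundantFilter mentioned above, the sketch below applies it directly as a document transformer. It assumes OpenAI credentials are configured and that texts is a list of Document chunks produced by a splitter such as the one shown earlier; it is not part of the original page.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings

# Drop chunks whose embeddings are nearly identical to an earlier chunk.
redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
unique_docs = redundant_filter.transform_documents(texts)  # `texts` is assumed from the splitter example above
print(f"kept {len(unique_docs)} of {len(texts)} chunks after removing near-duplicates")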
1,286
Vector store-backed retriever | 🦜️🔗 Langchain
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface. ->: Vector store-backed retriever | 🦜️🔗 Langchain
1,287
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversVector store-backed retrieverOn this pageVector store-backed retrieverA vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversVector store-backed retrieverOn this pageVector store-backed retrieverA vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
1,288
It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.Once you construct a vector store, it's very easy to construct a retriever. Let's walk through an example.from langchain.document_loaders import TextLoaderloader = TextLoader('../../../state_of_the_union.txt')from langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSfrom langchain.embeddings import OpenAIEmbeddingsdocuments = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = FAISS.from_documents(texts, embeddings) Exiting: Cleaning up .chroma directoryretriever = db.as_retriever()docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")Maximum marginal relevance retrieval​By default, the vector store retriever uses similarity search. If the underlying vector store supports maximum marginal relevance search, you can specify that as the search type.retriever = db.as_retriever(search_type="mmr")docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")Similarity score threshold retrieval​You can also use a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5})docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")Specifying top k​You can also specify search kwargs like k to use when doing retrieval.retriever = db.as_retriever(search_kwargs={"k": 1})docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")len(docs) 1PreviousTime-weighted vector store retrieverNextWebResearchRetrieverMaximum marginal relevance retrievalSimilarity score threshold retrievalSpecifying top
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface. ->: It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.Once you construct a vector store, it's very easy to construct a retriever. Let's walk through an example.from langchain.document_loaders import TextLoaderloader = TextLoader('../../../state_of_the_union.txt')from langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSfrom langchain.embeddings import OpenAIEmbeddingsdocuments = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = FAISS.from_documents(texts, embeddings) Exiting: Cleaning up .chroma directoryretriever = db.as_retriever()docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")Maximum marginal relevance retrieval​By default, the vector store retriever uses similarity search. If the underlying vector store supports maximum marginal relevance search, you can specify that as the search type.retriever = db.as_retriever(search_type="mmr")docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")Similarity score threshold retrieval​You can also use a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5})docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")Specifying top k​You can also specify search kwargs like k to use when doing retrieval.retriever = db.as_retriever(search_kwargs={"k": 1})docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")len(docs) 1PreviousTime-weighted vector store retrieverNextWebResearchRetrieverMaximum marginal relevance retrievalSimilarity score threshold retrievalSpecifying top
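Building on the retriever options above, here is a brief sketch (not from the page itself) that combines a search type with search kwargs. Which kwargs are honored depends on the underlying vector store; fetch_k is an assumption that holds for stores such as FAISS.
# Return 3 documents chosen by MMR from a candidate pool of 20.
retriever = db.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 3, "fetch_k": 20},
)
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")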
1,289
score threshold retrievalSpecifying top kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface. ->: score threshold retrievalSpecifying top kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,290
Vector stores | 🦜️🔗 Langchain
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: Vector stores | 🦜️🔗 Langchain
1,291
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalVector storesOn this pageVector storesinfoHead to Integrations for documentation on built-in integrations with 3rd-party vector stores.One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalVector storesOn this pageVector storesinfoHead to Integrations for documentation on built-in integrations with 3rd-party vector stores.One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
1,292
for you.Get started​This walkthrough showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the text embedding model interfaces before diving into this.There are many great vector store options; here are a few that are free, open-source, and run entirely on your local machine. Review all integrations for many great hosted offerings.ChromaFAISSLanceThis walkthrough uses the chroma vector database, which runs on your local machine as a library.pip install chromadbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chroma# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)db = Chroma.from_documents(documents, OpenAIEmbeddings())This walkthrough uses the FAISS vector database, which makes use of the Facebook AI Similarity Search (FAISS) library.pip install faiss-cpuWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISS# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents =
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: for you.Get started​This walkthrough showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the text embedding model interfaces before diving into this.There are many great vector store options; here are a few that are free, open-source, and run entirely on your local machine. Review all integrations for many great hosted offerings.ChromaFAISSLanceThis walkthrough uses the chroma vector database, which runs on your local machine as a library.pip install chromadbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chroma# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)db = Chroma.from_documents(documents, OpenAIEmbeddings())This walkthrough uses the FAISS vector database, which makes use of the Facebook AI Similarity Search (FAISS) library.pip install faiss-cpuWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISS# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents =
1,293
and load it into the vector store.raw_documents = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)db = FAISS.from_documents(documents, OpenAIEmbeddings())This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.pip install lancedbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import LanceDBimport lancedbdb = lancedb.connect("/tmp/lancedb")table = db.create_table( "my_table", data=[ { "vector": OpenAIEmbeddings().embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite",)# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)db = LanceDB.from_documents(documents, OpenAIEmbeddings(), connection=table)Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: and load it into the vector store.raw_documents = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)db = FAISS.from_documents(documents, OpenAIEmbeddings())This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.pip install lancedbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')from langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import LanceDBimport lancedbdb = lancedb.connect("/tmp/lancedb")table = db.create_table( "my_table", data=[ { "vector": OpenAIEmbeddings().embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite",)# Load the document, split it into chunks, embed each chunk and load it into the vector store.raw_documents = TextLoader('../../../state_of_the_union.txt').load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)db = LanceDB.from_documents(documents, OpenAIEmbeddings(), connection=table)Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you
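One practical follow-on to the FAISS walkthrough above, sketched here as an assumption-laden example rather than part of the original page: a FAISS index can be written to disk and reloaded later, with the folder name chosen arbitrarily.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Persist the index built above, then reload it later without re-embedding the documents.
db.save_local("faiss_index")  # "faiss_index" is an arbitrary folder name
reloaded_db = FAISS.load_local("faiss_index", OpenAIEmbeddings())
docs = reloaded_db.similarity_search("What did the president say about Ketanji Brown Jackson")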
1,294
States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search by vector​It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector which accepts an embedding vector as a parameter instead of a string.embedding_vector = OpenAIEmbeddings().embed_query(query)docs = db.similarity_search_by_vector(embedding_vector)print(docs[0].page_content)The query is the same, and so the result is also the same. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Asynchronous operations​Vector stores are usually run as a separate service that requires some IO operations, and therefore they might be called asynchronously. That gives performance benefits as you don't waste time waiting for responses from external services. That might also be important if you work with an asynchronous framework, such as FastAPI.LangChain supports
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search by vector​It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector which accepts an embedding vector as a parameter instead of a string.embedding_vector = OpenAIEmbeddings().embed_query(query)docs = db.similarity_search_by_vector(embedding_vector)print(docs[0].page_content)The query is the same, and so the result is also the same. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Asynchronous operations​Vector stores are usually run as a separate service that requires some IO operations, and therefore they might be called asynchronously. That gives performance benefits as you don't waste time waiting for responses from external services. That might also be important if you work with an asynchronous framework, such as FastAPI.LangChain supports
1,295
framework, such as FastAPI.LangChain supports async operation on vector stores. All the methods might be called using their async counterparts, with the prefix a, meaning async.Qdrant is a vector store, which supports all the async operations, thus it will be used in this walkthrough.pip install qdrant-clientfrom langchain.vectorstores import QdrantCreate a vector store asynchronously​db = await Qdrant.afrom_documents(documents, embeddings, "http://localhost:6333")Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = await db.asimilarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search by vector​embedding_vector = embeddings.embed_query(query)docs = await db.asimilarity_search_by_vector(embedding_vector)Maximum marginal relevance search (MMR)​Maximal marginal relevance optimizes for similarity to query and diversity among selected documents. It is also supported in the async API.query = "What did the president say about Ketanji Brown Jackson"found_docs = await db.amax_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")1. Tonight. I call on the
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: framework, such as FastAPI.LangChain supports async operation on vector stores. All the methods might be called using their async counterparts, with the prefix a, meaning async.Qdrant is a vector store, which supports all the async operations, thus it will be used in this walkthrough.pip install qdrant-clientfrom langchain.vectorstores import QdrantCreate a vector store asynchronously​db = await Qdrant.afrom_documents(documents, embeddings, "http://localhost:6333")Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = await db.asimilarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search by vector​embedding_vector = embeddings.embed_query(query)docs = await db.asimilarity_search_by_vector(embedding_vector)Maximum marginal relevance search (MMR)​Maximal marginal relevance optimizes for similarity to query and diversity among selected documents. It is also supported in the async API.query = "What did the president say about Ketanji Brown Jackson"found_docs = await db.amax_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")1. Tonight. I call on the
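The await calls above assume an environment that already runs an event loop, such as a notebook or FastAPI. As a sketch only, assuming a Qdrant instance is reachable at the URL shown and that documents and embeddings exist as in the walkthrough, the same calls could be driven from a plain script with asyncio.run:
import asyncio
from langchain.vectorstores import Qdrant

async def main():
    # `documents` and `embeddings` are assumed to come from the earlier walkthrough steps.
    db = await Qdrant.afrom_documents(documents, embeddings, "http://localhost:6333")
    docs = await db.asimilarity_search("What did the president say about Ketanji Brown Jackson")
    print(docs[0].page_content)

asyncio.run(main())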
1,296
doc.page_content, "\n")1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.Officer Mora was 27 years old.Officer Rivera was 22.Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.I’ve worked on these issues a long time.I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.PreviousCachingNextRetrieversGet startedSimilarity searchSimilarity search by vectorAsynchronous operationsCreate a vector store asynchronouslySimilarity searchSimilarity search by vectorMaximum marginal relevance search
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: doc.page_content, "\n")1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.Officer Mora was 27 years old.Officer Rivera was 22.Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.I’ve worked on these issues a long time.I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.PreviousCachingNextRetrieversGet startedSimilarity searchSimilarity search by vectorAsynchronous operationsCreate a vector store asynchronouslySimilarity searchSimilarity search by vectorMaximum marginal relevance search
1,297
search by vectorMaximum marginal relevance search (MMR)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores.
Head to Integrations for documentation on built-in integrations with 3rd-party vector stores. ->: search by vectorMaximum marginal relevance search (MMR)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,298
MultiVector Retriever | 🦜️🔗 Langchain
It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.
It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever. ->: MultiVector Retriever | 🦜️🔗 Langchain
1,299
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversMultiVector RetrieverOn this pageMultiVector RetrieverIt can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.The methods to create multiple vectors per document include:Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).Summary: create a summary for each document, embed that along with (or instead of) the document.Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.Note that this also enables another method of adding embeddings - manually. This is great because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.from langchain.retrievers.multi_vector import MultiVectorRetrieverfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitterfrom langchain.storage import
It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.
It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersDocument transformersText embedding modelsVector storesRetrieversMultiQueryRetrieverContextual compressionEnsemble RetrieverMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverVector store-backed retrieverWebResearchRetrieverIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalRetrieversMultiVector RetrieverOn this pageMultiVector RetrieverIt can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.The methods to create multiple vectors per document include:Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).Summary: create a summary for each document, embed that along with (or instead of) the document.Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.Note that this also enables another method of adding embeddings - manually. This is great because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.from langchain.retrievers.multi_vector import MultiVectorRetrieverfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitterfrom langchain.storage import
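Since the code in the record above is cut off, here is a hedged sketch of the "smaller chunks" variant it describes; the file path, chunk size, and id_key name are illustrative assumptions, and it presumes a local Chroma collection plus the in-memory store from langchain.storage.
import uuid
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Load the parent documents and assign each one an id.
docs = TextLoader('../../state_of_the_union.txt').load()  # illustrative path
doc_ids = [str(uuid.uuid4()) for _ in docs]

# The vector store indexes the small chunks; the docstore keeps the full parent documents.
vectorstore = Chroma(collection_name="full_documents", embedding_function=OpenAIEmbeddings())
store = InMemoryStore()
id_key = "doc_id"
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=store, id_key=id_key)

# Split each parent into smaller chunks that carry the parent's id in their metadata.
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
sub_docs = []
for i, doc in enumerate(docs):
    for chunk in child_splitter.split_documents([doc]):
        chunk.metadata[id_key] = doc_ids[i]
        sub_docs.append(chunk)

retriever.vectorstore.add_documents(sub_docs)
retriever.docstore.mset(list(zip(doc_ids, docs)))

# Small chunks are searched; the corresponding full parent documents are returned.
retrieved_docs = retriever.get_relevant_documents("justice breyer")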