Unnamed: 0 | page content | description
---|---|---
1,300 | langchain.storage import InMemoryStorefrom langchain.document_loaders import TextLoaderloaders = [ TextLoader('../../paul_graham_essay.txt'), TextLoader('../../state_of_the_union.txt'),]docs = []for l in loaders: docs.extend(l.load())text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)docs = text_splitter.split_documents(docs) Smaller chunks: Oftentimes it can be useful to retrieve larger chunks of information, but embed smaller chunks. This lets the embeddings capture the semantic meaning as closely as possible while as much context as possible is passed downstream. Note that this is what the ParentDocumentRetriever does. Here we show what is going on under the hood.# The vectorstore to use to index the child chunksvectorstore = Chroma( collection_name="full_documents", embedding_function=OpenAIEmbeddings())# The storage layer for the parent documentsstore = InMemoryStore()id_key = "doc_id"# The retriever (empty to start)retriever = MultiVectorRetriever( vectorstore=vectorstore, docstore=store, id_key=id_key,)import uuiddoc_ids = [str(uuid.uuid4()) for _ in docs]# The splitter to use to create smaller chunkschild_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)sub_docs = []for i, doc in enumerate(docs): _id = doc_ids[i] _sub_docs = child_text_splitter.split_documents([doc]) for _doc in _sub_docs: _doc.metadata[id_key] = _id sub_docs.extend(_sub_docs)retriever.vectorstore.add_documents(sub_docs)retriever.docstore.mset(list(zip(doc_ids, docs)))# Vectorstore alone retrieves the small chunksretriever.vectorstore.similarity_search("justice breyer")[0] Document(page_content='Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a | It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.
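The child-to-parent lookup in the row above can also be traced by hand. The following is a minimal sketch (not from the original page) of what MultiVectorRetriever does internally, assuming the `vectorstore`, `store`, and `id_key` objects built in that row: search the vectorstore for small chunks, collect the parent ids from their metadata, then fetch the full documents from the docstore.

```python
# Sketch of MultiVectorRetriever's lookup path (assumes vectorstore, store, id_key above).
def lookup_parents(query: str):
    sub_docs = vectorstore.similarity_search(query)      # hits are the small child chunks
    parent_ids = []
    for d in sub_docs:
        if d.metadata[id_key] not in parent_ids:          # de-duplicate parent ids
            parent_ids.append(d.metadata[id_key])
    return store.mget(parent_ids)                         # fetch full parent documents

parents = lookup_parents("justice breyer")
len(parents[0].page_content)  # roughly the 10,000-character parent chunk
```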
1,301 | most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '10e9cbc0-4ba5-4d79-a09b-c033d1ba7b01', 'source': '../../state_of_the_union.txt'})# Retriever returns larger chunkslen(retriever.get_relevant_documents("justice breyer")[0].page_content) 9874 Summary: Oftentimes a summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those.from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserimport uuidfrom langchain.schema.document import Documentchain = ( {"doc": lambda x: x.page_content} | ChatPromptTemplate.from_template("Summarize the following document:\n\n{doc}") | ChatOpenAI(max_retries=0) | StrOutputParser())summaries = chain.batch(docs, {"max_concurrency": 5})# The vectorstore to use to index the summariesvectorstore = Chroma( collection_name="summaries", embedding_function=OpenAIEmbeddings())# The storage layer for the parent documentsstore = InMemoryStore()id_key = "doc_id"# The retriever (empty to start)retriever = MultiVectorRetriever( vectorstore=vectorstore, docstore=store, id_key=id_key,)doc_ids = [str(uuid.uuid4()) for _ in docs]summary_docs = [Document(page_content=s, metadata={id_key: doc_ids[i]}) for i, s in enumerate(summaries)]retriever.vectorstore.add_documents(summary_docs)retriever.docstore.mset(list(zip(doc_ids, docs)))# We can also add the original chunks to the vectorstore if we so want# for i, doc in enumerate(docs):# doc.metadata[id_key] = doc_ids[i]# retriever.vectorstore.add_documents(docs)sub_docs = vectorstore.similarity_search("justice breyer")sub_docs[0] Document(page_content="The document is a transcript of a speech given by the President of the United States. The President discusses several important issues and | It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.
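The commented-out block in the row above can be enabled if the raw chunks should also be searchable next to their summaries. A short sketch of that option, reusing the `retriever`, `docs`, `doc_ids`, and `id_key` names from that row:

```python
# Optionally index the original chunks alongside the summaries, so a query can match
# either the summary text or the raw text; both carry the same doc_id and resolve to
# the same parent document in the docstore.
for i, doc in enumerate(docs):
    doc.metadata[id_key] = doc_ids[i]
retriever.vectorstore.add_documents(docs)
```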
1,302 | President discusses several important issues and initiatives, including the nomination of a Supreme Court Justice, border security and immigration reform, protecting women's rights, advancing LGBTQ+ equality, bipartisan legislation, addressing the opioid epidemic and mental health, supporting veterans, investigating the health effects of burn pits on military personnel, ending cancer, and the strength and resilience of the American people.", metadata={'doc_id': '79fa2e9f-28d9-4372-8af3-2caf4f1de312'})retrieved_docs = retriever.get_relevant_documents("justice breyer")len(retrieved_docs[0].page_content) 9194 Hypothetical Queries: An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded. functions = [ { "name": "hypothetical_questions", "description": "Generate hypothetical questions", "parameters": { "type": "object", "properties": { "questions": { "type": "array", "items": { "type": "string" }, }, }, "required": ["questions"] } } ]from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParserchain = ( {"doc": lambda x: x.page_content} # Only asking for 3 hypothetical questions, but this could be adjusted | ChatPromptTemplate.from_template("Generate a list of 3 hypothetical questions that the below document could be used to answer:\n\n{doc}") | ChatOpenAI(max_retries=0, model="gpt-4").bind(functions=functions, function_call={"name": "hypothetical_questions"}) | JsonKeyOutputFunctionsParser(key_name="questions"))chain.invoke(docs[0]) ["What was the author's initial impression of philosophy as a field of study, and how did it change when they got to college?", 'Why did the author decide to switch their focus to Artificial Intelligence (AI)?', "What led to the author's disillusionment with the field of AI as | It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.
1,303 | author's disillusionment with the field of AI as it was practiced at the time?"]hypothetical_questions = chain.batch(docs, {"max_concurrency": 5})# The vectorstore to use to index the hypothetical questionsvectorstore = Chroma( collection_name="hypo-questions", embedding_function=OpenAIEmbeddings())# The storage layer for the parent documentsstore = InMemoryStore()id_key = "doc_id"# The retriever (empty to start)retriever = MultiVectorRetriever( vectorstore=vectorstore, docstore=store, id_key=id_key,)doc_ids = [str(uuid.uuid4()) for _ in docs]question_docs = []for i, question_list in enumerate(hypothetical_questions): question_docs.extend([Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list])retriever.vectorstore.add_documents(question_docs)retriever.docstore.mset(list(zip(doc_ids, docs)))sub_docs = vectorstore.similarity_search("justice breyer")sub_docs [Document(page_content="What is the President's stance on immigration reform?", metadata={'doc_id': '505d73e3-8350-46ec-a58e-3af032f04ab3'}), Document(page_content="What is the President's stance on immigration reform?", metadata={'doc_id': '1c9618f0-7660-4b4f-a37c-509cbbbf6dba'}), Document(page_content="What is the President's stance on immigration reform?", metadata={'doc_id': '82c08209-b904-46a8-9532-edd2380950b7'}), Document(page_content='What measures is the President proposing to protect the rights of LGBTQ+ Americans?', metadata={'doc_id': '82c08209-b904-46a8-9532-edd2380950b7'})]retrieved_docs = retriever.get_relevant_documents("justice breyer")len(retrieved_docs[0].page_content) 9194 | It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base MultiVectorRetriever which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the MultiVectorRetriever.
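The prompt in the hypothetical-questions chain hard-codes "3" questions; as the inline comment notes, this count could be adjusted. A hedged sketch of one way to parameterize it, assuming the `functions` spec, `docs`, and the imports from the rows above (the variable `n_questions` is illustrative):

```python
# Make the number of generated questions configurable. The first string literal is an
# f-string (interpolates n_questions); the second is a plain string, so {doc} is left
# for ChatPromptTemplate to fill in.
n_questions = 5
question_chain = (
    {"doc": lambda x: x.page_content}
    | ChatPromptTemplate.from_template(
        f"Generate a list of {n_questions} hypothetical questions that the below "
        "document could be used to answer:\n\n{doc}"
    )
    | ChatOpenAI(max_retries=0, model="gpt-4").bind(
        functions=functions, function_call={"name": "hypothetical_questions"}
    )
    | JsonKeyOutputFunctionsParser(key_name="questions")
)
hypothetical_questions = question_chain.batch(docs, {"max_concurrency": 5})
```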
1,304 | MultiQueryRetriever | 🦜️🔗 Langchain | Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on "distance". But, retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.
1,305 | MultiQueryRetriever: Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on "distance". But, retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.The MultiQueryRetriever automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the MultiQueryRetriever might be able to overcome some of the limitations of the distance-based retrieval and get a richer set of results.# Build a sample vectorDBfrom langchain.vectorstores import Chromafrom langchain.document_loaders import WebBaseLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitter# Load blog postloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()# | Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on "distance". But, retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.
1,306 | = loader.load()# Splittext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)splits = text_splitter.split_documents(data)# VectorDBembedding = OpenAIEmbeddings()vectordb = Chroma.from_documents(documents=splits, embedding=embedding) Simple usage: Specify the LLM to use for query generation, and the retriever will do the rest.from langchain.chat_models import ChatOpenAIfrom langchain.retrievers.multi_query import MultiQueryRetrieverquestion = "What are the approaches to Task Decomposition?"llm = ChatOpenAI(temperature=0)retriever_from_llm = MultiQueryRetriever.from_llm( retriever=vectordb.as_retriever(), llm=llm)# Set logging for the queriesimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)unique_docs = retriever_from_llm.get_relevant_documents(query=question)len(unique_docs) INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be approached?', '2. What are the different methods for Task Decomposition?', '3. What are the various approaches to decomposing tasks?'] 5 Supplying your own prompt: You can also supply a prompt along with an output parser to split the results into a list of queries.from typing import Listfrom langchain.chains import LLMChainfrom pydantic import BaseModel, Fieldfrom langchain.prompts import PromptTemplatefrom langchain.output_parsers import PydanticOutputParser# Output parser will split the LLM result into a list of queriesclass LineList(BaseModel): # "lines" is the key (attribute name) of the parsed output lines: List[str] = Field(description="Lines of text")class LineListOutputParser(PydanticOutputParser): def __init__(self) -> None: super().__init__(pydantic_object=LineList) def parse(self, text: str) -> LineList: lines = text.strip().split("\n") return LineList(lines=lines)output_parser = LineListOutputParser()QUERY_PROMPT = PromptTemplate( input_variables=["question"], | Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on "distance". But, retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.
1,307 | input_variables=["question"], template="""You are an AI language model assistant. Your task is to generate five different versions of the given user question to retrieve relevant documents from a vector database. By generating multiple perspectives on the user question, your goal is to help the user overcome some of the limitations of the distance-based similarity search. Provide these alternative questions separated by newlines. Original question: {question}""",)llm = ChatOpenAI(temperature=0)# Chainllm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)# Other inputsquestion = "What are the approaches to Task Decomposition?"# Runretriever = MultiQueryRetriever( retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key="lines") # "lines" is the key (attribute name) of the parsed output# Resultsunique_docs = retriever.get_relevant_documents( query="What does the course say about regression?")len(unique_docs) INFO:langchain.retrievers.multi_query:Generated queries: ["1. What is the course's perspective on regression?", '2. Can you provide information on regression as discussed in the course?', '3. How does the course cover the topic of regression?', "4. What are the course's teachings on regression?", '5. In relation to the course, what is mentioned about regression?'] 11 | Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on "distance". But, retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.
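Not shown on the page: the multi-query retriever can be dropped into a standard QA chain like any other retriever. A hedged sketch, assuming the `retriever_from_llm` and `llm` objects from the "Simple usage" block above:

```python
# Use the multi-query retriever as the retrieval step of a stuff-style QA chain.
from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever_from_llm,
)
qa_chain.run("What are the approaches to Task Decomposition?")
```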
1,308 | Stuff | 🦜️🔗 Langchain. The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM. This chain is well-suited for applications where documents are small and only a few are passed in for most calls. | The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.
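The page describes the stuff chain but shows no code. A minimal sketch of one way to build it with the same-era API (the prompt text and sample documents are illustrative, not from the page):

```python
# A stuff documents chain: every document is pasted into the single {context} slot of
# the prompt, and the combined prompt is sent to the LLM in one call.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, StuffDocumentsChain
from langchain.schema.document import Document

prompt = PromptTemplate.from_template("Summarize the following documents:\n\n{context}")
llm_chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)
stuff_chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="context")

docs = [
    Document(page_content="LangChain provides document chains for combining documents."),
    Document(page_content="The stuff chain inserts all documents into one prompt."),
]
stuff_chain.run(docs)
```

Because all documents share one prompt, this approach is limited by the model's context window, which is why the page recommends it only for small, few documents.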
1,309 | Adding memory (state) | 🦜️🔗 Langchain. Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful. Get started: from langchain.chains import ConversationChainfrom langchain.memory import ConversationBufferMemoryconversation = ConversationChain( llm=chat, memory=ConversationBufferMemory())conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")# -> The first three colors of a rainbow are red, orange, and yellow.conversation.run("And the next 4?")# -> The next four colors of a rainbow are green, blue, indigo, and violet. 'The next four colors of a rainbow are green, blue, indigo, and violet.'Essentially, BaseMemory defines an interface for how LangChain stores memory. It allows reading of stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section. | Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.
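The BaseMemory methods mentioned at the end of the row can be exercised directly, without a chain. A short sketch using ConversationBufferMemory (the example turn is illustrative):

```python
# save_context stores one conversational turn; load_memory_variables reads the
# accumulated history back as the variables a chain would receive.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context(
    {"input": "Answer briefly. What is LangChain?"},
    {"output": "A framework for building LLM applications."},
)
memory.load_memory_variables({})
# -> {'history': 'Human: Answer briefly. What is LangChain?\nAI: A framework for building LLM applications.'}
```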
1,310 | Custom chain | 🦜️🔗 Langchain | To implement your own custom chain you can subclass Chain and implement the following methods:
1,311 | Custom chain: To implement your own custom chain you can subclass Chain and implement the following methods:from __future__ import annotationsfrom typing import Any, Dict, List, Optionalfrom pydantic import Extrafrom langchain.schema.language_model import BaseLanguageModelfrom langchain.callbacks.manager import ( AsyncCallbackManagerForChainRun, CallbackManagerForChainRun,)from langchain.chains.base import Chainfrom langchain.prompts.base import BasePromptTemplateclass MyCustomChain(Chain): """ An example of a custom chain. """ prompt: BasePromptTemplate """Prompt object to use.""" llm: BaseLanguageModel output_key: str = "text" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Will be whatever keys the prompt expects. :meta private: """ return self.prompt.input_variables @property def output_keys(self) -> List[str]: """Will always return text key. :meta private: """ return [self.output_key] def _call( self, inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None, ) -> Dict[str, str]: # Your custom chain logic goes here # This is just an example that mimics LLMChain prompt_value = self.prompt.format_prompt(**inputs) | To implement your own custom chain you can subclass Chain and implement the following methods:
1,312 | = self.prompt.format_prompt(**inputs) # Whenever you call a language model, or another chain, you should pass # a callback manager to it. This allows the inner run to be tracked by # any callbacks that are registered on the outer run. # You can always obtain a callback manager for this by calling # `run_manager.get_child()` as shown below. response = self.llm.generate_prompt( [prompt_value], callbacks=run_manager.get_child() if run_manager else None ) # If you want to log something about this run, you can do so by calling # methods on the `run_manager`, as shown below. This will trigger any # callbacks that are registered for that event. if run_manager: run_manager.on_text("Log something about this run") return {self.output_key: response.generations[0][0].text} async def _acall( self, inputs: Dict[str, Any], run_manager: Optional[AsyncCallbackManagerForChainRun] = None, ) -> Dict[str, str]: # Your custom chain logic goes here # This is just an example that mimics LLMChain prompt_value = self.prompt.format_prompt(**inputs) # Whenever you call a language model, or another chain, you should pass # a callback manager to it. This allows the inner run to be tracked by # any callbacks that are registered on the outer run. # You can always obtain a callback manager for this by calling # `run_manager.get_child()` as shown below. response = await self.llm.agenerate_prompt( [prompt_value], callbacks=run_manager.get_child() if run_manager else None ) # If you want to log something about this run, you can do so by calling # methods on the `run_manager`, as shown below. This will trigger any # callbacks that are registered for that event. if run_manager: await run_manager.on_text("Log something about this run") return | To implement your own custom chain you can subclass Chain and implement the following methods:
1,313 | something about this run") return {self.output_key: response.generations[0][0].text} @property def _chain_type(self) -> str: return "my_custom_chain"from langchain.callbacks.stdout import StdOutCallbackHandlerfrom langchain.chat_models.openai import ChatOpenAIfrom langchain.prompts.prompt import PromptTemplatechain = MyCustomChain( prompt=PromptTemplate.from_template("tell us a joke about {topic}"), llm=ChatOpenAI(),)chain.run({"topic": "callbacks"}, callbacks=[StdOutCallbackHandler()]) > Entering new MyCustomChain chain... Log something about this run > Finished chain. 'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!' | To implement your own custom chain you can subclass Chain and implement the following methods:
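Because `_acall` is implemented above, the same custom chain also works asynchronously. A hedged sketch of that path, assuming the `chain` object built in this row (outside a notebook, an event loop has to be started explicitly):

```python
# Run the custom chain through its async path; _acall is used under the hood.
import asyncio

async def tell_joke():
    return await chain.arun({"topic": "callbacks"})

print(asyncio.run(tell_joke()))
```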
1,314 | Async API | 🦜️🔗 Langchain | LangChain provides async support for Chains by leveraging the asyncio library.
1,315 | Async API: LangChain provides async support for Chains by leveraging the asyncio library. Async methods are currently supported in LLMChain (through arun, apredict, acall), LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap.import asyncioimport timefrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaindef generate_serially(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) for _ in range(5): resp = chain.run(product="toothpaste") print(resp)async def async_generate(chain): resp = await chain.arun(product="toothpaste") print(resp)async def generate_concurrently(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) tasks = [async_generate(chain) for _ in range(5)] await asyncio.gather(*tasks)s = time.perf_counter()# If running this outside of Jupyter, use asyncio.run(generate_concurrently())await generate_concurrently()elapsed = time.perf_counter() - sprint("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")s = | LangChain provides async support for Chains by leveraging the asyncio library.
1,316 | in {elapsed:0.2f} seconds." + "\033[0m")s = time.perf_counter()generate_serially()elapsed = time.perf_counter() - sprint("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m") BrightSmile Toothpaste Company BrightSmile Toothpaste Co. BrightSmile Toothpaste Gleaming Smile Inc. SparkleSmile Toothpaste Concurrent executed in 1.54 seconds. BrightSmile Toothpaste Co. MintyFresh Toothpaste Co. SparkleSmile Toothpaste. Pearly Whites Toothpaste Co. BrightSmile Toothpaste. Serial executed in 6.38 seconds. | LangChain provides async support for Chains by leveraging the asyncio library.
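As the inline comment in the code above notes, a bare `await generate_concurrently()` only works where an event loop is already running (such as Jupyter). A minimal sketch of the plain-script variant:

```python
# In a regular Python script there is no running event loop, so the coroutine is
# driven with asyncio.run() instead of a top-level await.
import asyncio

if __name__ == "__main__":
    asyncio.run(generate_concurrently())
```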
1,317 | Debugging chains | 🦜️🔗 Langchain. It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting verbose to True will print out some internal states of the Chain object while it is being run.conversation = ConversationChain( llm=chat, memory=ConversationBufferMemory(), verbose=True)conversation.run("What is ChatGPT?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: What is ChatGPT? AI: > Finished chain. 'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.' | It can be hard to debug a Chain object solely from its output as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing.
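Beyond per-chain `verbose=True`, the same era of LangChain also exposes a global debug switch that logs the inputs and outputs of every component in a run. A hedged sketch, assuming the `conversation` chain from this row:

```python
# Global debug flag: prints inputs/outputs for every chain and LLM call in the run,
# which is more detailed than verbose=True on a single chain.
import langchain

langchain.debug = True
conversation.run("What is ChatGPT?")
langchain.debug = False  # turn detailed logging back off
```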
1,318 | Serialization | 🦜️🔗 Langchain | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. ->: Serialization | 🦜️🔗 Langchain |
1,319 | Serialization: This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.Saving a chain to disk: First, let's go over how to save a chain to disk. This can be done with the .save method, and specifying a file path with a .json or .yaml extension.from langchain.prompts import PromptTemplatefrom langchain.llms import OpenAIfrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)llm_chain.save("llm_chain.json")Let's now take a look at what's inside this saved file:cat llm_chain.json { "memory": null, "verbose": true, "prompt": { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }, "llm": { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. ->: Serialization: This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.Saving a chain to disk: First, let's go over how to save a chain to disk. 
This can be done with the .save method, and specifying a file path with a .json or .yaml extension.from langchain.prompts import PromptTemplatefrom langchain.llms import OpenAIfrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)llm_chain.save("llm_chain.json")Let's now take a look at what's inside this saved file:cat llm_chain.json { "memory": null, "verbose": true, "prompt": { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }, "llm": { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, |
1,320 | "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }, "output_key": "text", "_type": "llm_chain" }Loading a chain from disk‚ÄãWe can load a chain from disk by using the load_chain method.from langchain.chains import load_chainchain = load_chain("llm_chain.json")chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4'Saving components separately‚ÄãIn the above example, we can see that the prompt and LLM configuration information is saved in the same JSON as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component.llm_chain.prompt.save("prompt.json")cat prompt.json { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }llm_chain.llm.save("llm.json")cat llm.json { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }config = { "memory": None, "verbose": True, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain",}import jsonwith open("llm_chain_separate.json", "w") as f: json.dump(config, f, indent=2)cat llm_chain_separate.json { "memory": null, "verbose": true, "prompt_path": "prompt.json", "llm_path": "llm.json", | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. ->: "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }, "output_key": "text", "_type": "llm_chain" }Loading a chain from disk‚ÄãWe can load a chain from disk by using the load_chain method.from langchain.chains import load_chainchain = load_chain("llm_chain.json")chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4'Saving components separately‚ÄãIn the above example, we can see that the prompt and LLM configuration information is saved in the same JSON as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. 
In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component.llm_chain.prompt.save("prompt.json")cat prompt.json { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }llm_chain.llm.save("llm.json")cat llm.json { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }config = { "memory": None, "verbose": True, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain",}import jsonwith open("llm_chain_separate.json", "w") as f: json.dump(config, f, indent=2)cat llm_chain_separate.json { "memory": null, "verbose": true, "prompt_path": "prompt.json", "llm_path": "llm.json", |
1,321 | "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" }We can then load it in the same way:chain = load_chain("llm_chain_separate.json")chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. | This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. ->: "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" }We can then load it in the same way:chain = load_chain("llm_chain_separate.json")chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' |
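The serialization notebook mentions that .yaml extensions work as well but only demonstrates JSON. A minimal sketch of the YAML round trip, assuming the llm_chain defined above (file names are illustrative):

# Save the same chain as YAML and load it back; load_chain picks the parser
# from the file extension. File names are illustrative.
llm_chain.save("llm_chain.yaml")

from langchain.chains import load_chain

chain = load_chain("llm_chain.yaml")
chain.run("whats 2 + 2")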
1,322 | Using OpenAI functions | 🦜️🔗 Langchain | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: ->: Using OpenAI functions | 🦜️🔗 Langchain |
1,323 | Using OpenAI functions: This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: How to use functions to get structured outputs from ChatOpenAIHow to create a generic chain that uses (multiple) functionsHow to create a chain that actually executes the chosen functionfrom typing import Optionalfrom langchain.chains.openai_functions import ( create_openai_fn_chain, create_structured_output_chain,)from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplateGetting structured outputs: We can take advantage of OpenAI functions to try and force the model to return a particular kind of structured output. 
We'll use create_structured_output_chain to create our chain, which takes the desired structured output either as a Pydantic class or as JsonSchema.See here for relevant reference docs.Using Pydantic classes: When passing in Pydantic classes to structure our text, we need to make sure to have a docstring description for the class. It also helps to have descriptions for each of the classes attributes.from langchain.pydantic_v1 import BaseModel, Fieldclass Person(BaseModel): """Identifying information about a person.""" name: str = Field(..., description="The person's name") age: int = Field(..., description="The person's age") fav_food: Optional[str] = Field(None, description="The person's favorite food")# If we pass in | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: ->: Using OpenAI functions: This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: How to use functions to get structured outputs from ChatOpenAIHow to create a generic chain that uses (multiple) functionsHow to create a chain that actually executes the chosen functionfrom typing import Optionalfrom langchain.chains.openai_functions import ( create_openai_fn_chain, create_structured_output_chain,)from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplateGetting structured outputs: We can take advantage of OpenAI functions to try and force the model to return a particular kind of structured output. 
1,324 | person's favorite food")# If we pass in a model explicitly, we need to make sure it supports the OpenAI function-calling API.llm = ChatOpenAI(model="gpt-4", temperature=0)prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a world class algorithm for extracting information in structured formats."), ("human", "Use the given format to extract information from the following input: {input}"), ("human", "Tip: Make sure to answer in the correct format"), ])chain = create_structured_output_chain(Person, llm, prompt, verbose=True)chain.run("Sally is 13") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for extracting information in structured formats. Human: Use the given format to extract information from the following input: Sally is 13 Human: Tip: Make sure to answer in the correct format > Finished chain. Person(name='Sally', age=13, fav_food='Unknown')To extract arbitrarily many structured outputs of a given format, we can just create a wrapper Pydantic class that takes a sequence of the original class.from typing import Sequenceclass People(BaseModel): """Identifying information about all people in a text.""" people: Sequence[Person] = Field(..., description="The people in the text")chain = create_structured_output_chain(People, llm, prompt, verbose=True)chain.run( "Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally.") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for extracting information in structured formats. Human: Use the given format to extract information from the following input: Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally. Human: Tip: Make sure to answer in the correct format > Finished chain. People(people=[Person(name='Sally', age=13, fav_food=''), | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: ->: person's favorite food")# If we pass in a model explicitly, we need to make sure it supports the OpenAI function-calling API.llm = ChatOpenAI(model="gpt-4", temperature=0)prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a world class algorithm for extracting information in structured formats."), ("human", "Use the given format to extract information from the following input: {input}"), ("human", "Tip: Make sure to answer in the correct format"), ])chain = create_structured_output_chain(Person, llm, prompt, verbose=True)chain.run("Sally is 13") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for extracting information in structured formats. Human: Use the given format to extract information from the following input: Sally is 13 Human: Tip: Make sure to answer in the correct format > Finished chain. Person(name='Sally', age=13, fav_food='Unknown')To extract arbitrarily many structured outputs of a given format, we can just create a wrapper Pydantic class that takes a sequence of the original class.from typing import Sequenceclass People(BaseModel): """Identifying information about all people in a text.""" people: Sequence[Person] = Field(..., description="The people in the text")chain = create_structured_output_chain(People, llm, prompt, verbose=True)chain.run( "Sally is 13, Joey just turned 12 and loves spinach. 
Caroline is 10 years older than Sally.") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for extracting information in structured formats. Human: Use the given format to extract information from the following input: Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally. Human: Tip: Make sure to answer in the correct format > Finished chain. People(people=[Person(name='Sally', age=13, fav_food=''), |
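Because create_structured_output_chain returns instances of the schema class, the result can be handled like any other Pydantic object. A small sketch, assuming the People chain defined above:

# The chain returns a People instance, so its fields are plain Python objects.
result = chain.run(
    "Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally."
)
for person in result.people:
    print(person.name, person.age, person.fav_food)

# Pydantic models convert to dicts if you need JSON-friendly records.
records = [person.dict() for person in result.people]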
1,325 | age=13, fav_food=''), Person(name='Joey', age=12, fav_food='spinach'), Person(name='Caroline', age=23, fav_food='')])Using JsonSchema‚ÄãWe can also pass in JsonSchema instead of Pydantic classes to specify the desired structure. When we do this, our chain will output JSON corresponding to the properties described in the JsonSchema, instead of a Pydantic class.json_schema = { "title": "Person", "description": "Identifying information about a person.", "type": "object", "properties": { "name": {"title": "Name", "description": "The person's name", "type": "string"}, "age": {"title": "Age", "description": "The person's age", "type": "integer"}, "fav_food": { "title": "Fav Food", "description": "The person's favorite food", "type": "string", }, }, "required": ["name", "age"],}chain = create_structured_output_chain(json_schema, llm, prompt, verbose=True)chain.run("Sally is 13") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for extracting information in structured formats. Human: Use the given format to extract information from the following input: Sally is 13 Human: Tip: Make sure to answer in the correct format > Finished chain. {'name': 'Sally', 'age': 13}Creating a generic OpenAI functions chain‚ÄãTo create a generic OpenAI functions chain, we can use the create_openai_fn_chain method. This is the same as create_structured_output_chain except that instead of taking a single output schema, it takes a sequence of function definitions.Functions can be passed in as:dicts conforming to OpenAI functions spec,Pydantic classes, in which case they should have docstring descriptions of the function they represent and descriptions for each of the parameters,Python functions, in which case they should have docstring descriptions of the function and args, along with type hints.See here for relevant reference docs.Using | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: ->: age=13, fav_food=''), Person(name='Joey', age=12, fav_food='spinach'), Person(name='Caroline', age=23, fav_food='')])Using JsonSchema‚ÄãWe can also pass in JsonSchema instead of Pydantic classes to specify the desired structure. When we do this, our chain will output JSON corresponding to the properties described in the JsonSchema, instead of a Pydantic class.json_schema = { "title": "Person", "description": "Identifying information about a person.", "type": "object", "properties": { "name": {"title": "Name", "description": "The person's name", "type": "string"}, "age": {"title": "Age", "description": "The person's age", "type": "integer"}, "fav_food": { "title": "Fav Food", "description": "The person's favorite food", "type": "string", }, }, "required": ["name", "age"],}chain = create_structured_output_chain(json_schema, llm, prompt, verbose=True)chain.run("Sally is 13") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for extracting information in structured formats. Human: Use the given format to extract information from the following input: Sally is 13 Human: Tip: Make sure to answer in the correct format > Finished chain. {'name': 'Sally', 'age': 13}Creating a generic OpenAI functions chain‚ÄãTo create a generic OpenAI functions chain, we can use the create_openai_fn_chain method. 
This is the same as create_structured_output_chain except that instead of taking a single output schema, it takes a sequence of function definitions.Functions can be passed in as:dicts conforming to OpenAI functions spec,Pydantic classes, in which case they should have docstring descriptions of the function they represent and descriptions for each of the parameters,Python functions, in which case they should have docstring descriptions of the function and args, along with type hints.See here for relevant reference docs.Using |
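The examples that follow cover Pydantic classes and plain Python functions; for the first option in the list, here is a hedged sketch of passing a raw dict that conforms to the OpenAI functions spec (the schema below is illustrative, and llm/prompt are assumed from the surrounding examples):

# Hedged sketch: a function definition written directly as an OpenAI-functions-spec dict.
record_dog_function = {
    "name": "record_dog",
    "description": "Record some identifying information about a dog.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"description": "The dog's name", "type": "string"},
            "color": {"description": "The dog's color", "type": "string"},
        },
        "required": ["name", "color"],
    },
}

chain = create_openai_fn_chain([record_dog_function], llm, prompt, verbose=True)
chain.run("Harry was a chubby brown beagle who loved chicken")
# With a single function, only the arguments are returned, e.g. {'name': 'Harry', 'color': 'brown'}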
1,326 | hints.See here for relevant reference docs.Using Pydantic classes: class RecordPerson(BaseModel): """Record some identifying information about a person.""" name: str = Field(..., description="The person's name") age: int = Field(..., description="The person's age") fav_food: Optional[str] = Field(None, description="The person's favorite food")class RecordDog(BaseModel): """Record some identifying information about a dog.""" name: str = Field(..., description="The dog's name") color: str = Field(..., description="The dog's color") fav_food: Optional[str] = Field(None, description="The dog's favorite food")prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a world class algorithm for recording entities."), ("human", "Make calls to the relevant function to record the entities in the following input: {input}"), ("human", "Tip: Make sure to answer in the correct format"), ])chain = create_openai_fn_chain([RecordPerson, RecordDog], llm, prompt, verbose=True)chain.run("Harry was a chubby brown beagle who loved chicken") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for recording entities. Human: Make calls to the relevant function to record the entities in the following input: Harry was a chubby brown beagle who loved chicken Human: Tip: Make sure to answer in the correct format > Finished chain. 
RecordDog(name='Harry', color='brown', fav_food='chicken')Using Python functions: We can pass in functions as Pydantic classes, directly as OpenAI function dicts, or Python functions. To pass a Python function in directly, we'll want to make sure our parameters have type hints, we have a docstring, and we use Google Python style docstrings to describe the parameters.NOTE: To use Python functions, make sure the function arguments are of primitive types (str, float, int, bool) or that they are Pydantic objects.class
1,327 | bool) or that they are Pydantic objects.class OptionalFavFood(BaseModel): """Either a food or null.""" food: Optional[str] = Field( None, description="Either the name of a food or null. Should be null if the food isn't known.", )def record_person(name: str, age: int, fav_food: OptionalFavFood) -> str: """Record some basic identifying information about a person. Args: name: The person's name. age: The person's age in years. fav_food: An OptionalFavFood object that either contains the person's favorite food or a null value. Food should be null if it's not known. """ return f"Recording person {name} of age {age} with favorite food {fav_food.food}!"chain = create_openai_fn_chain([record_person], llm, prompt, verbose=True)chain.run( "The most important thing to remember about Tommy, my 12 year old, is that he'll do anything for apple pie.") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for recording entities. Human: Make calls to the relevant function to record the entities in the following input: The most important thing to remember about Tommy, my 12 year old, is that he'll do anything for apple pie. Human: Tip: Make sure to answer in the correct format > Finished chain. {'name': 'Tommy', 'age': 12, 'fav_food': {'food': 'apple pie'}}If we pass in multiple Python functions or OpenAI functions, then the returned output will be of the form:{"name": "<<function_name>>", "arguments": {<<function_arguments>>}}def record_dog(name: str, color: str, fav_food: OptionalFavFood) -> str: """Record some basic identifying information about a dog. Args: name: The dog's name. color: The dog's color. fav_food: An OptionalFavFood object that either contains the dog's favorite food or a null value. Food should be null if it's not known. """ return f"Recording dog {name} of color {color} with favorite food | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: ->: bool) or that they are Pydantic objects.class OptionalFavFood(BaseModel): """Either a food or null.""" food: Optional[str] = Field( None, description="Either the name of a food or null. Should be null if the food isn't known.", )def record_person(name: str, age: int, fav_food: OptionalFavFood) -> str: """Record some basic identifying information about a person. Args: name: The person's name. age: The person's age in years. fav_food: An OptionalFavFood object that either contains the person's favorite food or a null value. Food should be null if it's not known. """ return f"Recording person {name} of age {age} with favorite food {fav_food.food}!"chain = create_openai_fn_chain([record_person], llm, prompt, verbose=True)chain.run( "The most important thing to remember about Tommy, my 12 year old, is that he'll do anything for apple pie.") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for recording entities. Human: Make calls to the relevant function to record the entities in the following input: The most important thing to remember about Tommy, my 12 year old, is that he'll do anything for apple pie. Human: Tip: Make sure to answer in the correct format > Finished chain. 
{'name': 'Tommy', 'age': 12, 'fav_food': {'food': 'apple pie'}}If we pass in multiple Python functions or OpenAI functions, then the returned output will be of the form:{"name": "<<function_name>>", "arguments": {<<function_arguments>>}}def record_dog(name: str, color: str, fav_food: OptionalFavFood) -> str: """Record some basic identifying information about a dog. Args: name: The dog's name. color: The dog's color. fav_food: An OptionalFavFood object that either contains the dog's favorite food or a null value. Food should be null if it's not known. """ return f"Recording dog {name} of color {color} with favorite food |
1,328 | dog {name} of color {color} with favorite food {fav_food}!"chain = create_openai_fn_chain([record_person, record_dog], llm, prompt, verbose=True)chain.run( "I can't find my dog Henry anywhere, he's a small brown beagle. Could you send a message about him?") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for recording entities. Human: Make calls to the relevant function to record the entities in the following input: I can't find my dog Henry anywhere, he's a small brown beagle. Could you send a message about him? Human: Tip: Make sure to answer in the correct format > Finished chain. {'name': 'record_dog', 'arguments': {'name': 'Henry', 'color': 'brown', 'fav_food': {'food': None}}}Other Chains using OpenAI functions​There are a number of more specific chains that use OpenAI functions.Extraction: very similar to structured output chain, intended for information/entity extraction specifically.Tagging: tag inputs.OpenAPI: take an OpenAPI spec and create + execute valid requests against the API, using OpenAI functions under the hood.QA with citations: use OpenAI functions ability to extract citations from text.PreviousAdding memory (state)NextSerializationGetting structured outputsUsing Pydantic classesUsing JsonSchemaCreating a generic OpenAI functions chainUsing Pydantic classesUsing Python functionsOther Chains using OpenAI functionsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: | This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: ->: dog {name} of color {color} with favorite food {fav_food}!"chain = create_openai_fn_chain([record_person, record_dog], llm, prompt, verbose=True)chain.run( "I can't find my dog Henry anywhere, he's a small brown beagle. Could you send a message about him?") > Entering new LLMChain chain... Prompt after formatting: System: You are a world class algorithm for recording entities. Human: Make calls to the relevant function to record the entities in the following input: I can't find my dog Henry anywhere, he's a small brown beagle. Could you send a message about him? Human: Tip: Make sure to answer in the correct format > Finished chain. {'name': 'record_dog', 'arguments': {'name': 'Henry', 'color': 'brown', 'fav_food': {'food': None}}}Other Chains using OpenAI functions​There are a number of more specific chains that use OpenAI functions.Extraction: very similar to structured output chain, intended for information/entity extraction specifically.Tagging: tag inputs.OpenAPI: take an OpenAPI spec and create + execute valid requests against the API, using OpenAI functions under the hood.QA with citations: use OpenAI functions ability to extract citations from text.PreviousAdding memory (state)NextSerializationGetting structured outputsUsing Pydantic classesUsing JsonSchemaCreating a generic OpenAI functions chainUsing Pydantic classesUsing Python functionsOther Chains using OpenAI functionsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
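The excerpt does not get to the walkthrough's third bullet, actually executing the chosen function. One hand-rolled way to do it is to dispatch on the {'name': ..., 'arguments': ...} output shown above; this is a sketch, not a built-in helper, and it reuses record_person, record_dog, and OptionalFavFood from the preceding examples:

# Hedged sketch: route the chain's output to the matching Python function and call it.
available_functions = {"record_person": record_person, "record_dog": record_dog}

result = chain.run(
    "I can't find my dog Henry anywhere, he's a small brown beagle. Could you send a message about him?"
)
chosen = available_functions[result["name"]]
arguments = dict(result["arguments"])
# fav_food comes back as a plain dict, so rebuild the Pydantic object before calling.
arguments["fav_food"] = OptionalFavFood(**arguments["fav_food"])
print(chosen(**arguments))  # e.g. "Recording dog Henry of color brown with favorite food ..."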
1,329 | Tagging | 🦜️🔗 Langchain | Open In Colab | Open In Colab ->: Tagging | 🦜️🔗 Langchain |
1,330 | Tagging: Use case: Tagging means labeling a document with classes such as:sentimentlanguagestyle (formal, informal etc.)covered topicspolitical tendencyOverview: Tagging has a few components:function: Like extraction, tagging uses functions to specify how the model should tag a documentschema: defines how we want to tag the documentQuickstart: Let's see a very straightforward example of how we can use OpenAI functions for tagging in LangChain.pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.chains import create_tagging_chain, create_tagging_chain_pydanticWe specify a few properties with their expected type in our schema.# Schemaschema = { "properties": { "sentiment": {"type": "string"}, "aggressiveness": {"type": "integer"}, "language": {"type": "string"}, }}# LLMllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")chain = create_tagging_chain(schema, llm)inp = "Estoy increiblemente contento de haberte conocido! 
Creo que seremos muy buenos amigos!"chain.run(inp) {'sentiment': 'positive', 'language': 'Spanish'}inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"chain.run(inp) {'sentiment': 'enojado', 'aggressiveness': 1, 'language': 'es'}As we can see in the examples, it correctly interprets what we want.The results vary so that we get, for example, sentiments in different languages ('positive', 'enojado' etc.).We will see how to control these results in the next section.Finer control: Careful schema definition gives us more control over |
1,331 | schema definition gives us more control over the model's output. Specifically, we can define:possible values for each propertydescription to make sure that the model understands the propertyrequired properties to be returnedHere is an example of how we can use _enum_, _description_, and _required_ to control for each of the previously mentioned aspects:schema = { "properties": { "aggressiveness": { "type": "integer", "enum": [1, 2, 3, 4, 5], "description": "describes how aggressive the statement is, the higher the number the more aggressive", }, "language": { "type": "string", "enum": ["spanish", "english", "french", "german", "italian"], }, }, "required": ["language", "sentiment", "aggressiveness"],}chain = create_tagging_chain(schema, llm)Now the answers are much better!inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"chain.run(inp) {'aggressiveness': 0, 'language': 'spanish'}inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"chain.run(inp) {'aggressiveness': 5, 'language': 'spanish'}inp = "Weather is ok here, I can go outside without much more than a coat"chain.run(inp) {'aggressiveness': 0, 'language': 'english'}The LangSmith trace lets us peek under the hood:As with extraction, we call the information_extraction function here on the input string.This OpenAI function extracts information based upon the provided schema.Pydantic: We can also use a Pydantic schema to specify the required properties and types. 
We can also send other arguments, such as enum or description, to each field.This lets us specify our schema in the same manner that we would a new class or function in Python with purely Pythonic types.from enum import Enumfrom pydantic import BaseModel, Fieldclass Tags(BaseModel): sentiment: str = Field(..., enum=["happy", "neutral", "sad"]) aggressiveness: int = Field( ..., |
1,332 | aggressiveness: int = Field( ..., description="describes how aggressive the statement is, the higher the number the more aggressive", enum=[1, 2, 3, 4, 5], ) language: str = Field( ..., enum=["spanish", "english", "french", "german", "italian"] )chain = create_tagging_chain_pydantic(Tags, llm)inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"res = chain.run(inp)res Tags(sentiment='sad', aggressiveness=5, language='spanish')Going deeper​You can use the metadata tagger document transformer to extract metadata from a LangChain Document. This covers the same basic functionality as the tagging chain, only applied to a LangChain Document.PreviousSummarizationNextWeb scrapingUse caseOverviewQuickstartFiner controlPydanticGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Open In Colab | Open In Colab ->: aggressiveness: int = Field( ..., description="describes how aggressive the statement is, the higher the number the more aggressive", enum=[1, 2, 3, 4, 5], ) language: str = Field( ..., enum=["spanish", "english", "french", "german", "italian"] )chain = create_tagging_chain_pydantic(Tags, llm)inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"res = chain.run(inp)res Tags(sentiment='sad', aggressiveness=5, language='spanish')Going deeper​You can use the metadata tagger document transformer to extract metadata from a LangChain Document. This covers the same basic functionality as the tagging chain, only applied to a LangChain Document.PreviousSummarizationNextWeb scrapingUse caseOverviewQuickstartFiner controlPydanticGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
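The "Going deeper" note points to the metadata tagger document transformer without showing it. A hedged sketch of applying it to LangChain Documents follows; the import path reflects the 2023-era package layout and should be treated as an assumption:

# Hedged sketch: tag whole Documents instead of raw strings, reusing the Tags schema and llm above.
from langchain.document_transformers.openai_functions import create_metadata_tagger
from langchain.schema import Document

document_transformer = create_metadata_tagger(metadata_schema=Tags, llm=llm)
docs = [Document(page_content="Estoy muy enojado con vos! Te voy a dar tu merecido!")]
tagged_docs = document_transformer.transform_documents(docs)
print(tagged_docs[0].metadata)  # expected shape: {'sentiment': ..., 'aggressiveness': ..., 'language': ...}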
1,333 | Synthetic data generation | 🦜️🔗 Langchain | Open In Colab | Open In Colab ->: Synthetic data generation | 🦜️🔗 Langchain |
1,334 | Synthetic data generation: Use case: Synthetic data is artificially generated data, rather than data collected from real-world events. It's used to simulate real data without compromising privacy or encountering real-world limitations. Benefits of Synthetic Data:Privacy and Security: No real personal data at risk of breaches.Data Augmentation: Expands datasets for machine learning.Flexibility: Create specific or rare scenarios.Cost-effective: Often cheaper than real-world data collection.Regulatory Compliance: Helps navigate strict data protection laws.Model Robustness: Can lead to better generalizing AI models.Rapid Prototyping: Enables quick testing without real data.Controlled Experimentation: Simulate specific conditions.Access to Data: Alternative when real data isn't available.Note: Despite the benefits, synthetic data should be used carefully, as it may not always capture real-world complexities.Quickstart: In this notebook, we'll dive deep into generating synthetic medical billing records using the langchain library. 
This tool is particularly useful when you want to develop or test algorithms but don't want to use real patient data due to privacy concerns or data availability issues.Setup: First, you'll need to have the langchain library installed, along with its dependencies. Since we're using the OpenAI generator chain, we'll install that as well. Since this is an experimental lib, we'll need to include langchain_experimental in our installs. We'll then import the necessary modules.pip install -U langchain langchain_experimental openai# Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from
1,335 | file:# import dotenv# dotenv.load_dotenv()from langchain.prompts import FewShotPromptTemplate, PromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.pydantic_v1 import BaseModelfrom langchain_experimental.tabular_synthetic_data.base import SyntheticDataGeneratorfrom langchain_experimental.tabular_synthetic_data.openai import create_openai_data_generator, OPENAI_TEMPLATEfrom langchain_experimental.tabular_synthetic_data.prompts import SYNTHETIC_FEW_SHOT_SUFFIX, SYNTHETIC_FEW_SHOT_PREFIX1. Define Your Data Model‚ÄãEvery dataset has a structure or a "schema". The MedicalBilling class below serves as our schema for the synthetic data. By defining this, we're informing our synthetic data generator about the shape and nature of data we expect.class MedicalBilling(BaseModel): patient_id: int patient_name: str diagnosis_code: str procedure_code: str total_charge: float insurance_claim_amount: floatFor instance, every record will have a patient_id that's an integer, a patient_name that's a string, and so on.2. Sample Data‚ÄãTo guide the synthetic data generator, it's useful to provide it with a few real-world-like examples. These examples serve as a "seed" - they're representative of the kind of data you want, and the generator will use them to create more data that looks similar.Here are some fictional medical billing records:examples = [ {"example": """Patient ID: 123456, Patient Name: John Doe, Diagnosis Code: J20.9, Procedure Code: 99203, Total Charge: $500, Insurance Claim Amount: $350"""}, {"example": """Patient ID: 789012, Patient Name: Johnson Smith, Diagnosis Code: M54.5, Procedure Code: 99213, Total Charge: $150, Insurance Claim Amount: $120"""}, {"example": """Patient ID: 345678, Patient Name: Emily Stone, Diagnosis Code: E11.9, Procedure Code: 99214, Total Charge: $300, Insurance Claim Amount: $250"""},]3. Craft a Prompt Template‚ÄãThe generator doesn't magically know how to create our data; | Open In Colab | Open In Colab ->: file:# import dotenv# dotenv.load_dotenv()from langchain.prompts import FewShotPromptTemplate, PromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.pydantic_v1 import BaseModelfrom langchain_experimental.tabular_synthetic_data.base import SyntheticDataGeneratorfrom langchain_experimental.tabular_synthetic_data.openai import create_openai_data_generator, OPENAI_TEMPLATEfrom langchain_experimental.tabular_synthetic_data.prompts import SYNTHETIC_FEW_SHOT_SUFFIX, SYNTHETIC_FEW_SHOT_PREFIX1. Define Your Data Model‚ÄãEvery dataset has a structure or a "schema". The MedicalBilling class below serves as our schema for the synthetic data. By defining this, we're informing our synthetic data generator about the shape and nature of data we expect.class MedicalBilling(BaseModel): patient_id: int patient_name: str diagnosis_code: str procedure_code: str total_charge: float insurance_claim_amount: floatFor instance, every record will have a patient_id that's an integer, a patient_name that's a string, and so on.2. Sample Data‚ÄãTo guide the synthetic data generator, it's useful to provide it with a few real-world-like examples. 
These examples serve as a "seed" - they're representative of the kind of data you want, and the generator will use them to create more data that looks similar.Here are some fictional medical billing records:examples = [ {"example": """Patient ID: 123456, Patient Name: John Doe, Diagnosis Code: J20.9, Procedure Code: 99203, Total Charge: $500, Insurance Claim Amount: $350"""}, {"example": """Patient ID: 789012, Patient Name: Johnson Smith, Diagnosis Code: M54.5, Procedure Code: 99213, Total Charge: $150, Insurance Claim Amount: $120"""}, {"example": """Patient ID: 345678, Patient Name: Emily Stone, Diagnosis Code: E11.9, Procedure Code: 99214, Total Charge: $300, Insurance Claim Amount: $250"""},]3. Craft a Prompt Template‚ÄãThe generator doesn't magically know how to create our data; |
1,336 | doesn't magically know how to create our data; we need to guide it. We do this by creating a prompt template. This template helps instruct the underlying language model on how to produce synthetic data in the desired format.OPENAI_TEMPLATE = PromptTemplate(input_variables=["example"], template="{example}")prompt_template = FewShotPromptTemplate( prefix=SYNTHETIC_FEW_SHOT_PREFIX, examples=examples, suffix=SYNTHETIC_FEW_SHOT_SUFFIX, input_variables=["subject", "extra"], example_prompt=OPENAI_TEMPLATE,)The FewShotPromptTemplate includes:prefix and suffix: These likely contain guiding context or instructions.examples: The sample data we defined earlier.input_variables: These variables ("subject", "extra") are placeholders you can dynamically fill later. For instance, "subject" might be filled with "medical_billing" to guide the model further.example_prompt: This prompt template is the format we want each example row to take in our prompt.4. Creating the Data Generator‚ÄãWith the schema and the prompt ready, the next step is to create the data generator. This object knows how to communicate with the underlying language model to get synthetic data.synthetic_data_generator = create_openai_data_generator( output_schema=MedicalBilling, llm=ChatOpenAI(temperature=1), # You'll need to replace with your actual Language Model instance prompt=prompt_template,)5. Generate Synthetic Data‚ÄãFinally, let's get our synthetic data!synthetic_results = synthetic_data_generator.generate( subject="medical_billing", extra="the name must be chosen at random. Make it something you wouldn't normally choose.", runs=10,)This command asks the generator to produce 10 synthetic medical billing records. The results are stored in synthetic_results. The output will be a list of the MedicalBilling pydantic models.Other implementations‚Äãfrom langchain.chat_models import ChatOpenAIfrom langchain_experimental.synthetic_data import create_data_generation_chain, | Open In Colab | Open In Colab ->: doesn't magically know how to create our data; we need to guide it. We do this by creating a prompt template. This template helps instruct the underlying language model on how to produce synthetic data in the desired format.OPENAI_TEMPLATE = PromptTemplate(input_variables=["example"], template="{example}")prompt_template = FewShotPromptTemplate( prefix=SYNTHETIC_FEW_SHOT_PREFIX, examples=examples, suffix=SYNTHETIC_FEW_SHOT_SUFFIX, input_variables=["subject", "extra"], example_prompt=OPENAI_TEMPLATE,)The FewShotPromptTemplate includes:prefix and suffix: These likely contain guiding context or instructions.examples: The sample data we defined earlier.input_variables: These variables ("subject", "extra") are placeholders you can dynamically fill later. For instance, "subject" might be filled with "medical_billing" to guide the model further.example_prompt: This prompt template is the format we want each example row to take in our prompt.4. Creating the Data Generator‚ÄãWith the schema and the prompt ready, the next step is to create the data generator. This object knows how to communicate with the underlying language model to get synthetic data.synthetic_data_generator = create_openai_data_generator( output_schema=MedicalBilling, llm=ChatOpenAI(temperature=1), # You'll need to replace with your actual Language Model instance prompt=prompt_template,)5. 
Generate Synthetic Data: Finally, let's get our synthetic data!synthetic_results = synthetic_data_generator.generate( subject="medical_billing", extra="the name must be chosen at random. Make it something you wouldn't normally choose.", runs=10,)This command asks the generator to produce 10 synthetic medical billing records. The results are stored in synthetic_results. The output will be a list of the MedicalBilling pydantic models.Other implementations: from langchain.chat_models import ChatOpenAIfrom langchain_experimental.synthetic_data import create_data_generation_chain,
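Since generate returns MedicalBilling models, the results can be inspected or flattened into rows directly. A small sketch, assuming the synthetic_results list produced above:

# synthetic_results is a list of MedicalBilling instances, so fields are directly accessible.
for record in synthetic_results:
    print(record.patient_id, record.patient_name, record.total_charge)

# Convert to plain dicts if you want to build a DataFrame or write CSV/JSON.
rows = [record.dict() for record in synthetic_results]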
1,337 | import create_data_generation_chain, DatasetGenerator# LLMmodel = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)chain = create_data_generation_chain(model)chain({"fields": ["blue", "yellow"], "preferences": {}}) {'fields': ['blue', 'yellow'], 'preferences': {}, 'text': 'The vibrant blue sky contrasted beautifully with the bright yellow sun, creating a stunning display of colors that instantly lifted the spirits of all who gazed upon it.'}chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}}) {'fields': {'colors': ['blue', 'yellow']}, 'preferences': {'style': 'Make it in a style of a weather forecast.'}, 'text': "Good morning! Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}chain({"fields": {"actor": "Tom Hanks", "movies": ["Forrest Gump", "Green Mile"]}, "preferences": None}) {'fields': {'actor': 'Tom Hanks', 'movies': ['Forrest Gump', 'Green Mile']}, 'preferences': None, 'text': 'Tom Hanks, the renowned actor known for his incredible versatility and charm, has graced the silver screen in unforgettable movies such as "Forrest Gump" and "Green Mile".'}chain( { "fields": [ {"actor": "Tom Hanks", "movies": ["Forrest Gump", "Green Mile"]}, {"actor": "Mads Mikkelsen", "movies": ["Hannibal", "Another round"]} ], "preferences": {"minimum_length": 200, "style": "gossip"} }) {'fields': [{'actor': 'Tom Hanks', 'movies': ['Forrest Gump', 'Green Mile']}, {'actor': 'Mads Mikkelsen', 'movies': ['Hannibal', 'Another round']}], 'preferences': {'minimum_length': 200, 'style': 'gossip'}, 'text': 'Did you know that Tom Hanks, the beloved Hollywood actor known for his roles in "Forrest Gump" and "Green Mile", has shared the screen with the talented Mads Mikkelsen, who gained international acclaim for | Open In Colab | Open In Colab ->: import create_data_generation_chain, DatasetGenerator# LLMmodel = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)chain = create_data_generation_chain(model)chain({"fields": ["blue", "yellow"], "preferences": {}}) {'fields': ['blue', 'yellow'], 'preferences': {}, 'text': 'The vibrant blue sky contrasted beautifully with the bright yellow sun, creating a stunning display of colors that instantly lifted the spirits of all who gazed upon it.'}chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}}) {'fields': {'colors': ['blue', 'yellow']}, 'preferences': {'style': 'Make it in a style of a weather forecast.'}, 'text': "Good morning! 
Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}chain({"fields": {"actor": "Tom Hanks", "movies": ["Forrest Gump", "Green Mile"]}, "preferences": None}) {'fields': {'actor': 'Tom Hanks', 'movies': ['Forrest Gump', 'Green Mile']}, 'preferences': None, 'text': 'Tom Hanks, the renowned actor known for his incredible versatility and charm, has graced the silver screen in unforgettable movies such as "Forrest Gump" and "Green Mile".'}chain( { "fields": [ {"actor": "Tom Hanks", "movies": ["Forrest Gump", "Green Mile"]}, {"actor": "Mads Mikkelsen", "movies": ["Hannibal", "Another round"]} ], "preferences": {"minimum_length": 200, "style": "gossip"} }) {'fields': [{'actor': 'Tom Hanks', 'movies': ['Forrest Gump', 'Green Mile']}, {'actor': 'Mads Mikkelsen', 'movies': ['Hannibal', 'Another round']}], 'preferences': {'minimum_length': 200, 'style': 'gossip'}, 'text': 'Did you know that Tom Hanks, the beloved Hollywood actor known for his roles in "Forrest Gump" and "Green Mile", has shared the screen with the talented Mads Mikkelsen, who gained international acclaim for |
1,338 | Mikkelsen, who gained international acclaim for his performances in "Hannibal" and "Another round"? These two incredible actors have brought their exceptional skills and captivating charisma to the big screen, delivering unforgettable performances that have enthralled audiences around the world. Whether it\'s Hanks\' endearing portrayal of Forrest Gump or Mikkelsen\'s chilling depiction of Hannibal Lecter, these movies have solidified their places in cinematic history, leaving a lasting impact on viewers and cementing their status as true icons of the silver screen.'}As we can see created examples are diversified and possess information we wanted them to have. Also, their style reflects the given preferences quite well.Generating exemplary dataset for extraction benchmarking purposes‚Äãinp = [ { 'Actor': 'Tom Hanks', 'Film': [ 'Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can'] }, { 'Actor': 'Tom Hardy', 'Film': [ 'Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk' ] }]generator = DatasetGenerator(model, {"style": "informal", "minimal length": 500})dataset = generator(inp)dataset [{'fields': {'Actor': 'Tom Hanks', 'Film': ['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can']}, 'preferences': {'style': 'informal', 'minimal length': 500}, 'text': 'Tom Hanks, the versatile and charismatic actor, has graced the silver screen in numerous iconic films including the heartwarming and inspirational "Forrest Gump," the intense and gripping war drama "Saving Private Ryan," the emotionally charged and thought-provoking "The Green Mile," the beloved animated classic "Toy Story," and the thrilling and captivating true story adaptation "Catch Me If You Can." With his | Open In Colab | Open In Colab ->: Mikkelsen, who gained international acclaim for his performances in "Hannibal" and "Another round"? These two incredible actors have brought their exceptional skills and captivating charisma to the big screen, delivering unforgettable performances that have enthralled audiences around the world. Whether it\'s Hanks\' endearing portrayal of Forrest Gump or Mikkelsen\'s chilling depiction of Hannibal Lecter, these movies have solidified their places in cinematic history, leaving a lasting impact on viewers and cementing their status as true icons of the silver screen.'}As we can see created examples are diversified and possess information we wanted them to have. 
Also, their style reflects the given preferences quite well.Generating exemplary dataset for extraction benchmarking purposes‚Äãinp = [ { 'Actor': 'Tom Hanks', 'Film': [ 'Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can'] }, { 'Actor': 'Tom Hardy', 'Film': [ 'Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk' ] }]generator = DatasetGenerator(model, {"style": "informal", "minimal length": 500})dataset = generator(inp)dataset [{'fields': {'Actor': 'Tom Hanks', 'Film': ['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can']}, 'preferences': {'style': 'informal', 'minimal length': 500}, 'text': 'Tom Hanks, the versatile and charismatic actor, has graced the silver screen in numerous iconic films including the heartwarming and inspirational "Forrest Gump," the intense and gripping war drama "Saving Private Ryan," the emotionally charged and thought-provoking "The Green Mile," the beloved animated classic "Toy Story," and the thrilling and captivating true story adaptation "Catch Me If You Can." With his |
1,339 | story adaptation "Catch Me If You Can." With his impressive range and genuine talent, Hanks continues to captivate audiences worldwide, leaving an indelible mark on the world of cinema.'}, {'fields': {'Actor': 'Tom Hardy', 'Film': ['Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk']}, 'preferences': {'style': 'informal', 'minimal length': 500}, 'text': 'Tom Hardy, the versatile actor known for his intense performances, has graced the silver screen in numerous iconic films, including "Inception," "The Dark Knight Rises," "Mad Max: Fury Road," "The Revenant," and "Dunkirk." Whether he\'s delving into the depths of the subconscious mind, donning the mask of the infamous Bane, or navigating the treacherous wasteland as the enigmatic Max Rockatansky, Hardy\'s commitment to his craft is always evident. From his breathtaking portrayal of the ruthless Eames in "Inception" to his captivating transformation into the ferocious Max in "Mad Max: Fury Road," Hardy\'s dynamic range and magnetic presence captivate audiences and leave an indelible mark on the world of cinema. In his most physically demanding role to date, he endured the harsh conditions of the freezing wilderness as he portrayed the rugged frontiersman John Fitzgerald in "The Revenant," earning him critical acclaim and an Academy Award nomination. In Christopher Nolan\'s war epic "Dunkirk," Hardy\'s stoic and heroic portrayal of Royal Air Force pilot Farrier showcases his ability to convey deep emotion through nuanced performances. With his chameleon-like ability to inhabit a wide range of characters and his unwavering commitment to his craft, Tom Hardy has undoubtedly solidified his place as one of the most talented and sought-after actors of his generation.'}]Extraction from generated examples‚ÄãOkay, let's see if we can now extract output from this generated data and how it compares with our case!from langchain.llms import | Open In Colab | Open In Colab ->: story adaptation "Catch Me If You Can." With his impressive range and genuine talent, Hanks continues to captivate audiences worldwide, leaving an indelible mark on the world of cinema.'}, {'fields': {'Actor': 'Tom Hardy', 'Film': ['Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk']}, 'preferences': {'style': 'informal', 'minimal length': 500}, 'text': 'Tom Hardy, the versatile actor known for his intense performances, has graced the silver screen in numerous iconic films, including "Inception," "The Dark Knight Rises," "Mad Max: Fury Road," "The Revenant," and "Dunkirk." Whether he\'s delving into the depths of the subconscious mind, donning the mask of the infamous Bane, or navigating the treacherous wasteland as the enigmatic Max Rockatansky, Hardy\'s commitment to his craft is always evident. From his breathtaking portrayal of the ruthless Eames in "Inception" to his captivating transformation into the ferocious Max in "Mad Max: Fury Road," Hardy\'s dynamic range and magnetic presence captivate audiences and leave an indelible mark on the world of cinema. In his most physically demanding role to date, he endured the harsh conditions of the freezing wilderness as he portrayed the rugged frontiersman John Fitzgerald in "The Revenant," earning him critical acclaim and an Academy Award nomination. In Christopher Nolan\'s war epic "Dunkirk," Hardy\'s stoic and heroic portrayal of Royal Air Force pilot Farrier showcases his ability to convey deep emotion through nuanced performances. 
With his chameleon-like ability to inhabit a wide range of characters and his unwavering commitment to his craft, Tom Hardy has undoubtedly solidified his place as one of the most talented and sought-after actors of his generation.'}]Extraction from generated examplesOkay, let's see if we can now extract output from this generated data and how it compares with our case!from langchain.llms import
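Each generated entry shown above is a dict with 'fields', 'preferences' and 'text' keys. A minimal sketch (not from the original notebook) of a quick sanity check over the whole dataset, e.g. that every film title actually appears in the generated text:
for item in dataset:
    text = item["text"]
    # Collect any films from the input fields that never show up in the prose.
    missing = [film for film in item["fields"]["Film"] if film not in text]
    print(item["fields"]["Actor"], "- missing films:", missing or "none")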
1,340 | compares with our case!from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.output_parsers import PydanticOutputParserfrom langchain.chains import create_extraction_chain_pydantic, SimpleSequentialChainfrom pydantic import BaseModel, Fieldfrom typing import Listclass Actor(BaseModel): Actor: str = Field(description="name of an actor") Film: List[str] = Field(description="list of names of films they starred in")Parsers​llm = OpenAI()parser = PydanticOutputParser(pydantic_object=Actor)prompt = PromptTemplate( template="Extract fields from a given text.\n{format_instructions}\n{text}\n", input_variables=["text"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(text=dataset[0]["text"])output = llm(_input.to_string())parsed = parser.parse(output)parsed Actor(Actor='Tom Hanks', Film=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can'])(parsed.Actor == inp[0]["Actor"]) & (parsed.Film == inp[0]["Film"]) TrueExtractors​extractor = create_extraction_chain_pydantic(pydantic_schema=Actor, llm=model)extracted = extractor.run(dataset[1]["text"])extracted [Actor(Actor='Tom Hardy', Film=['Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk'])](extracted[0].Actor == inp[1]["Actor"]) & (extracted[0].Film == inp[1]["Film"]) TruePreviousWeb scrapingNextGraph queryingUse caseQuickstartSetup1. Define Your Data Model2. Sample Data3. Craft a Prompt Template4. Creating the Data Generator5. Generate Synthetic DataOther implementationsGenerating exemplary dataset for extraction benchmarking purposesExtraction from generated examplesParsersExtractorsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Open In Colab | Open In Colab ->: compares with our case!from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.output_parsers import PydanticOutputParserfrom langchain.chains import create_extraction_chain_pydantic, SimpleSequentialChainfrom pydantic import BaseModel, Fieldfrom typing import Listclass Actor(BaseModel): Actor: str = Field(description="name of an actor") Film: List[str] = Field(description="list of names of films they starred in")Parsers​llm = OpenAI()parser = PydanticOutputParser(pydantic_object=Actor)prompt = PromptTemplate( template="Extract fields from a given text.\n{format_instructions}\n{text}\n", input_variables=["text"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(text=dataset[0]["text"])output = llm(_input.to_string())parsed = parser.parse(output)parsed Actor(Actor='Tom Hanks', Film=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can'])(parsed.Actor == inp[0]["Actor"]) & (parsed.Film == inp[0]["Film"]) TrueExtractors​extractor = create_extraction_chain_pydantic(pydantic_schema=Actor, llm=model)extracted = extractor.run(dataset[1]["text"])extracted [Actor(Actor='Tom Hardy', Film=['Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk'])](extracted[0].Actor == inp[1]["Actor"]) & (extracted[0].Film == inp[1]["Film"]) TruePreviousWeb scrapingNextGraph queryingUse caseQuickstartSetup1. Define Your Data Model2. Sample Data3. Craft a Prompt Template4. Creating the Data Generator5. 
Generate Synthetic DataOther implementationsGenerating exemplary dataset for extraction benchmarking purposesExtraction from generated examplesParsersExtractorsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
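The parser and extractor checks above are run on single examples. A minimal sketch (assuming the extractor, dataset and inp objects defined above) that repeats the same comparison over every generated example:
for i, example in enumerate(dataset):
    extracted = extractor.run(example["text"])
    # Compare the recovered pydantic fields against the original input record.
    ok = (extracted[0].Actor == inp[i]["Actor"]) and (extracted[0].Film == inp[i]["Film"])
    print(f"example {i}: fields recovered correctly -> {ok}")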
1,341 | Web scraping | 🦜️🔗 Langchain | Open In Collab | Open In Collab ->: Web scraping | 🦜️🔗 Langchain |
1,342 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingWeb scrapingOn this pageWeb scrapingUse case‚ÄãWeb research is one of the killer LLM applications:Users have highlighted it as one of his top desired AI tools. OSS repos like gpt-researcher are growing in popularity. Overview‚ÄãGathering content from the web has a few components:Search: Query to url (e.g., using GoogleSearchAPIWrapper).Loading: Url to HTML (e.g., using AsyncHtmlLoader, AsyncChromiumLoader, etc).Transforming: HTML to formatted text (e.g., using HTML2Text or Beautiful Soup).Quickstart‚Äãpip install -q openai langchain playwright beautifulsoup4playwright install# Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()Scraping HTML content using a headless instance of Chromium.The async nature of the scraping process is handled using Python's asyncio library.The actual interaction with the web pages is handled by Playwright.from langchain.document_loaders import AsyncChromiumLoaderfrom langchain.document_transformers import BeautifulSoupTransformer# Load HTMLloader = AsyncChromiumLoader(["https://www.wsj.com"])html = loader.load()Scrape text content tags such as <p>, <li>, <div>, and <a> tags from the HTML content:<p>: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.<li>: The list item tag. It is used within ordered (<ol>) and unordered (<ul>) lists to define individual items within the list.<div>: The division tag. It is a block-level element used to group other inline or block-level elements.<a>: The anchor tag. It is used to define hyperlinks.<span>: an inline container used to mark up a part of a text, or a part of a document. For many news websites (e.g., WSJ, | Open In Collab | Open In Collab ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingWeb scrapingOn this pageWeb scrapingUse case‚ÄãWeb research is one of the killer LLM applications:Users have highlighted it as one of his top desired AI tools. OSS repos like gpt-researcher are growing in popularity. Overview‚ÄãGathering content from the web has a few components:Search: Query to url (e.g., using GoogleSearchAPIWrapper).Loading: Url to HTML (e.g., using AsyncHtmlLoader, AsyncChromiumLoader, etc).Transforming: HTML to formatted text (e.g., using HTML2Text or Beautiful Soup).Quickstart‚Äãpip install -q openai langchain playwright beautifulsoup4playwright install# Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()Scraping HTML content using a headless instance of Chromium.The async nature of the scraping process is handled using Python's asyncio library.The actual interaction with the web pages is handled by Playwright.from langchain.document_loaders import AsyncChromiumLoaderfrom langchain.document_transformers import BeautifulSoupTransformer# Load HTMLloader = AsyncChromiumLoader(["https://www.wsj.com"])html = loader.load()Scrape text content tags such as <p>, <li>, <div>, and <a> tags from the HTML content:<p>: The paragraph tag. 
It defines a paragraph in HTML and is used to group together related sentences and/or phrases.<li>: The list item tag. It is used within ordered (<ol>) and unordered (<ul>) lists to define individual items within the list.<div>: The division tag. It is a block-level element used to group other inline or block-level elements.<a>: The anchor tag. It is used to define hyperlinks.<span>: an inline container used to mark up a part of a text, or a part of a document. For many news websites (e.g., WSJ, |
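The quickstart above extracts only <span> tags, but the same transformer call accepts any of the tag types just listed. A minimal sketch (assuming the html documents loaded above with AsyncChromiumLoader):
from langchain.document_transformers import BeautifulSoupTransformer

bs_transformer = BeautifulSoupTransformer()
# Pull paragraph, list-item, anchor and span text instead of spans only.
docs_transformed = bs_transformer.transform_documents(
    html, tags_to_extract=["p", "li", "a", "span"]
)
print(docs_transformed[0].page_content[:300])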
1,343 | of a document. For many news websites (e.g., WSJ, CNN), headlines and summaries are all in <span> tags.# Transformbs_transformer = BeautifulSoupTransformer()docs_transformed = bs_transformer.transform_documents(html,tags_to_extract=["span"])# Resultdocs_transformed[0].page_content[0:500] 'English EditionEnglish中文 (Chinese)日本語 (Japanese) More Other Products from WSJBuy Side from WSJWSJ ShopWSJ Wine Other Products from WSJ Search Quotes and Companies Search Quotes and Companies 0.15% 0.03% 0.12% -0.42% 4.102% -0.69% -0.25% -0.15% -1.82% 0.24% 0.19% -1.10% About Evan His Family Reflects His Reporting How You Can Help Write a Message Life in Detention Latest News Get Email Updates Four Americans Released From Iranian Prison The Americans will remain under house arrest until they are 'These Documents now are staged for downstream usage in various LLM apps, as discussed below.Loader​AsyncHtmlLoader​The AsyncHtmlLoader uses the aiohttp library to make asynchronous HTTP requests, suitable for simpler and lightweight scraping.AsyncChromiumLoader​The AsyncChromiumLoader uses Playwright to launch a Chromium instance, which can handle JavaScript rendering and more complex web interactions.Chromium is one of the browsers supported by Playwright, a library used to control browser automation. Headless mode means that the browser is running without a graphical user interface, which is commonly used for web scraping.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com","https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load()Transformer​HTML2Text​HTML2Text provides a straightforward conversion of HTML content into plain text (with markdown-like formatting) without any specific tag manipulation. It's best suited for scenarios where the goal is to extract human-readable text without needing to manipulate specific HTML elements.Beautiful Soup​Beautiful Soup offers more fine-grained | Open In Collab | Open In Collab ->: of a document. For many news websites (e.g., WSJ, CNN), headlines and summaries are all in <span> tags.# Transformbs_transformer = BeautifulSoupTransformer()docs_transformed = bs_transformer.transform_documents(html,tags_to_extract=["span"])# Resultdocs_transformed[0].page_content[0:500] 'English EditionEnglish中文 (Chinese)日本語 (Japanese) More Other Products from WSJBuy Side from WSJWSJ ShopWSJ Wine Other Products from WSJ Search Quotes and Companies Search Quotes and Companies 0.15% 0.03% 0.12% -0.42% 4.102% -0.69% -0.25% -0.15% -1.82% 0.24% 0.19% -1.10% About Evan His Family Reflects His Reporting How You Can Help Write a Message Life in Detention Latest News Get Email Updates Four Americans Released From Iranian Prison The Americans will remain under house arrest until they are 'These Documents now are staged for downstream usage in various LLM apps, as discussed below.Loader​AsyncHtmlLoader​The AsyncHtmlLoader uses the aiohttp library to make asynchronous HTTP requests, suitable for simpler and lightweight scraping.AsyncChromiumLoader​The AsyncChromiumLoader uses Playwright to launch a Chromium instance, which can handle JavaScript rendering and more complex web interactions.Chromium is one of the browsers supported by Playwright, a library used to control browser automation. 
Headless mode means that the browser is running without a graphical user interface, which is commonly used for web scraping.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com","https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load()Transformer​HTML2Text​HTML2Text provides a straightforward conversion of HTML content into plain text (with markdown-like formatting) without any specific tag manipulation. It's best suited for scenarios where the goal is to extract human-readable text without needing to manipulate specific HTML elements.Beautiful Soup​Beautiful Soup offers more fine-grained |
1,344 | Soup​Beautiful Soup offers more fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|#############################################################################################################| 2/2 [00:00<00:00, 7.01it/s]from langchain.document_transformers import Html2TextTransformerhtml2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[0:500] "Skip to main content Skip to navigation\n\n<\n\n>\n\nMenu\n\n## ESPN\n\n * Search\n\n * * scores\n\n * NFL\n * MLB\n * NBA\n * NHL\n * Soccer\n * NCAAF\n * …\n\n * Women's World Cup\n * LLWS\n * NCAAM\n * NCAAW\n * Sports Betting\n * Boxing\n * CFL\n * NCAA\n * Cricket\n * F1\n * Golf\n * Horse\n * MMA\n * NASCAR\n * NBA G League\n * Olympic Sports\n * PLL\n * Racing\n * RN BB\n * RN FB\n * Rugby\n * Tennis\n * WNBA\n * WWE\n * X Games\n * XFL\n\n * More"Scraping with extraction​LLM with function calling​Web scraping is challenging for many reasons. One of them is the changing nature of modern websites' layouts and content, which requires modifying scraping scripts to accommodate the changes.Using Function (e.g., OpenAI) with an extraction chain, we avoid having to change your code constantly when websites change. We're using gpt-3.5-turbo-0613 to guarantee access to OpenAI Functions feature (although this might be available to everyone by time of writing). We're also keeping temperature at 0 to keep randomness of the LLM down.from langchain.chat_models import | Open In Collab | Open In Collab ->: Soup​Beautiful Soup offers more fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|#############################################################################################################| 2/2 [00:00<00:00, 7.01it/s]from langchain.document_transformers import Html2TextTransformerhtml2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[0:500] "Skip to main content Skip to navigation\n\n<\n\n>\n\nMenu\n\n## ESPN\n\n * Search\n\n * * scores\n\n * NFL\n * MLB\n * NBA\n * NHL\n * Soccer\n * NCAAF\n * …\n\n * Women's World Cup\n * LLWS\n * NCAAM\n * NCAAW\n * Sports Betting\n * Boxing\n * CFL\n * NCAA\n * Cricket\n * F1\n * Golf\n * Horse\n * MMA\n * NASCAR\n * NBA G League\n * Olympic Sports\n * PLL\n * Racing\n * RN BB\n * RN FB\n * Rugby\n * Tennis\n * WNBA\n * WWE\n * X Games\n * XFL\n\n * More"Scraping with extraction​LLM with function calling​Web scraping is challenging for many reasons. 
One of them is the changing nature of modern websites' layouts and content, which requires modifying scraping scripts to accommodate the changes.Using functions (e.g., OpenAI functions) with an extraction chain, we avoid having to change our code constantly when websites change. We're using gpt-3.5-turbo-0613 to guarantee access to the OpenAI Functions feature (although this might be available to everyone by the time of writing). We're also keeping the temperature at 0 to keep the randomness of the LLM down.from langchain.chat_models import
1,345 | of the LLM down.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")Define a schemaNext, you define a schema to specify what kind of data you want to extract. Here, the key names matter as they tell the LLM what kind of information you want. So, be as detailed as possible.
In this example, we want to scrape only news article's name and summary from The Wall Street Journal website.from langchain.chains import create_extraction_chainschema = { "properties": { "news_article_title": {"type": "string"}, "news_article_summary": {"type": "string"}, }, "required": ["news_article_title", "news_article_summary"],}def extract(content: str, schema: dict): return create_extraction_chain(schema=schema, llm=llm).run(content)Run the web scraper w/ BeautifulSoup‚ÄãAs shown above, we'll be using BeautifulSoupTransformer.import pprintfrom langchain.text_splitter import RecursiveCharacterTextSplitterdef scrape_with_playwright(urls, schema): loader = AsyncChromiumLoader(urls) docs = loader.load() bs_transformer = BeautifulSoupTransformer() docs_transformed = bs_transformer.transform_documents(docs,tags_to_extract=["span"]) print("Extracting content with LLM") # Grab the first 1000 tokens of the site splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(chunk_size=1000, chunk_overlap=0) splits = splitter.split_documents(docs_transformed) # Process the first split extracted_content = extract( schema=schema, content=splits[0].page_content ) pprint.pprint(extracted_content) return extracted_contenturls = ["https://www.wsj.com"]extracted_content = scrape_with_playwright(urls, schema=schema) Extracting content with LLM [{'news_article_summary': 'The Americans will remain under house arrest until ' 'they are allowed to return to the U.S. in |
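scrape_with_playwright above deliberately processes only the first 1000-token split. A hypothetical variation (not from the original guide; it reuses the extract helper and schema defined above) that runs the extraction chain over every split and merges the results, at the cost of one LLM call per chunk:
def scrape_all_splits(urls, schema):
    loader = AsyncChromiumLoader(urls)
    docs = loader.load()
    docs_transformed = BeautifulSoupTransformer().transform_documents(
        docs, tags_to_extract=["span"]
    )
    splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
        chunk_size=1000, chunk_overlap=0
    )
    splits = splitter.split_documents(docs_transformed)
    extracted = []
    for split in splits:  # one extraction call per chunk
        extracted.extend(extract(schema=schema, content=split.page_content))
    return extracted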
1,346 | 'they are allowed to return to the U.S. in coming ' 'weeks, following a monthslong diplomatic push by ' 'the Biden administration.', 'news_article_title': 'Four Americans Released From Iranian Prison'}, {'news_article_summary': 'Price pressures continued cooling last month, with ' 'the CPI rising a mild 0.2% from June, likely ' 'deterring the Federal Reserve from raising interest ' 'rates at its September meeting.', 'news_article_title': 'Cooler July Inflation Opens Door to Fed Pause on ' 'Rates'}, {'news_article_summary': 'The company has decided to eliminate 27 of its 30 ' 'clothing labels, such as Lark & Ro and Goodthreads, ' 'as it works to fend off antitrust scrutiny and cut ' 'costs.', 'news_article_title': 'Amazon Cuts Dozens of House Brands'}, {'news_article_summary': 'President Biden’s order comes on top of a slowing ' 'Chinese economy, Covid lockdowns and rising ' 'tensions between the two powers.', 'news_article_title': 'U.S. Investment Ban on China Poised to Deepen Divide'}, {'news_article_summary': 'The proposed trial date in the ' 'election-interference case comes on the same day as ' 'the former president’s not guilty plea on ' 'additional Mar-a-Lago charges.', 'news_article_title': 'Trump Should Be Tried in January, Prosecutors Tell ' 'Judge'}, {'news_article_summary': 'The CEO who started in June says the platform has ' '“an entirely different road map” for the future.', 'news_article_title': 'Yaccarino Says X Is Watching Threads but | Open In Collab | Open In Collab ->: 'they are allowed to return to the U.S. in coming ' 'weeks, following a monthslong diplomatic push by ' 'the Biden administration.', 'news_article_title': 'Four Americans Released From Iranian Prison'}, {'news_article_summary': 'Price pressures continued cooling last month, with ' 'the CPI rising a mild 0.2% from June, likely ' 'deterring the Federal Reserve from raising interest ' 'rates at its September meeting.', 'news_article_title': 'Cooler July Inflation Opens Door to Fed Pause on ' 'Rates'}, {'news_article_summary': 'The company has decided to eliminate 27 of its 30 ' 'clothing labels, such as Lark & Ro and Goodthreads, ' 'as it works to fend off antitrust scrutiny and cut ' 'costs.', 'news_article_title': 'Amazon Cuts Dozens of House Brands'}, {'news_article_summary': 'President Biden’s order comes on top of a slowing ' 'Chinese economy, Covid lockdowns and rising ' 'tensions between the two powers.', 'news_article_title': 'U.S. Investment Ban on China Poised to Deepen Divide'}, {'news_article_summary': 'The proposed trial date in the ' 'election-interference case comes on the same day as ' 'the former president’s not guilty plea on ' 'additional Mar-a-Lago charges.', 'news_article_title': 'Trump Should Be Tried in January, Prosecutors Tell ' 'Judge'}, {'news_article_summary': 'The CEO who started in June says the platform has ' '“an entirely different road map” for the future.', 'news_article_title': 'Yaccarino Says X Is Watching Threads but |
1,347 | 'Yaccarino Says X Is Watching Threads but Has Its Own ' 'Vision'}, {'news_article_summary': 'Students foot the bill for flagship state ' 'universities that pour money into new buildings and ' 'programs with little pushback.', 'news_article_title': 'Colleges Spend Like There’s No Tomorrow. ‘These ' 'Places Are Just Devouring Money.’'}, {'news_article_summary': 'Wildfires fanned by hurricane winds have torn ' 'through parts of the Hawaiian island, devastating ' 'the popular tourist town of Lahaina.', 'news_article_title': 'Maui Wildfires Leave at Least 36 Dead'}, {'news_article_summary': 'After its large armored push stalled, Kyiv has ' 'fallen back on the kind of tactics that brought it ' 'success earlier in the war.', 'news_article_title': 'Ukraine Uses Small-Unit Tactics to Retake Captured ' 'Territory'}, {'news_article_summary': 'President Guillermo Lasso says the Aug. 20 election ' 'will proceed, as the Andean country grapples with ' 'rising drug gang violence.', 'news_article_title': 'Ecuador Declares State of Emergency After ' 'Presidential Hopeful Killed'}, {'news_article_summary': 'This year’s hurricane season, which typically runs ' 'from June to the end of November, has been ' 'difficult to predict, climate scientists said.', 'news_article_title': 'Atlantic Hurricane Season Prediction Increased to ' '‘Above Normal,’ NOAA Says'}, {'news_article_summary': 'The NFL is raising the price of its NFL+ streaming ' 'packages as it adds the NFL Network and | Open In Collab | Open In Collab ->: 'Yaccarino Says X Is Watching Threads but Has Its Own ' 'Vision'}, {'news_article_summary': 'Students foot the bill for flagship state ' 'universities that pour money into new buildings and ' 'programs with little pushback.', 'news_article_title': 'Colleges Spend Like There’s No Tomorrow. ‘These ' 'Places Are Just Devouring Money.’'}, {'news_article_summary': 'Wildfires fanned by hurricane winds have torn ' 'through parts of the Hawaiian island, devastating ' 'the popular tourist town of Lahaina.', 'news_article_title': 'Maui Wildfires Leave at Least 36 Dead'}, {'news_article_summary': 'After its large armored push stalled, Kyiv has ' 'fallen back on the kind of tactics that brought it ' 'success earlier in the war.', 'news_article_title': 'Ukraine Uses Small-Unit Tactics to Retake Captured ' 'Territory'}, {'news_article_summary': 'President Guillermo Lasso says the Aug. 20 election ' 'will proceed, as the Andean country grapples with ' 'rising drug gang violence.', 'news_article_title': 'Ecuador Declares State of Emergency After ' 'Presidential Hopeful Killed'}, {'news_article_summary': 'This year’s hurricane season, which typically runs ' 'from June to the end of November, has been ' 'difficult to predict, climate scientists said.', 'news_article_title': 'Atlantic Hurricane Season Prediction Increased to ' '‘Above Normal,’ NOAA Says'}, {'news_article_summary': 'The NFL is raising the price of its NFL+ streaming ' 'packages as it adds the NFL Network and |
1,348 | 'packages as it adds the NFL Network and RedZone.', 'news_article_title': 'NFL to Raise Price of NFL+ Streaming Packages as It ' 'Adds NFL Network, RedZone'}, {'news_article_summary': 'Russia is planning a moon mission as part of the ' 'new space race.', 'news_article_title': 'Russia’s Moon Mission and the New Space Race'}, {'news_article_summary': 'Tapestry’s $8.5 billion acquisition of Capri would ' 'create a conglomerate with more than $12 billion in ' 'annual sales, but it would still lack the ' 'high-wattage labels and diversity that have fueled ' 'LVMH’s success.', 'news_article_title': "Why the Coach and Kors Marriage Doesn't Scare LVMH"}, {'news_article_summary': 'The Supreme Court has blocked Purdue Pharma’s $6 ' 'billion Sackler opioid settlement.', 'news_article_title': 'Supreme Court Blocks Purdue Pharma’s $6 Billion ' 'Sackler Opioid Settlement'}, {'news_article_summary': 'The Social Security COLA is expected to rise in ' '2024, but not by a lot.', 'news_article_title': 'Social Security COLA Expected to Rise in 2024, but ' 'Not by a Lot'}]We can compare the headlines scraped to the page:Looking at the LangSmith trace, we can see what is going on under the hood:It's following what is explained in the extraction.We call the information_extraction function on the input text.It will attempt to populate the provided schema from the url content.Research automation​Related to scraping, we may want to answer specific questions using searched content.We can automate the process of web research using a retriever, such as the WebResearchRetriever (docs).Copy requirements from here:pip install -r requirements.txtSet | Open In Collab | Open In Collab ->: 'packages as it adds the NFL Network and RedZone.', 'news_article_title': 'NFL to Raise Price of NFL+ Streaming Packages as It ' 'Adds NFL Network, RedZone'}, {'news_article_summary': 'Russia is planning a moon mission as part of the ' 'new space race.', 'news_article_title': 'Russia’s Moon Mission and the New Space Race'}, {'news_article_summary': 'Tapestry’s $8.5 billion acquisition of Capri would ' 'create a conglomerate with more than $12 billion in ' 'annual sales, but it would still lack the ' 'high-wattage labels and diversity that have fueled ' 'LVMH’s success.', 'news_article_title': "Why the Coach and Kors Marriage Doesn't Scare LVMH"}, {'news_article_summary': 'The Supreme Court has blocked Purdue Pharma’s $6 ' 'billion Sackler opioid settlement.', 'news_article_title': 'Supreme Court Blocks Purdue Pharma’s $6 Billion ' 'Sackler Opioid Settlement'}, {'news_article_summary': 'The Social Security COLA is expected to rise in ' '2024, but not by a lot.', 'news_article_title': 'Social Security COLA Expected to Rise in 2024, but ' 'Not by a Lot'}]We can compare the headlines scraped to the page:Looking at the LangSmith trace, we can see what is going on under the hood:It's following what is explained in the extraction.We call the information_extraction function on the input text.It will attempt to populate the provided schema from the url content.Research automation​Related to scraping, we may want to answer specific questions using searched content.We can automate the process of web research using a retriever, such as the WebResearchRetriever (docs).Copy requirements from here:pip install -r requirements.txtSet |
1,349 | from here:pip install -r requirements.txtSet GOOGLE_CSE_ID and GOOGLE_API_KEY.from langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models.openai import ChatOpenAIfrom langchain.utilities import GoogleSearchAPIWrapperfrom langchain.retrievers.web_research import WebResearchRetriever# Vectorstorevectorstore = Chroma(embedding_function=OpenAIEmbeddings(),persist_directory="./chroma_db_oai")# LLMllm = ChatOpenAI(temperature=0)# Search search = GoogleSearchAPIWrapper()Initialize retriever with the above tools to:Use an LLM to generate multiple relevant search queries (one LLM call)Execute a search for each queryChoose the top K links per query (multiple search calls in parallel)Load the information from all chosen links (scrape pages in parallel)Index those documents into a vectorstoreFind the most relevant documents for each original generated search query# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore, llm=llm, search=search)# Runimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)from langchain.chains import RetrievalQAWithSourcesChainuser_input = "How do LLM Powered Autonomous Agents work?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)result = qa_chain({"question": user_input})result INFO:langchain.retrievers.web_research:Generating questions for Google Search ... INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'How do LLM Powered Autonomous Agents work?', 'text': LineList(lines=['1. What is the functioning principle of LLM Powered Autonomous Agents?\n', '2. How do LLM Powered Autonomous Agents operate?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. What is the functioning principle of LLM Powered Autonomous Agents?\n', '2. How do LLM Powered Autonomous Agents operate?\n'] | Open In Collab | Open In Collab ->: from here:pip install -r requirements.txtSet GOOGLE_CSE_ID and GOOGLE_API_KEY.from langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models.openai import ChatOpenAIfrom langchain.utilities import GoogleSearchAPIWrapperfrom langchain.retrievers.web_research import WebResearchRetriever# Vectorstorevectorstore = Chroma(embedding_function=OpenAIEmbeddings(),persist_directory="./chroma_db_oai")# LLMllm = ChatOpenAI(temperature=0)# Search search = GoogleSearchAPIWrapper()Initialize retriever with the above tools to:Use an LLM to generate multiple relevant search queries (one LLM call)Execute a search for each queryChoose the top K links per query (multiple search calls in parallel)Load the information from all chosen links (scrape pages in parallel)Index those documents into a vectorstoreFind the most relevant documents for each original generated search query# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore, llm=llm, search=search)# Runimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)from langchain.chains import RetrievalQAWithSourcesChainuser_input = "How do LLM Powered Autonomous Agents work?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)result = qa_chain({"question": user_input})result INFO:langchain.retrievers.web_research:Generating questions for Google Search ... 
INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'How do LLM Powered Autonomous Agents work?', 'text': LineList(lines=['1. What is the functioning principle of LLM Powered Autonomous Agents?\n', '2. How do LLM Powered Autonomous Agents operate?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. What is the functioning principle of LLM Powered Autonomous Agents?\n', '2. How do LLM Powered Autonomous Agents operate?\n'] |
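The retriever initialized above can also be used on its own, outside of RetrievalQAWithSourcesChain. A minimal sketch (assuming web_research_retriever from the code above):
docs = web_research_retriever.get_relevant_documents(
    "How do LLM Powered Autonomous Agents work?"
)
# Show where each of the top retrieved splits came from.
for d in docs[:3]:
    print(d.metadata.get("source"), "->", d.page_content[:120])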
1,350 | do LLM Powered Autonomous Agents operate?\n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': 'LLM Powered Autonomous Agents | Hacker News', 'link': 'https://news.ycombinator.com/item?id=36488871', 'snippet': 'Jun 26, 2023 ... Exactly. A temperature of 0 means you always pick the highest probability token (i.e. the "max" function), while a temperature of 1 means you\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2) by\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: [] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls... {'question': 'How do LLM Powered Autonomous Agents work?', 'answer': "LLM-powered autonomous agents work by using LLM as the agent's brain, complemented by several key components such as planning, memory, and tool use. In terms of planning, the agent breaks down large tasks into smaller subgoals and can reflect and refine its actions based on past experiences. Memory is divided into short-term memory, which is used for in-context learning, and long-term memory, which allows the agent to retain and recall information over extended periods. Tool use involves the agent calling external APIs for additional information. These agents have been used in various applications, including scientific discovery and generative agents simulation.", 'sources': ''}Going deeper‚ÄãHere's a app that wraps this retriever with a lighweight UI.Question answering over a | Open In Collab | Open In Collab ->: do LLM Powered Autonomous Agents operate?\n'] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': 'LLM Powered Autonomous Agents | Hacker News', 'link': 'https://news.ycombinator.com/item?id=36488871', 'snippet': 'Jun 26, 2023 ... Exactly. A temperature of 0 means you always pick the highest probability token (i.e. the "max" function), while a temperature of 1 means you\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevant urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2) by\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: [] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls... {'question': 'How do LLM Powered Autonomous Agents work?', 'answer': "LLM-powered autonomous agents work by using LLM as the agent's brain, complemented by several key components such as planning, memory, and tool use. In terms of planning, the agent breaks down large tasks into smaller subgoals and can reflect and refine its actions based on past experiences. 
Memory is divided into short-term memory, which is used for in-context learning, and long-term memory, which allows the agent to retain and recall information over extended periods. Tool use involves the agent calling external APIs for additional information. These agents have been used in various applications, including scientific discovery and generative agents simulation.", 'sources': ''}Going deeperHere's an app that wraps this retriever with a lightweight UI.Question answering over a
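The chain returns a plain dict, so the pieces shown above can be pulled out individually; a minimal usage sketch:
print(result["question"])
print(result["answer"])
print(result["sources"] or "(no sources returned)")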
1,351 | with a lightweight UI.Question answering over a websiteTo answer questions over a specific website, you can use Apify's Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, | Open In Collab | Open In Collab ->: with a lightweight UI.Question answering over a websiteTo answer questions over a specific website, you can use Apify's Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs,
1,352 | and extract text content from the web pages.In the example below, we will deeply crawl the Python documentation of LangChain's Chat LLM models and answer a question over it.First, install the requirements
pip install apify-client openai langchain chromadb tiktokenNext, set OPENAI_API_KEY and APIFY_API_TOKEN in your environment variables.The full code follows:from langchain.docstore.document import Documentfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.utilities import ApifyWrapperapify = ApifyWrapper()# Call the Actor to obtain text from the crawled webpagesloader = apify.call_actor( actor_id="apify/website-content-crawler", run_input={"startUrls": [{"url": "https://python.langchain.com/docs/integrations/chat/"}]}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)# Create a vector store based on the crawled dataindex = VectorstoreIndexCreator().from_loaders([loader])# Query the vector storequery = "Are any OpenAI chat models integrated in LangChain?"result = index.query(query)print(result) Yes, LangChain offers integration with OpenAI chat models. You can use the ChatOpenAI class to interact with OpenAI models.PreviousTaggingNextSynthetic data generationUse caseOverviewQuickstartLoaderAsyncHtmlLoaderAsyncChromiumLoaderTransformerHTML2TextBeautiful SoupScraping with extractionLLM with function callingDefine a schemaRun the web scraper w/ BeautifulSoupResearch automationGoing deeperQuestion answering over a websiteCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Open In Collab | Open In Collab ->: and extract text content from the web pages.In the example below, we will deeply crawl the Python documentation of LangChain's Chat LLM models and answer a question over it.First, install the requirements
pip install apify-client openai langchain chromadb tiktokenNext, set OPENAI_API_KEY and APIFY_API_TOKEN in your environment variables.The full code follows:from langchain.docstore.document import Documentfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.utilities import ApifyWrapperapify = ApifyWrapper()# Call the Actor to obtain text from the crawled webpagesloader = apify.call_actor( actor_id="apify/website-content-crawler", run_input={"startUrls": [{"url": "https://python.langchain.com/docs/integrations/chat/"}]}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)# Create a vector store based on the crawled dataindex = VectorstoreIndexCreator().from_loaders([loader])# Query the vector storequery = "Are any OpenAI chat models integrated in LangChain?"result = index.query(query)print(result) Yes, LangChain offers integration with OpenAI chat models. You can use the ChatOpenAI class to interact with OpenAI models.PreviousTaggingNextSynthetic data generationUse caseOverviewQuickstartLoaderAsyncHtmlLoaderAsyncChromiumLoaderTransformerHTML2TextBeautiful SoupScraping with extractionLLM with function callingDefine a schemaRun the web scraper w/ BeautifulSoupResearch automationGoing deeperQuestion answering over a websiteCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
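Beyond index.query(), the index wrapper built above can also report which crawled pages an answer came from, which is handy for spot-checking the crawl. A minimal sketch (query_with_sources is assumed to be available in your LangChain version):
result = index.query_with_sources("Are any OpenAI chat models integrated in LangChain?")
print(result["answer"])
print(result["sources"])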
1,353 | Interacting with APIs | 🦜️🔗 Langchain | Open In Colab | Open In Colab ->: Interacting with APIs | 🦜️🔗 Langchain |
1,354 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingInteracting with APIsOn this pageInteracting with APIsUse case‚ÄãSuppose you want an LLM to interact with external APIs.This can be very useful for retrieving context for the LLM to utilize.And, more generally, it allows us to interact with APIs using natural language! Overview‚ÄãThere are two primary ways to interface LLMs with external APIs:Functions: For example, OpenAI functions is one popular means of doing this.LLM-generated interface: Use an LLM with access to API documentation to create an interface.Quickstart‚ÄãMany APIs are already compatible with OpenAI function calling.For example, Klarna has a YAML file that describes its API and allows OpenAI to interact with it:https://www.klarna.com/us/shopping/public/openai/v0/api-docs/Other options include:Speak for translationXKCD for comicsWe can supply the specification to get_openapi_chain directly in order to query the API with OpenAI functions:pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chains.openai_functions.openapi import get_openapi_chainchain = get_openapi_chain("https://www.klarna.com/us/shopping/public/openai/v0/api-docs/")chain("What are some options for a men's large blue button down shirt") Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support. {'query': "What are some options for a men's large blue button down shirt", 'response': {'products': [{'name': 'Cubavera Four Pocket Guayabera Shirt', 'url': | Open In Colab | Open In Colab ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingInteracting with APIsOn this pageInteracting with APIsUse case‚ÄãSuppose you want an LLM to interact with external APIs.This can be very useful for retrieving context for the LLM to utilize.And, more generally, it allows us to interact with APIs using natural language! Overview‚ÄãThere are two primary ways to interface LLMs with external APIs:Functions: For example, OpenAI functions is one popular means of doing this.LLM-generated interface: Use an LLM with access to API documentation to create an interface.Quickstart‚ÄãMany APIs are already compatible with OpenAI function calling.For example, Klarna has a YAML file that describes its API and allows OpenAI to interact with it:https://www.klarna.com/us/shopping/public/openai/v0/api-docs/Other options include:Speak for translationXKCD for comicsWe can supply the specification to get_openapi_chain directly in order to query the API with OpenAI functions:pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chains.openai_functions.openapi import get_openapi_chainchain = get_openapi_chain("https://www.klarna.com/us/shopping/public/openai/v0/api-docs/")chain("What are some options for a men's large blue button down shirt") Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. 
Convert your OpenAPI spec to 3.1.* spec for better support. {'query': "What are some options for a men's large blue button down shirt", 'response': {'products': [{'name': 'Cubavera Four Pocket Guayabera Shirt', 'url': |
1,355 | Four Pocket Guayabera Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202055522/Clothing/Cubavera-Four-Pocket-Guayabera-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$13.50', 'attributes': ['Material:Polyester,Cotton', 'Target Group:Man', 'Color:Red,White,Blue,Black', 'Properties:Pockets', 'Pattern:Solid Color', 'Size (Small-Large):S,XL,L,M,XXL']}, {'name': 'Polo Ralph Lauren Plaid Short Sleeve Button-down Oxford Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3207163438/Clothing/Polo-Ralph-Lauren-Plaid-Short-Sleeve-Button-down-Oxford-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$52.20', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Red,Blue,Multicolor', 'Size (Small-Large):S,XL,L,M,XXL']}, {'name': 'Brixton Bowery Flannel Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202331096/Clothing/Brixton-Bowery-Flannel-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$27.48', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Gray,Blue,Black,Orange', 'Properties:Pockets', 'Pattern:Checkered', 'Size (Small-Large):XL,3XL,4XL,5XL,L,M,XXL']}, {'name': 'Vineyard Vines Gingham On-The-Go brrr Classic Fit Shirt Crystal', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201938510/Clothing/Vineyard-Vines-Gingham-On-The-Go-brrr-Classic-Fit-Shirt-Crystal/?utm_source=openai&ref-site=openai_plugin', 'price': '$80.64', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Blue', 'Size (Small-Large):XL,XS,L,M']}, {'name': "Carhartt Men's Loose Fit Midweight Short Sleeve Plaid Shirt", 'url': | Open In Colab | Open In Colab ->: Four Pocket Guayabera Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202055522/Clothing/Cubavera-Four-Pocket-Guayabera-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$13.50', 'attributes': ['Material:Polyester,Cotton', 'Target Group:Man', 'Color:Red,White,Blue,Black', 'Properties:Pockets', 'Pattern:Solid Color', 'Size (Small-Large):S,XL,L,M,XXL']}, {'name': 'Polo Ralph Lauren Plaid Short Sleeve Button-down Oxford Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3207163438/Clothing/Polo-Ralph-Lauren-Plaid-Short-Sleeve-Button-down-Oxford-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$52.20', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Red,Blue,Multicolor', 'Size (Small-Large):S,XL,L,M,XXL']}, {'name': 'Brixton Bowery Flannel Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202331096/Clothing/Brixton-Bowery-Flannel-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$27.48', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Gray,Blue,Black,Orange', 'Properties:Pockets', 'Pattern:Checkered', 'Size (Small-Large):XL,3XL,4XL,5XL,L,M,XXL']}, {'name': 'Vineyard Vines Gingham On-The-Go brrr Classic Fit Shirt Crystal', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201938510/Clothing/Vineyard-Vines-Gingham-On-The-Go-brrr-Classic-Fit-Shirt-Crystal/?utm_source=openai&ref-site=openai_plugin', 'price': '$80.64', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Blue', 'Size (Small-Large):XL,XS,L,M']}, {'name': "Carhartt Men's Loose Fit Midweight Short Sleeve Plaid Shirt", 'url': |
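The chain call shown in this and the surrounding chunks returns a nested dict, so the matched Klarna products can be listed with ordinary dict/list access; a minimal sketch:
response = chain("What are some options for a men's large blue button down shirt")
# Each product entry carries 'name', 'url', 'price' and 'attributes' keys.
for product in response["response"]["products"]:
    print(f'{product["name"]} - {product["price"]}')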
1,356 | Short Sleeve Plaid Shirt", 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201826024/Clothing/Carhartt-Men-s-Loose-Fit-Midweight-Short-Sleeve-Plaid-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$17.99', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Red,Brown,Blue,Green', 'Properties:Pockets', 'Pattern:Checkered', 'Size (Small-Large):S,XL,L,M']}]}}Functions‚ÄãWe can unpack what is happening when we use the functions to call external APIs.Let's look at the LangSmith trace:See here that we call the OpenAI LLM with the provided API spec:https://www.klarna.com/us/shopping/public/openai/v0/api-docs/The prompt then tells the LLM to use the API spec with input question:Use the provided APIs to respond to this user query:What are some options for a men's large blue button down shirtThe LLM returns the parameters for the function call productsUsingGET, which is specified in the provided API spec:function_call: name: productsUsingGET arguments: |- { "params": { "countryCode": "US", "q": "men's large blue button down shirt", "size": 5, "min_price": 0, "max_price": 100 } }This Dict above split and the API is called here.API Chain‚ÄãWe can also build our own interface to external APIs using the APIChain and provided API documentation.from langchain.llms import OpenAIfrom langchain.chains import APIChainfrom langchain.chains.api import open_meteo_docsllm = OpenAI(temperature=0)chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)chain.run('What is the weather like right now in Munich, Germany in degrees Fahrenheit?') > Entering new APIChain chain... https://api.open-meteo.com/v1/forecast?latitude=48.1351&longitude=11.5820&hourly=temperature_2m&temperature_unit=fahrenheit¤t_weather=true | Open In Colab | Open In Colab ->: Short Sleeve Plaid Shirt", 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201826024/Clothing/Carhartt-Men-s-Loose-Fit-Midweight-Short-Sleeve-Plaid-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$17.99', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Red,Brown,Blue,Green', 'Properties:Pockets', 'Pattern:Checkered', 'Size (Small-Large):S,XL,L,M']}]}}Functions‚ÄãWe can unpack what is happening when we use the functions to call external APIs.Let's look at the LangSmith trace:See here that we call the OpenAI LLM with the provided API spec:https://www.klarna.com/us/shopping/public/openai/v0/api-docs/The prompt then tells the LLM to use the API spec with input question:Use the provided APIs to respond to this user query:What are some options for a men's large blue button down shirtThe LLM returns the parameters for the function call productsUsingGET, which is specified in the provided API spec:function_call: name: productsUsingGET arguments: |- { "params": { "countryCode": "US", "q": "men's large blue button down shirt", "size": 5, "min_price": 0, "max_price": 100 } }This Dict above split and the API is called here.API Chain‚ÄãWe can also build our own interface to external APIs using the APIChain and provided API documentation.from langchain.llms import OpenAIfrom langchain.chains import APIChainfrom langchain.chains.api import open_meteo_docsllm = OpenAI(temperature=0)chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)chain.run('What is the weather like right now in Munich, Germany in degrees Fahrenheit?') > Entering new APIChain chain... 
https://api.open-meteo.com/v1/forecast?latitude=48.1351&longitude=11.5820&hourly=temperature_2m&temperature_unit=fahrenheit&current_weather=true
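The function-calling route traced above can be reproduced in a few lines. A minimal sketch, assuming the `get_openapi_chain` helper from `langchain.chains.openai_functions.openapi` (present in recent LangChain releases) and an `OPENAI_API_KEY` in the environment; the spec URL is the Klarna one shown in the trace:

```python
from langchain.chains.openai_functions.openapi import get_openapi_chain

# Reads the OpenAPI spec, exposes its operations (e.g. productsUsingGET) as OpenAI
# functions, lets the model fill in the arguments, then executes the HTTP call.
chain = get_openapi_chain(
    "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
)
chain("What are some options for a men's large blue button down shirt")
```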
1,357 | {"latitude":48.14,"longitude":11.58,"generationtime_ms":1.0769367218017578,"utc_offset_seconds":0,"timezone":"GMT","timezone_abbreviation":"GMT","elevation":521.0,"current_weather":{"temperature":52.9,"windspeed":12.6,"winddirection":239.0,"weathercode":3,"is_day":0,"time":"2023-08-07T22:00"},"hourly_units":{"time":"iso8601","temperature_2m":"°F"},"hourly":{"time":["2023-08-07T00:00","2023-08-07T01:00","2023-08-07T02:00","2023-08-07T03:00","2023-08-07T04:00","2023-08-07T05:00","2023-08-07T06:00","2023-08-07T07:00","2023-08-07T08:00","2023-08-07T09:00","2023-08-07T10:00","2023-08-07T11:00","2023-08-07T12:00","2023-08-07T13:00","2023-08-07T14:00","2023-08-07T15:00","2023-08-07T16:00","2023-08-07T17:00","2023-08-07T18:00","2023-08-07T19:00","2023-08-07T20:00","2023-08-07T21:00","2023-08-07T22:00","2023-08-07T23:00","2023-08-08T00:00","2023-08-08T01:00","2023-08-08T02:00","2023-08-08T03:00","2023-08-08T04:00","2023-08-08T05:00","2023-08-08T06:00","2023-08-08T07:00","2023-08-08T08:00","2023-08-08T09:00","2023-08-08T10:00","2023-08-08T11:00","2023-08-08T12:00","2023-08-08T13:00","2023-08-08T14:00","2023-08-08T15:00","2023-08-08T16:00","2023-08-08T17:00","2023-08-08T18:00","2023-08-08T19:00","2023-08-08T20:00","2023-08-08T21:00","2023-08-08T22:00","2023-08-08T23:00","2023-08-09T00:00","2023-08-09T01:00","2023-08-09T02:00","2023-08-09T03:00","2023-08-09T04:00","2023-08-09T05:00","2023-08-09T06:00","2023-08-09T07:00","2023-08-09T08:00","2023-08-09T09:00","2023-08-09T10:00","2023-08-09T11:00","2023-08-09T12:00","2023-08-09T13:00","2023-08-09T14:00","2023-08-09T15:00","2023-08-09T16:00","2023-08-09T17:00","2023-08-09T18:00","2023-08-09T19:00","2023-08-09T20:00","2023-08-09T21:00","2023-08-09T22:00","2023-08-09T23:00","2023-08-10T00:00","2023-08-10T01:00","2023-08-10T02:00","2023-08-10T03:00","2023-08-10T04:00","2023-08-10T05:00","2023-08-10T06:00","2023-08-10T07:00","2023-08-10T08:00","2023-08-10T09:00","2023-08-10T10:00","2023-08-10T11:00","2023-08-10T12:00","2023-08-10T13: | Open In Colab | Open In Colab ->: 
{"latitude":48.14,"longitude":11.58,"generationtime_ms":1.0769367218017578,"utc_offset_seconds":0,"timezone":"GMT","timezone_abbreviation":"GMT","elevation":521.0,"current_weather":{"temperature":52.9,"windspeed":12.6,"winddirection":239.0,"weathercode":3,"is_day":0,"time":"2023-08-07T22:00"},"hourly_units":{"time":"iso8601","temperature_2m":"°F"},"hourly":{"time":["2023-08-07T00:00","2023-08-07T01:00","2023-08-07T02:00","2023-08-07T03:00","2023-08-07T04:00","2023-08-07T05:00","2023-08-07T06:00","2023-08-07T07:00","2023-08-07T08:00","2023-08-07T09:00","2023-08-07T10:00","2023-08-07T11:00","2023-08-07T12:00","2023-08-07T13:00","2023-08-07T14:00","2023-08-07T15:00","2023-08-07T16:00","2023-08-07T17:00","2023-08-07T18:00","2023-08-07T19:00","2023-08-07T20:00","2023-08-07T21:00","2023-08-07T22:00","2023-08-07T23:00","2023-08-08T00:00","2023-08-08T01:00","2023-08-08T02:00","2023-08-08T03:00","2023-08-08T04:00","2023-08-08T05:00","2023-08-08T06:00","2023-08-08T07:00","2023-08-08T08:00","2023-08-08T09:00","2023-08-08T10:00","2023-08-08T11:00","2023-08-08T12:00","2023-08-08T13:00","2023-08-08T14:00","2023-08-08T15:00","2023-08-08T16:00","2023-08-08T17:00","2023-08-08T18:00","2023-08-08T19:00","2023-08-08T20:00","2023-08-08T21:00","2023-08-08T22:00","2023-08-08T23:00","2023-08-09T00:00","2023-08-09T01:00","2023-08-09T02:00","2023-08-09T03:00","2023-08-09T04:00","2023-08-09T05:00","2023-08-09T06:00","2023-08-09T07:00","2023-08-09T08:00","2023-08-09T09:00","2023-08-09T10:00","2023-08-09T11:00","2023-08-09T12:00","2023-08-09T13:00","2023-08-09T14:00","2023-08-09T15:00","2023-08-09T16:00","2023-08-09T17:00","2023-08-09T18:00","2023-08-09T19:00","2023-08-09T20:00","2023-08-09T21:00","2023-08-09T22:00","2023-08-09T23:00","2023-08-10T00:00","2023-08-10T01:00","2023-08-10T02:00","2023-08-10T03:00","2023-08-10T04:00","2023-08-10T05:00","2023-08-10T06:00","2023-08-10T07:00","2023-08-10T08:00","2023-08-10T09:00","2023-08-10T10:00","2023-08-10T11:00","2023-08-10T12:00","2023-08-10T13: |
1,358 | 23-08-10T11:00","2023-08-10T12:00","2023-08-10T13:00","2023-08-10T14:00","2023-08-10T15:00","2023-08-10T16:00","2023-08-10T17:00","2023-08-10T18:00","2023-08-10T19:00","2023-08-10T20:00","2023-08-10T21:00","2023-08-10T22:00","2023-08-10T23:00","2023-08-11T00:00","2023-08-11T01:00","2023-08-11T02:00","2023-08-11T03:00","2023-08-11T04:00","2023-08-11T05:00","2023-08-11T06:00","2023-08-11T07:00","2023-08-11T08:00","2023-08-11T09:00","2023-08-11T10:00","2023-08-11T11:00","2023-08-11T12:00","2023-08-11T13:00","2023-08-11T14:00","2023-08-11T15:00","2023-08-11T16:00","2023-08-11T17:00","2023-08-11T18:00","2023-08-11T19:00","2023-08-11T20:00","2023-08-11T21:00","2023-08-11T22:00","2023-08-11T23:00","2023-08-12T00:00","2023-08-12T01:00","2023-08-12T02:00","2023-08-12T03:00","2023-08-12T04:00","2023-08-12T05:00","2023-08-12T06:00","2023-08-12T07:00","2023-08-12T08:00","2023-08-12T09:00","2023-08-12T10:00","2023-08-12T11:00","2023-08-12T12:00","2023-08-12T13:00","2023-08-12T14:00","2023-08-12T15:00","2023-08-12T16:00","2023-08-12T17:00","2023-08-12T18:00","2023-08-12T19:00","2023-08-12T20:00","2023-08-12T21:00","2023-08-12T22:00","2023-08-12T23:00","2023-08-13T00:00","2023-08-13T01:00","2023-08-13T02:00","2023-08-13T03:00","2023-08-13T04:00","2023-08-13T05:00","2023-08-13T06:00","2023-08-13T07:00","2023-08-13T08:00","2023-08-13T09:00","2023-08-13T10:00","2023-08-13T11:00","2023-08-13T12:00","2023-08-13T13:00","2023-08-13T14:00","2023-08-13T15:00","2023-08-13T16:00","2023-08-13T17:00","2023-08-13T18:00","2023-08-13T19:00","2023-08-13T20:00","2023-08-13T21:00","2023-08-13T22:00","2023-08-13T23:00"],"temperature_2m":[53.0,51.2,50.9,50.4,50.7,51.3,51.7,52.9,54.3,56.1,57.4,59.3,59.1,60.7,59.7,58.8,58.8,57.8,56.6,55.3,53.9,52.7,52.9,53.2,52.0,51.8,51.3,50.7,50.8,51.5,53.9,57.7,61.2,63.2,64.7,66.6,67.5,67.0,68.7,68.7,67.9,66.2,64.4,61.4,59.8,58.9,57.9,56.3,55.7,55.3,55.5,55.4,55.7,56.5,57.6,58.8,59.7,59.1,58.9,60.6,59.9,59.8,59.9,61.7,63.2,63.6,62.3,58.9,57.3,57.1,57.0,56.5,56.2,56.0 | Open In Colab | Open In Colab ->: 
23-08-10T11:00","2023-08-10T12:00","2023-08-10T13:00","2023-08-10T14:00","2023-08-10T15:00","2023-08-10T16:00","2023-08-10T17:00","2023-08-10T18:00","2023-08-10T19:00","2023-08-10T20:00","2023-08-10T21:00","2023-08-10T22:00","2023-08-10T23:00","2023-08-11T00:00","2023-08-11T01:00","2023-08-11T02:00","2023-08-11T03:00","2023-08-11T04:00","2023-08-11T05:00","2023-08-11T06:00","2023-08-11T07:00","2023-08-11T08:00","2023-08-11T09:00","2023-08-11T10:00","2023-08-11T11:00","2023-08-11T12:00","2023-08-11T13:00","2023-08-11T14:00","2023-08-11T15:00","2023-08-11T16:00","2023-08-11T17:00","2023-08-11T18:00","2023-08-11T19:00","2023-08-11T20:00","2023-08-11T21:00","2023-08-11T22:00","2023-08-11T23:00","2023-08-12T00:00","2023-08-12T01:00","2023-08-12T02:00","2023-08-12T03:00","2023-08-12T04:00","2023-08-12T05:00","2023-08-12T06:00","2023-08-12T07:00","2023-08-12T08:00","2023-08-12T09:00","2023-08-12T10:00","2023-08-12T11:00","2023-08-12T12:00","2023-08-12T13:00","2023-08-12T14:00","2023-08-12T15:00","2023-08-12T16:00","2023-08-12T17:00","2023-08-12T18:00","2023-08-12T19:00","2023-08-12T20:00","2023-08-12T21:00","2023-08-12T22:00","2023-08-12T23:00","2023-08-13T00:00","2023-08-13T01:00","2023-08-13T02:00","2023-08-13T03:00","2023-08-13T04:00","2023-08-13T05:00","2023-08-13T06:00","2023-08-13T07:00","2023-08-13T08:00","2023-08-13T09:00","2023-08-13T10:00","2023-08-13T11:00","2023-08-13T12:00","2023-08-13T13:00","2023-08-13T14:00","2023-08-13T15:00","2023-08-13T16:00","2023-08-13T17:00","2023-08-13T18:00","2023-08-13T19:00","2023-08-13T20:00","2023-08-13T21:00","2023-08-13T22:00","2023-08-13T23:00"],"temperature_2m":[53.0,51.2,50.9,50.4,50.7,51.3,51.7,52.9,54.3,56.1,57.4,59.3,59.1,60.7,59.7,58.8,58.8,57.8,56.6,55.3,53.9,52.7,52.9,53.2,52.0,51.8,51.3,50.7,50.8,51.5,53.9,57.7,61.2,63.2,64.7,66.6,67.5,67.0,68.7,68.7,67.9,66.2,64.4,61.4,59.8,58.9,57.9,56.3,55.7,55.3,55.5,55.4,55.7,56.5,57.6,58.8,59.7,59.1,58.9,60.6,59.9,59.8,59.9,61.7,63.2,63.6,62.3,58.9,57.3,57.1,57.0,56.5,56.2,56.0 |
1,359 | ,63.2,63.6,62.3,58.9,57.3,57.1,57.0,56.5,56.2,56.0,55.3,54.7,54.4,55.2,57.8,60.7,63.0,65.3,66.9,68.2,70.1,72.1,72.6,71.4,69.7,68.6,66.2,63.6,61.8,60.6,59.6,58.9,58.0,57.1,56.3,56.2,56.7,57.9,59.9,63.7,68.4,72.4,75.0,76.8,78.0,78.7,78.9,78.4,76.9,74.8,72.5,70.1,67.6,65.6,64.4,63.9,63.4,62.7,62.2,62.1,62.5,63.4,65.1,68.0,71.7,74.8,76.8,78.2,79.1,79.6,79.7,79.2,77.6,75.3,73.7,68.6,66.8,65.3,64.2,63.4,62.6,61.7,60.9,60.6,60.9,61.6,63.2,65.9,69.3,72.2,74.4,76.2,77.6,78.8,79.6,79.6,78.4,76.4,74.3,72.3,70.4,68.7,67.6,66.8]}} | Open In Colab | Open In Colab ->: ,63.2,63.6,62.3,58.9,57.3,57.1,57.0,56.5,56.2,56.0,55.3,54.7,54.4,55.2,57.8,60.7,63.0,65.3,66.9,68.2,70.1,72.1,72.6,71.4,69.7,68.6,66.2,63.6,61.8,60.6,59.6,58.9,58.0,57.1,56.3,56.2,56.7,57.9,59.9,63.7,68.4,72.4,75.0,76.8,78.0,78.7,78.9,78.4,76.9,74.8,72.5,70.1,67.6,65.6,64.4,63.9,63.4,62.7,62.2,62.1,62.5,63.4,65.1,68.0,71.7,74.8,76.8,78.2,79.1,79.6,79.7,79.2,77.6,75.3,73.7,68.6,66.8,65.3,64.2,63.4,62.6,61.7,60.9,60.6,60.9,61.6,63.2,65.9,69.3,72.2,74.4,76.2,77.6,78.8,79.6,79.6,78.4,76.4,74.3,72.3,70.4,68.7,67.6,66.8]}} |
1,360 | > Finished chain. ' The current temperature in Munich, Germany is 52.9°F.'Note that we supply information about the API:open_meteo_docs.OPEN_METEO_DOCS[0:500] 'BASE URL: https://api.open-meteo.com/\n\nAPI Documentation\nThe API endpoint /v1/forecast accepts a geographical coordinate, a list of weather variables and responds with a JSON hourly weather forecast for 7 days. Time always starts at 0:00 today and contains 168 hours. All URL parameters are listed below:\n\nParameter\tFormat\tRequired\tDefault\tDescription\nlatitude, longitude\tFloating point\tYes\t\tGeographical WGS84 coordinate of the location\nhourly\tString array\tNo\t\tA list of weather variables which shou'Under the hood, we do two things:api_request_chain: Generate an API URL based on the input question and the api_docsapi_answer_chain: generate a final answer based on the API responseWe can look at the LangSmith trace to inspect this:The api_request_chain produces the API url from our question and the API documentation:Here we make the API request with the API url.The api_answer_chain takes the response from the API and provides us with a natural language response:Going deeper​Test with other APIsimport osos.environ['TMDB_BEARER_TOKEN'] = ""from langchain.chains.api import tmdb_docsheaders = {"Authorization": f"Bearer {os.environ['TMDB_BEARER_TOKEN']}"}chain = APIChain.from_llm_and_api_docs(llm, tmdb_docs.TMDB_DOCS, headers=headers, verbose=True)chain.run("Search for 'Avatar'")import osfrom langchain.llms import OpenAIfrom langchain.chains.api import podcast_docsfrom langchain.chains import APIChain listen_api_key = 'xxx' # Get api key here: https://www.listennotes.com/api/pricing/llm = OpenAI(temperature=0)headers = {"X-ListenAPI-Key": listen_api_key}chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)chain.run("Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results")Web | Open In Colab | Open In Colab ->: > Finished chain. ' The current temperature in Munich, Germany is 52.9°F.'Note that we supply information about the API:open_meteo_docs.OPEN_METEO_DOCS[0:500] 'BASE URL: https://api.open-meteo.com/\n\nAPI Documentation\nThe API endpoint /v1/forecast accepts a geographical coordinate, a list of weather variables and responds with a JSON hourly weather forecast for 7 days. Time always starts at 0:00 today and contains 168 hours. 
All URL parameters are listed below:\n\nParameter\tFormat\tRequired\tDefault\tDescription\nlatitude, longitude\tFloating point\tYes\t\tGeographical WGS84 coordinate of the location\nhourly\tString array\tNo\t\tA list of weather variables which shou'Under the hood, we do two things:api_request_chain: Generate an API URL based on the input question and the api_docsapi_answer_chain: generate a final answer based on the API responseWe can look at the LangSmith trace to inspect this:The api_request_chain produces the API url from our question and the API documentation:Here we make the API request with the API url.The api_answer_chain takes the response from the API and provides us with a natural language response:Going deeper​Test with other APIsimport osos.environ['TMDB_BEARER_TOKEN'] = ""from langchain.chains.api import tmdb_docsheaders = {"Authorization": f"Bearer {os.environ['TMDB_BEARER_TOKEN']}"}chain = APIChain.from_llm_and_api_docs(llm, tmdb_docs.TMDB_DOCS, headers=headers, verbose=True)chain.run("Search for 'Avatar'")import osfrom langchain.llms import OpenAIfrom langchain.chains.api import podcast_docsfrom langchain.chains import APIChain listen_api_key = 'xxx' # Get api key here: https://www.listennotes.com/api/pricing/llm = OpenAI(temperature=0)headers = {"X-ListenAPI-Key": listen_api_key}chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)chain.run("Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results")Web |
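Those two steps are exposed as attributes on any APIChain built with from_llm_and_api_docs, so they can be inspected directly. A small sketch (attribute names as defined on APIChain; the slicing is only to keep the printout short):

```python
# api_request_chain maps (question, api_docs) -> API URL;
# api_answer_chain maps (question, api_docs, api_url, api_response) -> final answer.
print(chain.api_request_chain.prompt.template[:300])
print(chain.api_answer_chain.prompt.template[:300])
```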
1,361 | more than 30 minutes, return only 1 results")Web requestsURL requests are such a common use-case that we have the LLMRequestsChain, which makes an HTTP GET request. from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMRequestsChain, LLMChaintemplate = """Between >>> and <<< are the raw search result text from google.Extract the answer to the question '{query}' or say "not found" if the information is not contained.Use the formatExtracted:<answer or "not found">>>> {requests_result} <<<Extracted:"""PROMPT = PromptTemplate( input_variables=["query", "requests_result"], template=template,)chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))question = "What are the Three (3) biggest countries, and their respective sizes?"inputs = { "query": question, "url": "https://www.google.com/search?q=" + question.replace(" ", "+"),}chain(inputs) {'query': 'What are the Three (3) biggest countries, and their respective sizes?', 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?', 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), China (9,706,961 km²)'}PreviousRetrieve from vector stores directlyNextChatbotsUse caseOverviewQuickstartFunctionsAPI ChainGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Open In Colab | Open In Colab ->: more than 30 minutes, return only 1 results")Web requestsURL requests are such a common use-case that we have the LLMRequestsChain, which makes an HTTP GET request. from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMRequestsChain, LLMChaintemplate = """Between >>> and <<< are the raw search result text from google.Extract the answer to the question '{query}' or say "not found" if the information is not contained.Use the formatExtracted:<answer or "not found">>>> {requests_result} <<<Extracted:"""PROMPT = PromptTemplate( input_variables=["query", "requests_result"], template=template,)chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))question = "What are the Three (3) biggest countries, and their respective sizes?"inputs = { "query": question, "url": "https://www.google.com/search?q=" + question.replace(" ", "+"),}chain(inputs) {'query': 'What are the Three (3) biggest countries, and their respective sizes?', 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?', 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), China (9,706,961 km²)'}PreviousRetrieve from vector stores directlyNextChatbotsUse caseOverviewQuickstartFunctionsAPI ChainGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
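Roughly what LLMRequestsChain does for us here, written out by hand. This is a simplified sketch rather than the chain's exact implementation (the real chain also strips the HTML down to plain text); it assumes `TextRequestsWrapper` from `langchain.utilities` and reuses `PROMPT`, `question`, and `inputs` from the example above:

```python
from langchain.utilities import TextRequestsWrapper

# 1. GET the search-results page, 2. truncate it (the length cap is an arbitrary assumption),
# 3. fill {requests_result} in the prompt and let the LLM extract the answer.
page_text = TextRequestsWrapper().get(inputs["url"])
extract_chain = LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT)
print(extract_chain.predict(query=question, requests_result=page_text[:8000]))
```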
1,362 | Retrieve from vector stores directly | 🦜️🔗 Langchain | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. ->: Retrieve from vector stores directly | 🦜️🔗 Langchain
1,363 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Retrieve from vector stores directlyOn this pageRetrieve from vector stores directlyThis notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.Prepare Data‚ÄãFirst, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.from langchain.llms import OpenAIfrom langchain.docstore.document import Documentimport requestsfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.prompts import PromptTemplateimport pathlibimport subprocessimport tempfiledef get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Retrieve from vector stores directlyOn this pageRetrieve from vector stores directlyThis notebook walks through how to use LangChain for text generation over a vector index. 
This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.Prepare Data‚ÄãFirst, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.from langchain.llms import OpenAIfrom langchain.docstore.document import Documentimport requestsfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.prompts import PromptTemplateimport pathlibimport subprocessimport tempfiledef get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() |
1,364 | .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path) github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url})sources = get_github_docs("yirenlu92", "deno-manual-forked")source_chunks = []splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(Document(page_content=chunk, metadata=source.metadata)) Cloning into '.'...Set Up Vector DB‚ÄãNow that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())Set Up LLM Chain with Custom Prompt‚ÄãNext, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.from langchain.chains import LLMChainprompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:"""PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "topic"])llm = OpenAI(temperature=0)chain = LLMChain(llm=llm, prompt=PROMPT)Generate Text‚ÄãFinally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. 
->: .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path) github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url})sources = get_github_docs("yirenlu92", "deno-manual-forked")source_chunks = []splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(Document(page_content=chunk, metadata=source.metadata)) Cloning into '.'...Set Up Vector DB‚ÄãNow that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())Set Up LLM Chain with Custom Prompt‚ÄãNext, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.from langchain.chains import LLMChainprompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:"""PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "topic"])llm = OpenAI(temperature=0)chain = LLMChain(llm=llm, prompt=PROMPT)Generate Text‚ÄãFinally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple |
1,365 | and use them as additional context in our simple LLM chain.def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs))generate_blog_post("environment variables") [{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. ->: and use them as additional context in our simple LLM chain.def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs))generate_blog_post("environment variables") [{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. 
For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this |
1,366 | running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. ->: running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. 
You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the |
1,367 | `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\n\n```'}]PreviousCiting retrieval sourcesNextInteracting with APIsPrepare DataSet Up Vector DBSet Up LLM Chain with Custom PromptGenerate TextCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. | This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. ->: `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\n\n```'}]PreviousCiting retrieval sourcesNextInteracting with APIsPrepare DataSet Up Vector DBSet Up LLM Chain with Custom PromptGenerate TextCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
1,368 | Retrieving from multiple sources | 🦜️🔗 Langchain | Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether! | Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether! ->: Retrieving from multiple sources | 🦜️🔗 Langchain
1,369 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Retrieving from multiple sourcesOn this pageRetrieving from multiple sourcesOften times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether!A key part is is doing as much of the retrieval in parallel as possible. This will keep the latency as low as possible. Luckily, LangChain Expression Language supports parallelism out of the box.Let's take a look where we do retrieval over a SQL database and a vectorstore.from langchain.chat_models import ChatOpenAISet up SQL query‚Äãfrom langchain.utilities import SQLDatabasefrom langchain.chains import create_sql_query_chaindb = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")query_chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)Set up vectorstore‚Äãfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.schema.document import Documentindex_creator = VectorstoreIndexCreator()index = index_creator.from_documents([Document(page_content="Foo")])retriever = index.vectorstore.as_retriever()Combine‚Äãfrom langchain.prompts import ChatPromptTemplatesystem_message = """Use the information from the below two sources to answer any questions.Source 1: a SQL database about | Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether! | Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether! ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Retrieving from multiple sourcesOn this pageRetrieving from multiple sourcesOften times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). 
They could also be completely different databases altogether!A key part is is doing as much of the retrieval in parallel as possible. This will keep the latency as low as possible. Luckily, LangChain Expression Language supports parallelism out of the box.Let's take a look where we do retrieval over a SQL database and a vectorstore.from langchain.chat_models import ChatOpenAISet up SQL query‚Äãfrom langchain.utilities import SQLDatabasefrom langchain.chains import create_sql_query_chaindb = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")query_chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)Set up vectorstore‚Äãfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.schema.document import Documentindex_creator = VectorstoreIndexCreator()index = index_creator.from_documents([Document(page_content="Foo")])retriever = index.vectorstore.as_retriever()Combine‚Äãfrom langchain.prompts import ChatPromptTemplatesystem_message = """Use the information from the below two sources to answer any questions.Source 1: a SQL database about |
1,370 | any questions.Source 1: a SQL database about employee data<source1>{source1}</source1>Source 2: a text database of random information<source2>{source2}</source2>"""prompt = ChatPromptTemplate.from_messages([("system", system_message), ("human", "{question}")])full_chain = { "source1": {"question": lambda x: x["question"]} | query_chain | db.run, "source2": (lambda x: x['question']) | retriever, "question": lambda x: x['question'],} | prompt | ChatOpenAI()response = full_chain.invoke({"question":"How many Employees are there"})print(response) Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1 content='There are 8 employees.' additional_kwargs={} example=FalsePreviousDynamically select from multiple retrieversNextCiting retrieval sourcesSet up SQL querySet up vectorstoreCombineCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether! | Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether! ->: any questions.Source 1: a SQL database about employee data<source1>{source1}</source1>Source 2: a text database of random information<source2>{source2}</source2>"""prompt = ChatPromptTemplate.from_messages([("system", system_message), ("human", "{question}")])full_chain = { "source1": {"question": lambda x: x["question"]} | query_chain | db.run, "source2": (lambda x: x['question']) | retriever, "question": lambda x: x['question'],} | prompt | ChatOpenAI()response = full_chain.invoke({"question":"How many Employees are there"})print(response) Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1 content='There are 8 employees.' additional_kwargs={} example=FalsePreviousDynamically select from multiple retrieversNextCiting retrieval sourcesSet up SQL querySet up vectorstoreCombineCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
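The dict literal at the start of full_chain is what provides the parallelism: LangChain Expression Language treats a mapping of runnables as one parallel step, so the SQL branch and the vectorstore branch run concurrently. The same chain written with the explicit class, as a sketch (it assumes `RunnableMap` from `langchain.schema.runnable`, which recent releases also expose as `RunnableParallel`):

```python
from langchain.schema.runnable import RunnableMap

# Equivalent to the dict form above: both retrieval branches execute in parallel,
# and their outputs are merged into one dict that feeds the prompt.
retrieval = RunnableMap({
    "source1": {"question": lambda x: x["question"]} | query_chain | db.run,
    "source2": (lambda x: x["question"]) | retriever,
    "question": lambda x: x["question"],
})
full_chain = retrieval | prompt | ChatOpenAI()
full_chain.invoke({"question": "How many Employees are there"})
```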
1,371 | Dynamically select from multiple retrievers | 🦜️🔗 Langchain | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. ->: Dynamically select from multiple retrievers | 🦜️🔗 Langchain
1,372 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Dynamically select from multiple retrieversDynamically select from multiple retrieversThis notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.from langchain.chains.router import MultiRetrievalQAChainfrom langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSsou_docs = TextLoader('../../state_of_the_union.txt').load_and_split()sou_retriever = FAISS.from_documents(sou_docs, OpenAIEmbeddings()).as_retriever()pg_docs = TextLoader('../../paul_graham_essay.txt').load_and_split()pg_retriever = FAISS.from_documents(pg_docs, OpenAIEmbeddings()).as_retriever()personal_texts = [ "I love apple pie", "My favorite color is fuchsia", "My dream is to become a professional dancer", "I broke my arm when I was 12", "My parents are from Peru",]personal_retriever = FAISS.from_texts(personal_texts, OpenAIEmbeddings()).as_retriever()retriever_infos = [ { "name": "state of the union", "description": "Good for answering | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Dynamically select from multiple retrieversDynamically select from multiple retrieversThis notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. 
Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.from langchain.chains.router import MultiRetrievalQAChainfrom langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSsou_docs = TextLoader('../../state_of_the_union.txt').load_and_split()sou_retriever = FAISS.from_documents(sou_docs, OpenAIEmbeddings()).as_retriever()pg_docs = TextLoader('../../paul_graham_essay.txt').load_and_split()pg_retriever = FAISS.from_documents(pg_docs, OpenAIEmbeddings()).as_retriever()personal_texts = [ "I love apple pie", "My favorite color is fuchsia", "My dream is to become a professional dancer", "I broke my arm when I was 12", "My parents are from Peru",]personal_retriever = FAISS.from_texts(personal_texts, OpenAIEmbeddings()).as_retriever()retriever_infos = [ { "name": "state of the union", "description": "Good for answering |
1,373 | union", "description": "Good for answering questions about the 2023 State of the Union address", "retriever": sou_retriever }, { "name": "pg essay", "description": "Good for answering questions about Paul Graham's essay on his career", "retriever": pg_retriever }, { "name": "personal", "description": "Good for answering questions about me", "retriever": personal_retriever }]chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)print(chain.run("What did the president say about the economy?")) > Entering new MultiRetrievalQAChain chain... state of the union: {'query': 'What did the president say about the economy in the 2023 State of the Union address?'} > Finished chain. The president said that the economy was stronger than it had been a year prior, and that the American Rescue Plan helped create record job growth and fuel economic relief for millions of Americans. He also proposed a plan to fight inflation and lower costs for families, including cutting the cost of prescription drugs and energy, providing investments and tax credits for energy efficiency, and increasing access to child care and Pre-K.print(chain.run("What is something Paul Graham regrets about his work?")) > Entering new MultiRetrievalQAChain chain... pg essay: {'query': 'What is something Paul Graham regrets about his work?'} > Finished chain. Paul Graham regrets that he did not take a vacation after selling his company, instead of immediately starting to paint.print(chain.run("What is my background?")) > Entering new MultiRetrievalQAChain chain... personal: {'query': 'What is my background?'} > Finished chain. Your background is Peruvian.print(chain.run("What year was the Internet created in?")) > Entering new MultiRetrievalQAChain chain... None: {'query': 'What year was the Internet created in?'} > Finished chain. The Internet was created in | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. ->: union", "description": "Good for answering questions about the 2023 State of the Union address", "retriever": sou_retriever }, { "name": "pg essay", "description": "Good for answering questions about Paul Graham's essay on his career", "retriever": pg_retriever }, { "name": "personal", "description": "Good for answering questions about me", "retriever": personal_retriever }]chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)print(chain.run("What did the president say about the economy?")) > Entering new MultiRetrievalQAChain chain... state of the union: {'query': 'What did the president say about the economy in the 2023 State of the Union address?'} > Finished chain. The president said that the economy was stronger than it had been a year prior, and that the American Rescue Plan helped create record job growth and fuel economic relief for millions of Americans. 
He also proposed a plan to fight inflation and lower costs for families, including cutting the cost of prescription drugs and energy, providing investments and tax credits for energy efficiency, and increasing access to child care and Pre-K.print(chain.run("What is something Paul Graham regrets about his work?")) > Entering new MultiRetrievalQAChain chain... pg essay: {'query': 'What is something Paul Graham regrets about his work?'} > Finished chain. Paul Graham regrets that he did not take a vacation after selling his company, instead of immediately starting to paint.print(chain.run("What is my background?")) > Entering new MultiRetrievalQAChain chain... personal: {'query': 'What is my background?'} > Finished chain. Your background is Peruvian.print(chain.run("What year was the Internet created in?")) > Entering new MultiRetrievalQAChain chain... None: {'query': 'What year was the Internet created in?'} > Finished chain. The Internet was created in |
1,374 | > Finished chain. The Internet was created in 1969 through a project called ARPANET, which was funded by the United States Department of Defense. However, the World Wide Web, which is often confused with the Internet, was created in 1989 by British computer scientist Tim Berners-Lee.PreviousRAG using local modelsNextRetrieving from multiple sourcesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it. ->: > Finished chain. The Internet was created in 1969 through a project called ARPANET, which was funded by the United States Department of Defense. However, the World Wide Web, which is often confused with the Internet, was created in 1989 by British computer scientist Tim Berners-Lee.PreviousRAG using local modelsNextRetrieving from multiple sourcesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
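The last question was handled by the "None" route in the trace: no retriever description matched, so the chain fell back to a default. A hedged sketch of making that fallback explicit; `default_retriever` is a keyword of `MultiRetrievalQAChain.from_retrievers` in current releases, but treat the exact signature as an assumption:

```python
# Route unmatched questions to the personal retriever instead of the built-in
# default conversation chain.
chain_with_default = MultiRetrievalQAChain.from_retrievers(
    OpenAI(),
    retriever_infos,
    default_retriever=personal_retriever,  # assumption: keyword name as in from_retrievers
    verbose=True,
)
print(chain_with_default.run("What year was the Internet created in?"))
```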
1,375 | RAG over code | 🦜️🔗 Langchain | Open In Collab | Open In Collab ->: RAG over code | 🦜️🔗 Langchain
1,376 | RAG over codeUse case: Source code analysis is one of the most popular LLM applications (e.g., GitHub Copilot, Code Interpreter, Codium, and Codeium) for use cases such as: Q&A over the code base to understand how it works; using LLMs for suggesting refactors or improvements; and using LLMs for documenting the code. Overview: The pipeline for QA over code follows the steps we use for document question answering, with some differences. In particular, we can employ a splitting strategy that does a few things: keeps each top-level function and class in the code in its own document; puts the remaining code into a separate document; and retains metadata about where each split comes from. Quickstart: pip install openai tiktoken chromadb langchain# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()We'll follow the structure of this notebook and employ context-aware code splitting. Loading: We will load all Python project files using the langchain.document_loaders.TextLoader. The following script iterates over the files in the LangChain repository and loads every .py file (a.k.a. documents):# from git import Repofrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParser# |
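Before pointing the loader at a whole repository, it can help to see what the splitting strategy described above actually produces. The following is a small self-contained sketch that is not part of the original notebook; treating the `content_type` metadata key as the place where LanguageParser records the split type is an assumption worth verifying against your installed version.

```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers import LanguageParser
from langchain.text_splitter import Language

code = '''
def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return "hi"

print(add(1, 2))
'''

# parser_threshold=0 so even this tiny file is segmented instead of being kept whole.
parser = LanguageParser(language=Language.PYTHON, parser_threshold=0)
docs = parser.parse(Blob.from_data(code, path="example.py"))

for d in docs:
    # Expect one document per top-level function/class plus one "simplified" remainder,
    # each carrying the source path in its metadata.
    print(d.metadata, "->", d.page_content.splitlines()[0])
```

In the notebook, the same parser is applied to every .py file in the repository via GenericLoader, as shown in the next cell.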
1,377 | import LanguageParser# Clonerepo_path = "/Users/rlm/Desktop/test_repo"# repo = Repo.clone_from("https://github.com/langchain-ai/langchain", to_path=repo_path)We load the py code using LanguageParser, which will: keep top-level functions and classes together (in a single document); put remaining code into a separate document; and retain metadata about where each split comes from.# Loadloader = GenericLoader.from_filesystem( repo_path+"/libs/langchain/langchain", glob="**/*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=500))documents = loader.load()len(documents) 1293Splitting: Split the Document into chunks for embedding and vector storage. We can use RecursiveCharacterTextSplitter with the language specified.from langchain.text_splitter import RecursiveCharacterTextSplitterpython_splitter = RecursiveCharacterTextSplitter.from_language(language=Language.PYTHON, chunk_size=2000, chunk_overlap=200)texts = python_splitter.split_documents(documents)len(texts) 3748RetrievalQA: We need to store the documents in a way we can semantically search for their content. The most common approach is to embed the contents of each document and then store the embedding and document in a vector store. 
When setting up the vectorstore retriever, we test max marginal relevance (MMR) for retrieval and return 8 documents. Go deeper: Browse the > 40 vectorstore integrations here. See further documentation on vectorstores here. Browse the > 30 text embedding integrations here. See further documentation on embedding models here.from langchain.vectorstores import Chromafrom langchain.embeddings.openai import OpenAIEmbeddingsdb = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=()))retriever = db.as_retriever( search_type="mmr", # Also test "similarity" search_kwargs={"k": 8},)Chat: Test chat, just as we do for |
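As a quick sanity check of the MMR retriever configured above, it can be queried directly before any chain is built. This is a sketch rather than a cell from the notebook, and the question is illustrative.

```python
# Ask the retriever for relevant code chunks and see which source files they come from.
question = "How is a ConversationalRetrievalChain constructed?"
relevant_docs = retriever.get_relevant_documents(question)
for d in relevant_docs:
    print(d.metadata.get("source"))
```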
1,378 | 8},)Chat‚ÄãTest chat, just as we do for chatbots.Go deeper‚ÄãBrowse the > 55 LLM and chat model integrations here.See further documentation on LLMs and chat models here.Use local LLMS: The popularity of PrivateGPT and GPT4All underscore the importance of running LLMs locally.from langchain.chat_models import ChatOpenAIfrom langchain.memory import ConversationSummaryMemoryfrom langchain.chains import ConversationalRetrievalChainllm = ChatOpenAI(model_name="gpt-4") memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)question = "How can I initialize a ReAct agent?"result = qa(question)result['answer'] 'To initialize a ReAct agent, you need to follow these steps:\n\n1. Initialize a language model `llm` of type `BaseLanguageModel`.\n\n2. Initialize a document store `docstore` of type `Docstore`.\n\n3. Create a `DocstoreExplorer` with the initialized `docstore`. The `DocstoreExplorer` is used to search for and look up terms in the document store.\n\n4. Create an array of `Tool` objects. The `Tool` objects represent the actions that the agent can perform. In the case of `ReActDocstoreAgent`, the tools must be "Search" and "Lookup" with their corresponding functions from the `DocstoreExplorer`.\n\n5. Initialize the `ReActDocstoreAgent` using the `from_llm_and_tools` method with the `llm` (language model) and `tools` as parameters.\n\n6. Initialize the `ReActChain` (which is the `AgentExecutor`) using the `ReActDocstoreAgent` and `tools` as parameters.\n\nHere is an example of how to do this:\n\n```python\nfrom langchain.chains import ReActChain, OpenAI\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.tools.base import BaseTool\n\n# Initialize the LLM and a docstore\nllm = OpenAI()\ndocstore = Docstore()\n\ndocstore_explorer = DocstoreExplorer(docstore)\ntools = [\n Tool(\n | Open In Collab | Open In Collab ->: 8},)Chat‚ÄãTest chat, just as we do for chatbots.Go deeper‚ÄãBrowse the > 55 LLM and chat model integrations here.See further documentation on LLMs and chat models here.Use local LLMS: The popularity of PrivateGPT and GPT4All underscore the importance of running LLMs locally.from langchain.chat_models import ChatOpenAIfrom langchain.memory import ConversationSummaryMemoryfrom langchain.chains import ConversationalRetrievalChainllm = ChatOpenAI(model_name="gpt-4") memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)question = "How can I initialize a ReAct agent?"result = qa(question)result['answer'] 'To initialize a ReAct agent, you need to follow these steps:\n\n1. Initialize a language model `llm` of type `BaseLanguageModel`.\n\n2. Initialize a document store `docstore` of type `Docstore`.\n\n3. Create a `DocstoreExplorer` with the initialized `docstore`. The `DocstoreExplorer` is used to search for and look up terms in the document store.\n\n4. Create an array of `Tool` objects. The `Tool` objects represent the actions that the agent can perform. In the case of `ReActDocstoreAgent`, the tools must be "Search" and "Lookup" with their corresponding functions from the `DocstoreExplorer`.\n\n5. Initialize the `ReActDocstoreAgent` using the `from_llm_and_tools` method with the `llm` (language model) and `tools` as parameters.\n\n6. 
Initialize the `ReActChain` (which is the `AgentExecutor`) using the `ReActDocstoreAgent` and `tools` as parameters.\n\nHere is an example of how to do this:\n\n```python\nfrom langchain.chains import ReActChain, OpenAI\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.tools.base import BaseTool\n\n# Initialize the LLM and a docstore\nllm = OpenAI()\ndocstore = Docstore()\n\ndocstore_explorer = DocstoreExplorer(docstore)\ntools = [\n Tool(\n |
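Because the chain carries a ConversationSummaryMemory, follow-up questions can lean on earlier turns. A minimal sketch, with a follow-up question and an inspection step that are not in the original notebook:

```python
# Ask a follow-up that only makes sense given the previous answer about ReAct agents.
follow_up = "How would I add a third tool to that agent?"
result = qa(follow_up)
print(result["answer"])

# The memory keeps a running summary of the dialogue and injects it into each call.
print(memory.load_memory_variables({})["chat_history"])
```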
1,379 | = [\n Tool(\n name="Search",\n func=docstore_explorer.search,\n description="Search for a term in the docstore.",\n ),\n Tool(\n name="Lookup",\n func=docstore_explorer.lookup,\n description="Lookup a term in the docstore.",\n ),\n]\nagent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\nreact = ReActChain(agent=agent, tools=tools)\n```\n\nKeep in mind that this is a simplified example and you might need to adapt it to your specific needs.'questions = [ "What is the class hierarchy?", "What classes are derived from the Chain class?", "What one improvement do you propose in code in relation to the class hierarchy for the Chain class?",]for question in questions: result = qa(question) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What is the class hierarchy? **Answer**: The class hierarchy in object-oriented programming is the structure that forms when classes are derived from other classes. The derived class is a subclass of the base class also known as the superclass. This hierarchy is formed based on the concept of inheritance in object-oriented programming where a subclass inherits the properties and functionalities of the superclass. In the given context, we have the following examples of class hierarchies: 1. `BaseCallbackHandler --> <name>CallbackHandler` means `BaseCallbackHandler` is a base class and `<name>CallbackHandler` (like `AimCallbackHandler`, `ArgillaCallbackHandler` etc.) are derived classes that inherit from `BaseCallbackHandler`. 2. `BaseLoader --> <name>Loader` means `BaseLoader` is a base class and `<name>Loader` (like `TextLoader`, `UnstructuredFileLoader` etc.) are derived classes that inherit from `BaseLoader`. 3. `ToolMetaclass --> BaseTool --> <name>Tool` means `ToolMetaclass` is a base class, `BaseTool` is a derived class that inherits from `ToolMetaclass`, and | Open In Collab | Open In Collab ->: = [\n Tool(\n name="Search",\n func=docstore_explorer.search,\n description="Search for a term in the docstore.",\n ),\n Tool(\n name="Lookup",\n func=docstore_explorer.lookup,\n description="Lookup a term in the docstore.",\n ),\n]\nagent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\nreact = ReActChain(agent=agent, tools=tools)\n```\n\nKeep in mind that this is a simplified example and you might need to adapt it to your specific needs.'questions = [ "What is the class hierarchy?", "What classes are derived from the Chain class?", "What one improvement do you propose in code in relation to the class hierarchy for the Chain class?",]for question in questions: result = qa(question) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What is the class hierarchy? **Answer**: The class hierarchy in object-oriented programming is the structure that forms when classes are derived from other classes. The derived class is a subclass of the base class also known as the superclass. This hierarchy is formed based on the concept of inheritance in object-oriented programming where a subclass inherits the properties and functionalities of the superclass. In the given context, we have the following examples of class hierarchies: 1. `BaseCallbackHandler --> <name>CallbackHandler` means `BaseCallbackHandler` is a base class and `<name>CallbackHandler` (like `AimCallbackHandler`, `ArgillaCallbackHandler` etc.) are derived classes that inherit from `BaseCallbackHandler`. 2. 
`BaseLoader --> <name>Loader` means `BaseLoader` is a base class and `<name>Loader` (like `TextLoader`, `UnstructuredFileLoader` etc.) are derived classes that inherit from `BaseLoader`. 3. `ToolMetaclass --> BaseTool --> <name>Tool` means `ToolMetaclass` is a base class, `BaseTool` is a derived class that inherits from `ToolMetaclass`, and |
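To check which files each answer was grounded in without an external tracing tool, the chain can be rebuilt to return its source documents. This is a sketch rather than part of the original walkthrough; note that when memory and return_source_documents are combined, the memory needs an explicit output_key.

```python
from langchain.memory import ConversationSummaryMemory
from langchain.chains import ConversationalRetrievalChain

memory_with_sources = ConversationSummaryMemory(
    llm=llm, memory_key="chat_history", return_messages=True, output_key="answer"
)
qa_with_sources = ConversationalRetrievalChain.from_llm(
    llm, retriever=retriever, memory=memory_with_sources, return_source_documents=True
)

out = qa_with_sources("What classes are derived from the Chain class?")
print(out["answer"])
# Paths of the code chunks the answer was distilled from.
print(sorted({d.metadata["source"] for d in out["source_documents"]}))
```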
1,380 | class that inherits from `ToolMetaclass`, and `<name>Tool` (like `AIPluginTool`, `BaseGraphQLTool` etc.) are further derived classes that inherit from `BaseTool`. -> **Question**: What classes are derived from the Chain class? **Answer**: The classes that are derived from the Chain class are: 1. LLMSummarizationCheckerChain 2. MapReduceChain 3. OpenAIModerationChain 4. NatBotChain 5. QAGenerationChain 6. QAWithSourcesChain 7. RetrievalQAWithSourcesChain 8. VectorDBQAWithSourcesChain 9. RetrievalQA 10. VectorDBQA 11. LLMRouterChain 12. MultiPromptChain 13. MultiRetrievalQAChain 14. MultiRouteChain 15. RouterChain 16. SequentialChain 17. SimpleSequentialChain 18. TransformChain 19. BaseConversationalRetrievalChain 20. ConstitutionalChain -> **Question**: What one improvement do you propose in code in relation to the class hierarchy for the Chain class? **Answer**: As an AI model, I don't have personal opinions. However, one suggestion could be to improve the documentation of the Chain class hierarchy. The current comments and docstrings provide some details but it could be helpful to include more explicit explanations about the hierarchy, roles of each subclass, and their relationships with one another. Also, incorporating UML diagrams or other visuals could help developers better understand the structure and interactions of the classes. The can look at the LangSmith trace to see what is happening under the hood:In particular, the code well structured and kept together in the retrieval outputThe retrieved code and chat history are passed to the LLM for answer distillationOpen source LLMs‚ÄãWe can use Code LLaMA via LLamaCPP or Ollama integration.Note: be sure to upgrade llama-cpp-python in order to use the new gguf file format.CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama2/bin/pip install -U llama-cpp-python --no-cache-dirCheck out the | Open In Collab | Open In Collab ->: class that inherits from `ToolMetaclass`, and `<name>Tool` (like `AIPluginTool`, `BaseGraphQLTool` etc.) are further derived classes that inherit from `BaseTool`. -> **Question**: What classes are derived from the Chain class? **Answer**: The classes that are derived from the Chain class are: 1. LLMSummarizationCheckerChain 2. MapReduceChain 3. OpenAIModerationChain 4. NatBotChain 5. QAGenerationChain 6. QAWithSourcesChain 7. RetrievalQAWithSourcesChain 8. VectorDBQAWithSourcesChain 9. RetrievalQA 10. VectorDBQA 11. LLMRouterChain 12. MultiPromptChain 13. MultiRetrievalQAChain 14. MultiRouteChain 15. RouterChain 16. SequentialChain 17. SimpleSequentialChain 18. TransformChain 19. BaseConversationalRetrievalChain 20. ConstitutionalChain -> **Question**: What one improvement do you propose in code in relation to the class hierarchy for the Chain class? **Answer**: As an AI model, I don't have personal opinions. However, one suggestion could be to improve the documentation of the Chain class hierarchy. The current comments and docstrings provide some details but it could be helpful to include more explicit explanations about the hierarchy, roles of each subclass, and their relationships with one another. Also, incorporating UML diagrams or other visuals could help developers better understand the structure and interactions of the classes. 
We can look at the LangSmith trace to see what is happening under the hood: in particular, the code is well structured and kept together in the retrieval output, and the retrieved code and chat history are passed to the LLM for answer distillation.Open source LLMs: We can use Code Llama via the LlamaCpp or Ollama integrations.Note: be sure to upgrade llama-cpp-python in order to use the new gguf file format.CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama2/bin/pip install -U llama-cpp-python --no-cache-dirCheck out the
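The Ollama route mentioned above is the quicker of the two to try if an Ollama server is already running locally. The sketch below is not part of the original notebook: the model tag is an assumption (use whatever you have pulled with `ollama pull`), and it simply reuses the retriever built earlier.

```python
from langchain.llms import Ollama
from langchain.memory import ConversationSummaryMemory
from langchain.chains import ConversationalRetrievalChain

# Assumes a local Ollama server and a pulled Code Llama model, e.g.:
#   ollama pull codellama:13b-instruct
llm = Ollama(model="codellama:13b-instruct")

memory = ConversationSummaryMemory(llm=llm, memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)
print(qa("How can I initialize a ReAct agent?")["answer"])
```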
1,381 | -U llama-cpp-python --no-cache-dirCheck out the latest code-llama models here.from langchain.llms import LlamaCppfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.callbacks.manager import CallbackManagerfrom langchain.memory import ConversationSummaryMemoryfrom langchain.chains import ConversationalRetrievalChain from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlercallback_manager = CallbackManager([StreamingStdOutCallbackHandler()])llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf", n_ctx=5000, n_gpu_layers=1, n_batch=512, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,) llama_model_loader: loaded meta data with 17 key-value pairs and 363 tensors from /Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf (version GGUF V1 (latest)) llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 5120, 32016, 1, 1 ] llama_model_loader: - tensor 1: output_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 2: output.weight f16 [ 5120, 32016, 1, 1 ] llama_model_loader: - tensor 3: blk.0.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 4: blk.0.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 5: blk.0.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 6: blk.0.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 7: blk.0.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 8: blk.0.ffn_down.weight q6_K [ 13824, 5120, | Open In Collab | Open In Collab ->: -U llama-cpp-python --no-cache-dirCheck out the latest code-llama models here.from langchain.llms import LlamaCppfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.callbacks.manager import CallbackManagerfrom langchain.memory import ConversationSummaryMemoryfrom langchain.chains import ConversationalRetrievalChain from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlercallback_manager = CallbackManager([StreamingStdOutCallbackHandler()])llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf", n_ctx=5000, n_gpu_layers=1, n_batch=512, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,) llama_model_loader: loaded meta data with 17 key-value pairs and 363 tensors from /Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf (version GGUF V1 (latest)) llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 5120, 32016, 1, 1 ] llama_model_loader: - tensor 1: output_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 2: output.weight f16 [ 5120, 32016, 1, 1 ] llama_model_loader: - tensor 3: blk.0.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 4: blk.0.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 5: blk.0.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 6: blk.0.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 7: blk.0.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 8: blk.0.ffn_down.weight q6_K [ 13824, 5120, |
1,382 | q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 9: blk.0.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 10: blk.0.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 11: blk.0.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 12: blk.1.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 13: blk.1.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 14: blk.1.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 15: blk.1.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 16: blk.1.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 17: blk.1.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 18: blk.1.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 19: blk.1.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 20: blk.1.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 21: blk.2.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 22: blk.2.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 23: blk.2.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 24: blk.2.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 25: blk.2.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 26: blk.2.ffn_down.weight | Open In Collab | Open In Collab ->: q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 9: blk.0.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 10: blk.0.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 11: blk.0.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 12: blk.1.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 13: blk.1.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 14: blk.1.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 15: blk.1.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 16: blk.1.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 17: blk.1.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 18: blk.1.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 19: blk.1.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 20: blk.1.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 21: blk.2.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 22: blk.2.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 23: blk.2.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 24: blk.2.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 25: blk.2.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 26: blk.2.ffn_down.weight |
1,383 | - tensor 26: blk.2.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 27: blk.2.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 28: blk.2.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 29: blk.2.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 30: blk.3.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 31: blk.3.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 32: blk.3.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 33: blk.3.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 34: blk.3.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 35: blk.3.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 36: blk.3.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 37: blk.3.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 38: blk.3.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 39: blk.4.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 40: blk.4.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 41: blk.4.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 42: blk.4.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 43: blk.4.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - | Open In Collab | Open In Collab ->: - tensor 26: blk.2.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 27: blk.2.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 28: blk.2.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 29: blk.2.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 30: blk.3.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 31: blk.3.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 32: blk.3.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 33: blk.3.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 34: blk.3.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 35: blk.3.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 36: blk.3.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 37: blk.3.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 38: blk.3.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 39: blk.4.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 40: blk.4.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 41: blk.4.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 42: blk.4.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 43: blk.4.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - |
1,384 | 13824, 1, 1 ] llama_model_loader: - tensor 44: blk.4.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 45: blk.4.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 46: blk.4.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 47: blk.4.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 48: blk.5.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 49: blk.5.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 50: blk.5.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 51: blk.5.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 52: blk.5.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 53: blk.5.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 54: blk.5.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 55: blk.5.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 56: blk.5.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 57: blk.6.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 58: blk.6.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 59: blk.6.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 60: blk.6.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 61: blk.6.ffn_gate.weight q4_K [ 5120, | Open In Collab | Open In Collab ->: 13824, 1, 1 ] llama_model_loader: - tensor 44: blk.4.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 45: blk.4.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 46: blk.4.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 47: blk.4.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 48: blk.5.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 49: blk.5.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 50: blk.5.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 51: blk.5.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 52: blk.5.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 53: blk.5.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 54: blk.5.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 55: blk.5.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 56: blk.5.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 57: blk.6.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 58: blk.6.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 59: blk.6.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 60: blk.6.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 61: blk.6.ffn_gate.weight q4_K [ 5120, |
1,385 | blk.6.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 62: blk.6.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 63: blk.6.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 64: blk.6.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 65: blk.6.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 66: blk.7.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 67: blk.7.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 68: blk.7.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 69: blk.7.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 70: blk.7.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 71: blk.7.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 72: blk.7.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 73: blk.7.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 74: blk.7.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 75: blk.8.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 76: blk.8.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 77: blk.8.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 78: blk.8.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 79: | Open In Collab | Open In Collab ->: blk.6.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 62: blk.6.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 63: blk.6.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 64: blk.6.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 65: blk.6.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 66: blk.7.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 67: blk.7.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 68: blk.7.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 69: blk.7.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 70: blk.7.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 71: blk.7.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 72: blk.7.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 73: blk.7.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 74: blk.7.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 75: blk.8.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 76: blk.8.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 77: blk.8.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 78: blk.8.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 79: |
1,386 | 1 ] llama_model_loader: - tensor 79: blk.8.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 80: blk.8.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 81: blk.8.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 82: blk.8.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 83: blk.8.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 84: blk.9.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 85: blk.9.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 86: blk.9.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 87: blk.9.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 88: blk.9.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 89: blk.9.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 90: blk.9.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 91: blk.9.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 92: blk.9.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 93: blk.10.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 94: blk.10.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 95: blk.10.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 96: blk.10.attn_output.weight q4_K [ 5120, 5120, 1, | Open In Collab | Open In Collab ->: 1 ] llama_model_loader: - tensor 79: blk.8.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 80: blk.8.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 81: blk.8.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 82: blk.8.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 83: blk.8.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 84: blk.9.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 85: blk.9.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 86: blk.9.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 87: blk.9.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 88: blk.9.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 89: blk.9.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 90: blk.9.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 91: blk.9.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 92: blk.9.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 93: blk.10.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 94: blk.10.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 95: blk.10.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 96: blk.10.attn_output.weight q4_K [ 5120, 5120, 1, |
1,387 | q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 97: blk.10.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 98: blk.10.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 99: blk.10.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 100: blk.10.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 101: blk.10.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 102: blk.11.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 103: blk.11.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 104: blk.11.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 105: blk.11.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 106: blk.11.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 107: blk.11.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 108: blk.11.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 109: blk.11.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 110: blk.11.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 111: blk.12.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 112: blk.12.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 113: blk.12.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 114: blk.12.attn_output.weight | Open In Collab | Open In Collab ->: q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 97: blk.10.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 98: blk.10.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 99: blk.10.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 100: blk.10.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 101: blk.10.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 102: blk.11.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 103: blk.11.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 104: blk.11.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 105: blk.11.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 106: blk.11.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 107: blk.11.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 108: blk.11.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 109: blk.11.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 110: blk.11.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 111: blk.12.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 112: blk.12.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 113: blk.12.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 114: blk.12.attn_output.weight |
1,388 | - tensor 114: blk.12.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 115: blk.12.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 116: blk.12.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 117: blk.12.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 118: blk.12.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 119: blk.12.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 120: blk.13.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 121: blk.13.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 122: blk.13.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 123: blk.13.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 124: blk.13.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 125: blk.13.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 126: blk.13.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 127: blk.13.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 128: blk.13.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 129: blk.14.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 130: blk.14.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 131: blk.14.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - | Open In Collab | Open In Collab ->: - tensor 114: blk.12.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 115: blk.12.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 116: blk.12.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 117: blk.12.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 118: blk.12.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 119: blk.12.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 120: blk.13.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 121: blk.13.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 122: blk.13.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 123: blk.13.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 124: blk.13.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 125: blk.13.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 126: blk.13.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 127: blk.13.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 128: blk.13.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 129: blk.14.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 130: blk.14.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 131: blk.14.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - |
1,389 | 5120, 1, 1 ] llama_model_loader: - tensor 132: blk.14.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 133: blk.14.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 134: blk.14.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 135: blk.14.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 136: blk.14.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 137: blk.14.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 138: blk.15.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 139: blk.15.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 140: blk.15.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 141: blk.15.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 142: blk.15.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 143: blk.15.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 144: blk.15.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 145: blk.15.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 146: blk.15.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 147: blk.16.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 148: blk.16.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 149: blk.16.attn_v.weight q6_K [ 5120, | Open In Collab | Open In Collab ->: 5120, 1, 1 ] llama_model_loader: - tensor 132: blk.14.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 133: blk.14.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 134: blk.14.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 135: blk.14.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 136: blk.14.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 137: blk.14.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 138: blk.15.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 139: blk.15.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 140: blk.15.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 141: blk.15.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 142: blk.15.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 143: blk.15.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 144: blk.15.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 145: blk.15.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 146: blk.15.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 147: blk.16.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 148: blk.16.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 149: blk.16.attn_v.weight q6_K [ 5120, |
1,390 | blk.16.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 150: blk.16.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 151: blk.16.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 152: blk.16.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 153: blk.16.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 154: blk.16.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 155: blk.16.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 156: blk.17.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 157: blk.17.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 158: blk.17.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 159: blk.17.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 160: blk.17.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 161: blk.17.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 162: blk.17.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 163: blk.17.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 164: blk.17.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 165: blk.18.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 166: blk.18.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 167: | Open In Collab | Open In Collab ->: blk.16.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 150: blk.16.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 151: blk.16.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 152: blk.16.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 153: blk.16.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 154: blk.16.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 155: blk.16.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 156: blk.17.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 157: blk.17.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 158: blk.17.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 159: blk.17.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 160: blk.17.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 161: blk.17.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 162: blk.17.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 163: blk.17.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 164: blk.17.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 165: blk.18.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 166: blk.18.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 167: |
1,391 | 1 ] llama_model_loader: - tensor 167: blk.18.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 168: blk.18.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 169: blk.18.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 170: blk.18.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 171: blk.18.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 172: blk.18.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 173: blk.18.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 174: blk.19.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 175: blk.19.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 176: blk.19.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 177: blk.19.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 178: blk.19.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 179: blk.19.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 180: blk.19.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 181: blk.19.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 182: blk.19.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 183: blk.20.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 184: blk.20.attn_k.weight q4_K [ 5120, 5120, 1, | Open In Collab | Open In Collab ->: 1 ] llama_model_loader: - tensor 167: blk.18.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 168: blk.18.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 169: blk.18.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 170: blk.18.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 171: blk.18.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 172: blk.18.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 173: blk.18.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 174: blk.19.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 175: blk.19.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 176: blk.19.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 177: blk.19.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 178: blk.19.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 179: blk.19.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 180: blk.19.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 181: blk.19.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 182: blk.19.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 183: blk.20.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 184: blk.20.attn_k.weight q4_K [ 5120, 5120, 1, |
1,392 | q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 185: blk.20.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 186: blk.20.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 187: blk.20.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 188: blk.20.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 189: blk.20.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 190: blk.20.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 191: blk.20.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 192: blk.21.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 193: blk.21.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 194: blk.21.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 195: blk.21.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 196: blk.21.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 197: blk.21.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 198: blk.21.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 199: blk.21.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 200: blk.21.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 201: blk.22.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 202: blk.22.attn_k.weight | Open In Collab | Open In Collab ->: q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 185: blk.20.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 186: blk.20.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 187: blk.20.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 188: blk.20.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 189: blk.20.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 190: blk.20.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 191: blk.20.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 192: blk.21.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 193: blk.21.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 194: blk.21.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 195: blk.21.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 196: blk.21.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 197: blk.21.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 198: blk.21.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 199: blk.21.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 200: blk.21.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 201: blk.22.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 202: blk.22.attn_k.weight |
1,393 | - tensor 202: blk.22.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 203: blk.22.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 204: blk.22.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 205: blk.22.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 206: blk.22.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 207: blk.22.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 208: blk.22.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 209: blk.22.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 210: blk.23.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 211: blk.23.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 212: blk.23.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 213: blk.23.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 214: blk.23.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 215: blk.23.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 216: blk.23.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 217: blk.23.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 218: blk.23.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 219: blk.24.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - | Open In Collab | Open In Collab ->: - tensor 202: blk.22.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 203: blk.22.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 204: blk.22.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 205: blk.22.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 206: blk.22.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 207: blk.22.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 208: blk.22.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 209: blk.22.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 210: blk.23.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 211: blk.23.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 212: blk.23.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 213: blk.23.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 214: blk.23.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 215: blk.23.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 216: blk.23.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 217: blk.23.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 218: blk.23.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 219: blk.24.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - |
1,394 | 5120, 1, 1 ] llama_model_loader: - tensor 220: blk.24.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 221: blk.24.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 222: blk.24.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 223: blk.24.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 224: blk.24.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 225: blk.24.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 226: blk.24.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 227: blk.24.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 228: blk.25.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 229: blk.25.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 230: blk.25.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 231: blk.25.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 232: blk.25.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 233: blk.25.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 234: blk.25.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 235: blk.25.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 236: blk.25.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 237: blk.26.attn_q.weight q4_K [ 5120, | Open In Collab | Open In Collab ->: 5120, 1, 1 ] llama_model_loader: - tensor 220: blk.24.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 221: blk.24.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 222: blk.24.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 223: blk.24.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 224: blk.24.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 225: blk.24.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 226: blk.24.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 227: blk.24.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 228: blk.25.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 229: blk.25.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 230: blk.25.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 231: blk.25.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 232: blk.25.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 233: blk.25.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 234: blk.25.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 235: blk.25.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 236: blk.25.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 237: blk.26.attn_q.weight q4_K [ 5120, |
1,395 | blk.26.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 238: blk.26.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 239: blk.26.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 240: blk.26.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 241: blk.26.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 242: blk.26.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 243: blk.26.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 244: blk.26.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 245: blk.26.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 246: blk.27.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 247: blk.27.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 248: blk.27.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 249: blk.27.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 250: blk.27.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 251: blk.27.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 252: blk.27.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 253: blk.27.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 254: blk.27.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 255: | Open In Collab | Open In Collab ->: blk.26.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 238: blk.26.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 239: blk.26.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 240: blk.26.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 241: blk.26.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 242: blk.26.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 243: blk.26.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 244: blk.26.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 245: blk.26.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 246: blk.27.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 247: blk.27.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 248: blk.27.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 249: blk.27.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 250: blk.27.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 251: blk.27.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 252: blk.27.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 253: blk.27.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 254: blk.27.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 255: |
1,396 | 1 ] llama_model_loader: - tensor 255: blk.28.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 256: blk.28.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 257: blk.28.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 258: blk.28.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 259: blk.28.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 260: blk.28.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 261: blk.28.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 262: blk.28.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 263: blk.28.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 264: blk.29.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 265: blk.29.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 266: blk.29.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 267: blk.29.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 268: blk.29.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 269: blk.29.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 270: blk.29.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 271: blk.29.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 272: blk.29.ffn_norm.weight f32 [ 5120, 1, 1, | Open In Collab | Open In Collab ->: 1 ] llama_model_loader: - tensor 255: blk.28.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 256: blk.28.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 257: blk.28.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 258: blk.28.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 259: blk.28.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 260: blk.28.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 261: blk.28.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 262: blk.28.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 263: blk.28.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 264: blk.29.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 265: blk.29.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 266: blk.29.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 267: blk.29.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 268: blk.29.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 269: blk.29.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 270: blk.29.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 271: blk.29.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 272: blk.29.ffn_norm.weight f32 [ 5120, 1, 1, |
1,397 | f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 273: blk.30.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 274: blk.30.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 275: blk.30.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 276: blk.30.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 277: blk.30.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 278: blk.30.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 279: blk.30.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 280: blk.30.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 281: blk.30.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 282: blk.31.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 283: blk.31.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 284: blk.31.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 285: blk.31.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 286: blk.31.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 287: blk.31.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 288: blk.31.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 289: blk.31.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 290: blk.31.ffn_norm.weight | Open In Collab | Open In Collab ->: f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 273: blk.30.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 274: blk.30.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 275: blk.30.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 276: blk.30.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 277: blk.30.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 278: blk.30.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 279: blk.30.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 280: blk.30.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 281: blk.30.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 282: blk.31.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 283: blk.31.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 284: blk.31.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 285: blk.31.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 286: blk.31.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 287: blk.31.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 288: blk.31.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 289: blk.31.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 290: blk.31.ffn_norm.weight |
1,398 | - tensor 290: blk.31.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 291: blk.32.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 292: blk.32.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 293: blk.32.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 294: blk.32.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 295: blk.32.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 296: blk.32.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 297: blk.32.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 298: blk.32.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 299: blk.32.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 300: blk.33.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 301: blk.33.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 302: blk.33.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 303: blk.33.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 304: blk.33.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 305: blk.33.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 306: blk.33.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 307: blk.33.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - | Open In Collab | Open In Collab ->: - tensor 290: blk.31.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 291: blk.32.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 292: blk.32.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 293: blk.32.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 294: blk.32.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 295: blk.32.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 296: blk.32.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 297: blk.32.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 298: blk.32.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 299: blk.32.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 300: blk.33.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 301: blk.33.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 302: blk.33.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 303: blk.33.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 304: blk.33.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 305: blk.33.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 306: blk.33.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 307: blk.33.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - |
1,399 | 1, 1, 1 ] llama_model_loader: - tensor 308: blk.33.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 309: blk.34.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 310: blk.34.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 311: blk.34.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 312: blk.34.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 313: blk.34.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 314: blk.34.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 315: blk.34.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 316: blk.34.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 317: blk.34.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 318: blk.35.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 319: blk.35.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 320: blk.35.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 321: blk.35.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 322: blk.35.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 323: blk.35.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 324: blk.35.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 325: blk.35.attn_norm.weight f32 [ 5120, | Open In Collab | Open In Collab ->: 1, 1, 1 ] llama_model_loader: - tensor 308: blk.33.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 309: blk.34.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 310: blk.34.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 311: blk.34.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 312: blk.34.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 313: blk.34.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 314: blk.34.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 315: blk.34.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 316: blk.34.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 317: blk.34.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 318: blk.35.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 319: blk.35.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 320: blk.35.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 321: blk.35.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 322: blk.35.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 323: blk.35.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 324: blk.35.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 325: blk.35.attn_norm.weight f32 [ 5120, |